C# brainteaser: arithmetic

Computers are meant to be good at arithmetic, aren't they? Why does this print "False"?

double d1 = 1.000001;
double d2 = 0.000001;
Console.WriteLine((d1 - d2) == 1.0);

Answer: All the values here are stored as binary floating point. While 1.0 can be stored exactly, 1.000001 is actually stored as 1.0000009999999999177333620536956004798412322998046875, and 0.000001 is actually stored as 0.000000999999999999999954748111825886258685613938723690807819366455078125. The difference between them isn't exactly 1.0, and in fact that difference can't be stored exactly either: the subtraction yields the nearest representable double, which is slightly less than 1.0, so the equality comparison prints False.
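You can inspect the exact stored values yourself. C#'s double is an IEEE 754 binary64 value, and so is Python's float, so the same behavior is reproducible with Python's decimal module (a sketch for verification, not part of the original puzzle; Decimal(float) converts the binary value exactly, with no rounding):

from decimal import Decimal

d1 = 1.000001
d2 = 0.000001

# The exact binary64 value stored for each literal --
# these match the long digit strings quoted in the answer above.
print(Decimal(d1))
print(Decimal(d2))

# The difference is not exactly 1.0: the subtraction rounds to the
# nearest representable double, which sits just below 1.0.
print(d1 - d2 == 1.0)  # False
print(Decimal(d1 - d2))

The same technique works in C# via decimal or by printing with a high-precision format specifier such as "R" or "G17", which round-trips the underlying binary value.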