What numerical algorithm is simplified by defining sqrt(-0.0) as -0.0?

The original floating point standard from 1985 (IEEE Std 754-1985) defines sqrt(-0.0) = -0.0.

The 2008 revision of the same standard added a definition of the pow function. According to this definition, the result of pow(x,y) can have a negative sign only if y is an odd integer. Hence pow(-0.0, 3.0) = -0.0, while pow(-0.0, 0.5) = +0.0. In 2008 it was too late to change the definition of sqrt(-0.0), so we have the unfortunate situation that the two functions give different results for the same input.
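A minimal C sketch, assuming a math library that follows IEEE 754 (C's Annex F), shows the sign difference; signbit reports whether the sign bit is set:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double a = sqrt(-0.0);      /* IEEE 754-1985: sqrt(-0.0) = -0.0 */
        double b = pow(-0.0, 0.5);  /* IEEE 754-2008 pow: +0.0, since 0.5 is not an odd integer */
        double c = pow(-0.0, 3.0);  /* odd integer exponent: the sign is kept, -0.0 */
        printf("sqrt(-0.0)     = %g  signbit=%d\n", a, signbit(a) != 0);
        printf("pow(-0.0, 0.5) = %g  signbit=%d\n", b, signbit(b) != 0);
        printf("pow(-0.0, 3.0) = %g  signbit=%d\n", c, signbit(c) != 0);
        return 0;
    }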

The sign of zero generally doesn't matter, since zero and negative zero compare equal. But it matters when you divide by it: 1/sqrt(-0.0) gives -INF, while pow(-0.0, -0.5) gives +INF.
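Again as a sketch under the same assumption of an IEEE 754-conforming math library, the discrepancy only becomes visible once you divide by the zero:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        printf("-0.0 == 0.0     : %d\n", -0.0 == 0.0);       /* 1: the two zeros compare equal */
        printf("1/sqrt(-0.0)    : %g\n", 1.0 / sqrt(-0.0));  /* -inf */
        printf("pow(-0.0, -0.5) : %g\n", pow(-0.0, -0.5));   /* inf */
        return 0;
    }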

The decision of 1985 was probably just codifying the status quo. The Intel 8087 math coprocessor from 1980 had sqrt implemented in hardware, and it gave sqrt(-0.0) = -0.0. Today, all PC processors have sqrt implemented in hardware, so it would be very difficult to change the standard. The problem is not important enough to justify two different sqrt functions that differ only for negative zero. I don't know anything about the history prior to 1980. If anybody can trace the history further back, please post a comment here.