Sunday, December 9, 2012

How to correctly take square root in programming

If we simply call the sqrt() function to take a square root, we will very often be caught by surprise.

The first common mistake is failing to check for negativity before taking the square root. For example, suppose y was computed as sqrt(2.0), and we then evaluate both sqrt(y*y - 2.0) and sqrt(2.0 - y*y). Instead of getting 0.0 in both cases, one of the cases will yield "-nan". This is because y is not exactly \(\sqrt{2}\), but only within machine precision of \(\sqrt{2}\), so one of the two arguments is a tiny negative number. Therefore, we should always check for negativity before taking a square root.

You may notice that, in one of the above two cases, even when the answer is not "-nan", the accuracy is quite poor: the result is only within \(10^{-8}\) of zero, rather than within machine precision of zero. This is because although \(y^2 - 2\) is within machine precision of zero (about \(10^{-16}\)), taking the square root leaves you with only the square root of machine precision. This is another common mistake.

So how do we take square roots accurately and error-free? Suppose you want to compute \(x = \sqrt{2 - y^2}\); you can do something like the following.