Why am I losing precision while converting float32 to float64?

This helped me understand @FaceyMcFaceFace's answer:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        var a float32 = math.Pi
        fmt.Println(a)
        fmt.Println(float64(a))
        fmt.Println(float64(math.Pi))
    }

Output:

    3.1415927
    3.1415927410125732
    3.141592653589793

https://play.golang.org/p/-bAQLqjlLG


You never lose precision when converting from a float (float32) to a double (float64): every float32 value is exactly representable as a float64, so the set of float32 values is a subset of the set of float64 values.
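
A quick way to convince yourself of this is to round-trip a few float32 values through float64 and check that nothing changes (a minimal sketch; the sample values are arbitrary):

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        // Widening to float64 and narrowing back must reproduce the
        // original float32 exactly (NaN aside, since NaN never compares
        // equal to itself).
        for _, f := range []float32{math.Pi, 359.9, math.MaxFloat32, math.SmallestNonzeroFloat32} {
            back := float32(float64(f))
            fmt.Println(f == back) // prints: true
        }
    }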

It's more to do with the default precision of the output formatter: by default, Go prints the fewest decimal digits needed to uniquely identify the value, not its exact decimal expansion.
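
For example, with the 359.9 value discussed below, asking the formatter for more digits reveals the exact stored value (a small sketch; the %.15f width is chosen because the exact expansion happens to terminate after 15 decimal places):

    package main

    import "fmt"

    func main() {
        a := float32(359.9)
        b := float64(a) // exact conversion: b is 359.899993896484375

        fmt.Println(b)           // 359.8999938964844 (fewest digits that round-trip)
        fmt.Printf("%.15f\n", b) // 359.899993896484375 (the exact stored value)
    }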

The nearest IEEE 754 float32 to 359.9 is

    359.899993896484375

The nearest IEEE 754 float64 to 359.9 is

    359.8999999999999772626324556767940521240234375

The nearest IEEE 754 float64 to 359.899993896484375 is

    359.899993896484375

(i.e. the same value, because of the subset relationship mentioned above).
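
You can reproduce these exact decimal expansions in Go with math/big, which stores a float64 input exactly (a sketch; 45 fractional digits is just an arbitrary width large enough to show the full expansion):

    package main

    import (
        "fmt"
        "math/big"
    )

    func main() {
        f32 := float32(359.9)
        f64 := 359.9

        // A big.Float created from a float64 holds it exactly, so Text('f', 45)
        // prints the true binary value, padded with trailing zeros.
        fmt.Println(new(big.Float).SetFloat64(float64(f32)).Text('f', 45))
        fmt.Println(new(big.Float).SetFloat64(f64).Text('f', 45))
    }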

So, with a = float32(359.9) as in your question, float64(a) is the same as float64(359.899993896484375), which is exactly 359.899993896484375. That explains your output, although the formatter also rounds away the final two digits.
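
This also explains the first two lines of the Pi output above: fmt knows the value's type, so for a float32 it prints the fewest digits that round-trip at 32 bits, and for a float64 the fewest that round-trip at 64 bits. strconv.FormatFloat exposes the same choice through its bitSize argument (a sketch, again using 359.9):

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        b := float64(float32(359.9)) // exactly 359.899993896484375

        // Fewest digits that read back to the same float32.
        fmt.Println(strconv.FormatFloat(b, 'g', -1, 32)) // 359.9
        // Fewest digits that read back to the same float64.
        fmt.Println(strconv.FormatFloat(b, 'g', -1, 64)) // 359.8999938964844
    }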