Why does casting Double.NaN to int not throw an exception in Java?

What is the rationale for not throwing exceptions in these cases? Is this an IEEE standard, or was it merely a choice by the designers of Java?

The IEEE 754-1985 Standard, on pages 20 and 21 under sections 2.2.1 NaNs and 2.2.2 Infinity, clearly explains why NaN and Infinity values are required by the standard. So this is not a Java thing.

The Java Virtual Machine Specification, in section 3.8.1 Floating Point Arithmetic and IEEE 754, states that conversions to integral types use IEEE 754 round-toward-zero mode, which explains the results you are seeing.
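
A quick sketch of that round-toward-zero behavior (my own illustration, not from the spec; the printed values follow directly from the rule):

    // Casting a double to int truncates toward zero; no exception is thrown.
    public class RoundTowardZero {
        public static void main(String[] args) {
            System.out.println((int) 2.9);        // 2, not 3
            System.out.println((int) -2.9);       // -2, not -3
            System.out.println((int) Double.NaN); // 0, silently
        }
    }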

The standard does mention a feature named "trap handler" that could be used to detect when overflow or an invalid operation (such as one producing NaN) occurs, but the Java Virtual Machine Specification clearly states that this is not implemented for Java. It says in section 3.8.1:

The floating-point operations of the Java virtual machine do not throw exceptions, trap, or otherwise signal the IEEE 754 exceptional conditions of invalid operation, division by zero, overflow, underflow, or inexact. The Java virtual machine has no signaling NaN value.

So the behavior is fully specified, whatever its consequences.
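
The contrast with integer arithmetic makes this concrete (a small example of my own):

    public class NoFloatingPointTraps {
        public static void main(String[] args) {
            // Floating-point operations silently produce special values ...
            System.out.println(1.0 / 0.0); // Infinity
            System.out.println(0.0 / 0.0); // NaN
            // ... while the equivalent integer operation does throw.
            System.out.println(1 / 0);     // ArithmeticException: / by zero
        }
    }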

Are there bad consequences that I am unaware of if exceptions would be possible with such casts?

Understanding the reasons stated in the standard should suffice to answer this question. The standard explains, with exhaustive examples, the consequences you are asking about. I would post them, but that would be too much information here, and the examples would be impossible to format properly in this editing tool.

EDIT

I was reading the latest maintenance review of the Java Virtual Machine Specification, as published recently by the JCP as part of their work on JSR 924, and section 2.11.14, named "Type Conversion Instructions", contains some more information that could help you in your quest for answers. It is not yet what you are looking for, but I believe it helps a bit. It says:

In a narrowing numeric conversion of a floating-point value to an integral type T, where T is either int or long, the floating-point value is converted as follows:

  • If the floating-point value is NaN, the result of the conversion is an int or long 0.
  • Otherwise, if the floating-point value is not an infinity, the floating-point value is rounded to an integer value V using IEEE 754 round towards zero mode.

There are two cases:

  • If T is long and this integer value can be represented as a long, then the result is the long value V.
  • If T is of type int and this integer value can be represented as an int, then the result is the int value V.

Otherwise:

  • Either the value must be too small (a negative value of large magnitude or negative infinity), and the result is the smallest representable value of type int or long.
  • Or the value must be too large (a positive value of large magnitude or positive infinity), and the result is the largest representable value of type int or long.

A narrowing numeric conversion from double to float behaves in accordance with IEEE 754. The result is correctly rounded using IEEE 754 round to nearest mode. A value too small to be represented as a float is converted to a positive or negative zero of type float; a value too large to be represented as a float is converted to a positive or negative infinity. A double NaN is always converted to a float NaN.

Despite the fact that overflow, underflow, or loss of precision may occur, narrowing conversions among numeric types never cause the Java virtual machine to throw a runtime exception (not to be confused with an IEEE 754 floating-point exception).

I know this simply restates what you already know, but it holds a clue: the conversion behavior is defined in terms of the IEEE 754 rounding modes (round towards zero for integral targets, round to nearest for double to float). Perhaps there you can find the reasons for this behavior.
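
If it helps, each of the quoted rules can be checked directly (my own demonstration; the commented results are what the specification mandates):

    public class NarrowingRules {
        public static void main(String[] args) {
            // NaN converts to zero.
            System.out.println((int) Double.NaN);               // 0
            System.out.println((long) Double.NaN);              // 0
            // Values too large or too small clamp to the extremes of the target type.
            System.out.println((int) Double.POSITIVE_INFINITY); // 2147483647 (Integer.MAX_VALUE)
            System.out.println((int) Double.NEGATIVE_INFINITY); // -2147483648 (Integer.MIN_VALUE)
            System.out.println((int) 1e20);                     // 2147483647
            // double to float narrowing keeps NaN and overflows to infinity.
            System.out.println((float) 1e300);                  // Infinity
            System.out.println((float) Double.NaN);             // NaN
        }
    }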

EDIT

The IEEE standard in question states, in section 2.3.2 Rounding Modes:

By default, rounding means round toward the nearest. The standard requires that three other rounding modes be provided; namely, round toward 0, round toward +Infinity and round toward –Infinity.

When used with the convert to integer operation, round toward –Infinity causes the convert to become the floor function, whereas, round toward +Infinity is ceiling.

The rounding mode affects overflow because when round toward 0 or round toward –Infinity is in effect, an overflow of positive magnitude causes the default result to be the largest representable number, not +Infinity.

Similarly, overflows of negative magnitude will produce the largest negative number when round toward +Infinity or round toward 0 is in effect.

Then they proceed to give an example of why this is useful in interval arithmetic. Again, I am not sure this is the answer you are looking for, but it can enrich your search.
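
Java does not let you switch the IEEE rounding mode for float/double arithmetic, but BigDecimal exposes the same four modes the standard describes, which makes the differences easy to see (a sketch for illustration only, not what the JVM does for casts):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class RoundingModes {
        public static void main(String[] args) {
            BigDecimal v = new BigDecimal("-2.7");
            System.out.println(v.setScale(0, RoundingMode.HALF_EVEN)); // -3, round to nearest (IEEE default)
            System.out.println(v.setScale(0, RoundingMode.DOWN));      // -2, round toward 0 (what (int) casts use)
            System.out.println(v.setScale(0, RoundingMode.FLOOR));     // -3, round toward -Infinity
            System.out.println(v.setScale(0, RoundingMode.CEILING));   // -2, round toward +Infinity
        }
    }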


It is in the JLS; see this JavaRanch post: http://www.coderanch.com/t/239753/java-programmer-SCJP/certification/cast-double-int However, a warning would be nice.


There is an ACM presentation from 1998 that still seems surprisingly current and sheds some light: https://people.eecs.berkeley.edu/~wkahan/JAVAhurt.pdf.

More concretely, regarding the surprising lack of exceptions when casting NaNs and infinities: see page 3, point 3: "Infinities and NaNs unleashed without the protection of floating-point traps and flags mandated by IEEE Standards 754/854 belie Java’s claim to robustness."

The presentation doesn't really answer the whys, but it does explain the consequences of the problematic design decisions in the Java language's implementation of floating point, and puts them in the context of the IEEE standards and even other implementations.


What is the rationale for not throwing exceptions in these cases?

I imagine that the reasons include:

  • These are edge cases, and are likely to occur rarely in applications that do this kind of thing.

  • The behavior is not "totally unexpected".

  • When an application casts from a double to an int, significant loss of information is expected. The application is either going to ignore this possibility, or the cast will be preceded by checks to guard against it ... and those checks could also handle these edge cases (see the sketch after this list).

  • No other double / float operations result in exceptions, and (IMO) it would be inconsistent to do it in this case.

  • There could possibly be a performance hit ... on some hardware platforms (current or future).
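
To illustrate the third point, here is what such a guard might look like; toIntChecked is a hypothetical helper of my own, not a standard API:

    public final class SafeCasts {
        // Rejects the inputs that a plain (int) cast would silently map
        // to 0 or to Integer.MIN_VALUE / Integer.MAX_VALUE.
        static int toIntChecked(double d) {
            if (Double.isNaN(d)) {
                throw new ArithmeticException("NaN cannot be converted to int");
            }
            if (d < Integer.MIN_VALUE || d > Integer.MAX_VALUE) {
                throw new ArithmeticException("out of int range: " + d);
            }
            return (int) d; // now safe: plain round-toward-zero truncation
        }

        public static void main(String[] args) {
            System.out.println(toIntChecked(3.99));       // 3
            System.out.println(toIntChecked(Double.NaN)); // throws ArithmeticException
        }
    }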

A commentator said this:

"I suspect the decision to not have the conversion throw an exception was motivated by a strong desire to avoid throwing exceptions for any reasons, for fear of forcing code to add it to a throws clause."

I don't think that is a plausible explanation:

  • The Java language designers1 don't have a mindset of avoiding throwing exceptions "for any reason". There are numerous examples in the Java APIs that demonstrate this.

  • The issue of the throws clause is addressed by making the exception unchecked. Indeed, many related exceptions like ArithmeticException or ClassCastException are declared as unchecked for this reason (see the example below).
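
Math.toIntExact (added in Java 8) is a case in point: a lossy numeric conversion in the core API that does throw, using an unchecked ArithmeticException:

    public class ToIntExactDemo {
        public static void main(String[] args) {
            System.out.println(Math.toIntExact(42L));             // 42
            System.out.println(Math.toIntExact(10_000_000_000L)); // ArithmeticException: integer overflow
        }
    }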

Is this an IEEE standard, or was it merely a choice by the designers of Java?

The latter, I think.

Are there bad consequences that I am unaware of if exceptions would be possible with such casts?

None apart from the obvious ones ...

(But it is not really relevant. The JLS and JVM spec say what they say, and changing them would be liable to break existing code. And it is not just Java code we are talking about now ...)


I've done a bit of digging. Many of the x86 instructions that could be used to convert from double to integer generate hardware interrupts for these cases ... unless the interrupts are masked. It is not clear (to me) whether the specified Java behavior is easier or harder to implement than the alternative suggested by the OP.


1 - I don't dispute that some Java programmers do think this way. But they were / are not the Java designers, and this question is asking specifically about the Java design rationale.