
x86 can trap on int overflow? I hadn't heard that before; I should look into it. Is there some special, little-known trick? I'm not an x86 whiz, obviously, but I thought I had basic familiarity with it.


It was removed from amd64 because it was not actually useful in practice.

I.e., sometimes you might want the check, but other times you certainly don't, and turning it off and on at the right times is too hard to get right. And, the ability to turn it on, even when it was never used, slowed everything down.

You would anyway need to be able to express the choice in the languages used. If you had such a way, a compiler could insert extra instructions to check, or suppress checking where it is not wanted, but making such a choice everywhere arithmetic happens would be a big burden on programmers already just barely hanging on.


It was not removed because it was not useful. Generating exceptions on overflows is extremely useful.

It was removed because it was seldom used by Microsoft and by the other large companies that mattered, so the AMD and Intel designers took advantage of this bad habit of many programmers in order to simplify their CPU design work.

It was seldom used because programmers have always preferred to obtain the highest speed, even at the risk of erroneous computations in rare cases.

The reason is that lower speed is noticed immediately, possibly leading to a lost sale, while occasional errors may be discovered only after a long time. Even when they are discovered, the programmers who made this choice are not punished in proportion to the losses caused to the users, which in most cases are difficult to quantify anyway.

A normal policy is to always check for integer overflow, out-of-bounds accesses and so on. Such checks are provided by all decent compilers.

Only when performance problems are identified, and only if it is certain that there is no danger in omitting the checks, should the checks be omitted, and only in the functions where it matters.

Selecting whether run-time checks are done is very easy. You just need to use the appropriate sets of compiler flags for the two kinds of functions: those with checks enabled and those with checks disabled.

It is sad that the C tradition has made disabled checks the default in compiler flags, but that is not an acceptable excuse. Any C/C++ programmer should always enable the checks unless there are good reasons to disable them.

If they do not do this, it is completely their fault and not the fault of the programming languages.

However, the C/C++ standard libraries have the problem that they lack a good standard way of handling such exceptions (i.e., something better than UNIX signals). That means that if you do not want to abort the program on overflow or an out-of-bounds access, it can be difficult to ensure recovery after errors.


You may insist on this, but the fact that it does not exist in any convenient form saps strength from your argument.

In fact, harmless overflow is extremely common, and relied upon. Trapping by default, while it would call attention to some bugs, would also turn many working programs into crashing programs.

Saturating arithmetic would sometimes produce better results.


I believe that you are confusing integer overflow, which applies only to the signed integers of C/C++, with arithmetic on modular numbers, which are improperly called "unsigned integers" in C/C++.

When computing with modular numbers, you rely on the exact modular operations. That is not overflow.

There exists no "harmless overflow". The overflow of "signed integers" is an undefined operation in the C/C++ standard.

The actual behavior on integer overflow is determined by the compiler options. With any decent compiler, you may choose between three behaviors: trap on overflow, which aborts the program unless the trap is caught; display an error message and continue execution with an undefined result of the overflowed operation (typically with the purpose of also seeing other error messages); or ignore the overflow and continue execution with an undefined result (the default option).

Relying in any way upon what happens on overflow of "signed integers", when a compiler option is chosen that does not trap on overflow, is an unacceptable mistake, because what the program does is unpredictable.

Relying on what modular arithmetic does with "unsigned numbers" is correct, except that in most cases they are not really used as modular numbers; the programmers expect that modular reduction will never happen. When it happens unexpectedly, it may cause problems similar to overflow, even though the result could have been predicted.

You are right that saturating arithmetic is the main alternative to trapping on overflow. It can be used in the same way as infinities are used with floating-point numbers when the overflow exception is masked.

Unfortunately, saturating arithmetic does not exist in the standard C/C++ languages. Therefore in C/C++ you may choose only between trapping on overflow and dangerous undefined behavior.


Saturating on overflow is as bad as the two's-complement wraparound of unsigned arithmetic if you were not expecting or relying on it. C lacks a type where unsigned overflow is an error or UB. Having signed overflow be UB is acceptable, because you can then choose a compiler option (-ftrapv) that traps it. But I have heard this is unreliable; e.g., what happens if the overflow occurs in library code?

Ada does this correctly, and lets you choose between exception and wraparound. UB is not an option in the language spec, though you can turn off the checks as an optimization option in GNAT, or maybe turn it off at specific lines of code with a pragma.


As far as I can find, an integer overflow just sets the overflow flag (OF), and you need an explicit INTO instruction to test the flag and invoke the trap handler.



