# Testing for Integer Overflow in C and C++

**Update**: Thanks to Mihai Rusu who discovered that my “correct”
functions had a bug in them; I have updated them to fix the bug.

**Update2**: Thanks to Kevin Bailey who discovered that my final
functions had a redundant comparison in them. I have updated them to
remove it.

**Update3**: Thanks to Ami Fischman who pointed me to this code in the
Chromium tree that has implemented this functionality in a way that
appears to avoid all of the compiler warnings I was getting: src/base/safe_numerics.h.

Integer overflow (and underflow – I’ll lump them together) is one of those pesky things that creeps up in the real world and makes low-level software a little less clean and elegant than what you might see in an algorithms textbook. Checking for overflow is one of those things that distinguishes production-quality software from toy code. If you program in C or C++, it’s something you should always be aware of, especially since it can be a security issue.

If you’ve written much low-level software that deals with buffers or offsets, you’ve probably already come across this pattern for testing if an addition will overflow:
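In rough form, the two versions look something like this (a sketch of the pattern; the function names are mine):

```cpp
#include <climits>

// First example (naive): performs the possibly-overflowing addition and
// then inspects the result. Signed overflow is undefined behavior, so an
// optimizing compiler is free to assume "x + 1 < x" is always false and
// compile this function down to "return false".
bool AddOneWouldOverflow(int x) {
  return x + 1 < x;
}

// Second version (correct): compares against the limits *before* adding,
// so no signed overflow can ever occur.
bool AddWouldOverflow(int a, int b) {
  if (b > 0) return a > INT_MAX - b;
  else       return a < INT_MIN - b;
}
```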

If you compile the first example in optimized mode and look at its output, you will see that the function has been optimized to simply return 0 without looking at its arguments at all. Our attempt to check for overflow has been completely defeated. But the second version works as intended. In this case, overflow had an easy fix.

The fix is not so easy if we want to test whether a given value can be directly converted to a given data type without overflow. In fact this problem is surprisingly hard. But let’s start with a problem statement. I’ll write it in C++ because templates are convenient, but the issue applies equally to C:
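The interface we want is roughly this (the function name is my own placeholder, not necessarily the original):

```cpp
#include <cstdint>

// Problem: return true iff "val" can be converted to type "To" without
// changing its value. For example:
//
//   CanTypeRepresentValue<unsigned>(-1)        // should be false
//   CanTypeRepresentValue<int64_t>(INT32_MIN)  // should be true
template <class To, class From>
bool CanTypeRepresentValue(From val);
```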

You already know the naive solution isn’t going to work, so let’s just get it out of the way:
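The naive attempt is a round-trip test, roughly like this (a reconstruction; the function name is mine):

```cpp
// Naive attempt: if converting "val" to "To" and back yields the
// original value, conclude that "To" can represent it.
template <class To, class From>
bool CanTypeRepresentValue(From val) {
  return static_cast<From>(static_cast<To>(val)) == val;
}
```

On a typical two’s complement system, `CanTypeRepresentValue<unsigned>(-1)` returns `true`: the round-trip preserves the bit pattern even though the value changed.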

The theory here is that if we can round-trip to the destination type and back, then the destination type can represent our target value. But alas, our test program fails!

What happened? Well, it turns out that on my platform, `-1` can
round-trip to `unsigned` just fine. I’m on a two’s complement system,
and converting the integer `-1` to `unsigned` yields `0xFFFFFFFF`
(also known as `UINT_MAX`), which becomes `-1` when converted back to
`int`. But just because the value can *round-trip* to my destination
type doesn’t mean it has the *correct value* in my destination type.
So our first attempt is unsuccessful.

(Note that although this example uses C++ and templates, the lesson
applies equally to an attempt in C to test `(int)(unsigned)x == x`.)

If round-tripping isn’t a good test, perhaps we can explicitly test that our value is inside the range of our destination type.
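The range-based attempt presumably looks something like this (again a reconstruction):

```cpp
#include <limits>

// Second attempt: explicitly test that "val" lies within the
// [min, max] range of the destination type. This looks right, but the
// two comparisons below are mixed-type comparisons, which is exactly
// where it goes wrong.
template <class To, class From>
bool CanTypeRepresentValue(From val) {
  return val >= std::numeric_limits<To>::min() &&
         val <= std::numeric_limits<To>::max();
}
```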

But this does no better; it fails on exactly the same assertion, but
this time for a different reason. In our failure case the comparisons
are mixed-type comparisons, meaning that the operands are of different
types (in this case `int` and `unsigned`). How does C++ handle
mixed-type comparisons? The hardware can only really compare two
values of the same type, so which type do both operands get converted
to before the comparison happens?

This is decided by what the C and C++ standards call the *usual
arithmetic conversions*. The usual arithmetic conversions are more
complicated than I want to inaccurately summarize here, but in this
case they say that `-1` is converted to `unsigned` before being
compared in the `unsigned` domain. So naturally the resulting value
will be in range for the `unsigned` data type.

The “usual arithmetic conversions” are confounding our attempts to ask
questions about the original value. So we need to avoid them by
ensuring that both sides of our comparison are the same type. But
there is no numeric type in C or C++ that can represent all integral
values (no type can represent both `LLONG_MIN` and `ULLONG_MAX`), so
what domain can we use for our range comparisons?

You might be tempted to reach for two’s complement tricks like testing the sign bit of signed types, but resist the urge: the standard does not guarantee two’s complement arithmetic, and besides, the real solution is more principled anyway:

The types `intmax_t` and `uintmax_t` are defined in C99 and C++11 as
types capable of representing any value of any signed or unsigned
integer type, respectively. So if we do our comparisons using those
types, we can avoid overflow before the comparison is even performed.
By casting both sides of the expression to the same type, we avoid the
usual arithmetic conversions. But we still have to handle the case
where we are converting a signed type to an unsigned one or
vice versa. If we are converting `int x` to an unsigned type, we need
to check whether `x < 0` first, before doing our `uintmax_t`
conversions, because once we convert a negative number to `uintmax_t`
it will become a large unsigned number and appear to be in range.
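Put together, a working implementation along these lines might look like this (a sketch based on the description above, not necessarily the exact original code):

```cpp
#include <cstdint>
#include <limits>

template <class To, class From>
bool CanTypeRepresentValue(From val) {
  if (val < 0) {
    // A negative value can only fit in a signed destination; compare
    // against the destination's minimum in the intmax_t domain, where
    // both operands have the same type and no value can change.
    return std::numeric_limits<To>::is_signed &&
           static_cast<intmax_t>(val) >=
               static_cast<intmax_t>(std::numeric_limits<To>::min());
  } else {
    // A non-negative value compares safely in the uintmax_t domain.
    return static_cast<uintmax_t>(val) <=
           static_cast<uintmax_t>(std::numeric_limits<To>::max());
  }
}
```

Note that the `val < 0` test happens before any conversion to `uintmax_t`, which is exactly the ordering the paragraph above calls for.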

So finally we have a solution that works and passes our initial test cases. But the implementation still has one small wart; it throws warnings like the following:

```
$ clang -std=c++0x -o overflow overflow.cc
overflow.cc:14:16: warning: comparison of unsigned expression < 0 is always
      false [-Wtautological-compare]
    return val < 0 ||
           ~~~ ^ ~
```

This check exists for when we are converting from a signed type; we want and expect it to be a no-op when converting from an unsigned type. But clang doesn’t know we know this so it warns us anyway. How can we make this warning go away? The warning is thrown even if the comparison is unreachable, so changing the expression to explicitly check whether the type is signed first does not help:
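For example, guarding the comparison on the source type’s signedness, roughly like this, still triggers the warning when instantiated with an unsigned `From` (again a sketch):

```cpp
#include <cstdint>
#include <limits>

template <class To, class From>
bool CanTypeRepresentValue(From val) {
  // Even though this branch is dead when From is unsigned, clang still
  // diagnoses the "val < 0" comparison at instantiation time.
  if (std::numeric_limits<From>::is_signed && val < 0) {
    return std::numeric_limits<To>::is_signed &&
           static_cast<intmax_t>(val) >=
               static_cast<intmax_t>(std::numeric_limits<To>::min());
  }
  return static_cast<uintmax_t>(val) <=
         static_cast<uintmax_t>(std::numeric_limits<To>::max());
}
```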

It seems that the only solution to this is to use template partial
specialization to ensure that the `val < 0` comparison is only
generated for signed types. And since functions cannot be partially
specialized (only classes), we need to use a functor. Getting
around this warning unfortunately bloats the code significantly, but
the solution below does work and avoids the warning.
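The functor version is presumably structured along these lines (a reconstruction; the names are mine):

```cpp
#include <cstdint>
#include <limits>

// Partially specialize on whether the source type is signed, so the
// "val < 0" comparison is only ever instantiated for signed From.
template <class To, class From,
          bool FromSigned = std::numeric_limits<From>::is_signed>
struct ValueChecker;

// Signed source: a negative value fits only in a signed destination.
template <class To, class From>
struct ValueChecker<To, From, true> {
  static bool Check(From val) {
    if (val < 0) {
      return std::numeric_limits<To>::is_signed &&
             static_cast<intmax_t>(val) >=
                 static_cast<intmax_t>(std::numeric_limits<To>::min());
    }
    return static_cast<uintmax_t>(val) <=
           static_cast<uintmax_t>(std::numeric_limits<To>::max());
  }
};

// Unsigned source: no "val < 0" comparison is ever generated, so no
// -Wtautological-compare warning.
template <class To, class From>
struct ValueChecker<To, From, false> {
  static bool Check(From val) {
    return static_cast<uintmax_t>(val) <=
           static_cast<uintmax_t>(std::numeric_limits<To>::max());
  }
};

template <class To, class From>
bool CanTypeRepresentValue(From val) {
  return ValueChecker<To, From>::Check(val);
}
```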