## Thursday, December 27, 2012

### Testing for Integer Overflow in C and C++

Update: Thanks to Mihai Rusu, who discovered that my "correct" functions had a bug in them; I have updated them to fix the bug.

Update 2: Thanks to Kevin Bailey, who discovered that my final functions had a redundant comparison in them; I have updated them to remove it.

Update 3: Thanks to Ami Fischman, who pointed me to code in the Chromium tree that implements this functionality in a way that appears to avoid all of the compiler warnings I was getting: src/base/safe_numerics.h.

Integer overflow (and underflow -- I'll lump them together) is one of those pesky things that creeps up in the real world and makes low-level software a little less clean and elegant than what you might see in an algorithms textbook. Checking for overflow is one of those things that distinguishes production-quality software from toy code. If you program in C or C++, it's something you should always be aware of, especially since it can be a security issue.

If you've written much low-level software that deals with buffers or offsets, you've probably already come across this pattern for testing whether an addition will overflow:
```cpp
bool WillOverflow_BAD(uintmax_t x, uintmax_t add) {
  // XXX -- doesn't work!
  // The addition wraps around, so the comparison is always false.
  return x + add > UINTMAX_MAX;
}

bool WillOverflow_GOOD(uintmax_t x, uintmax_t add) {
  // This can't overflow, because "x" can't be greater than UINTMAX_MAX.
  return UINTMAX_MAX - x < add;
}
```

If you compile the first example in optimized mode and look at its output, you will see that the function has been optimized to simply return 0 without looking at its arguments at all. Our attempt to check for overflow has been completely defeated. But the second version works as intended. Here the overflow had an easy fix.

The fix is not so easy if we want to test whether a given value can be directly converted to a given data type without overflow. In fact this problem is surprisingly hard. But let's start with a problem statement. I'll write it in C++ because templates are convenient, but the issue applies equally to C:
```cpp
#include <assert.h>
#include <limits.h>

// This is the function we want: it returns true if
// converting "val" to type "To" will overflow or underflow.
template<typename To, typename From>
bool WillOverflow(From val);

int main() {
  assert(!WillOverflow<int>(0));
  assert(!WillOverflow<unsigned>(0));
  assert(WillOverflow<unsigned>(-1));
  assert(WillOverflow<unsigned>(-1LL));
  assert(!WillOverflow<long>(-1));
  assert(WillOverflow<long>(ULONG_MAX));
  assert(!WillOverflow<unsigned long>(ULONG_MAX));
  assert(WillOverflow<unsigned long>(LONG_MIN));
  assert(!WillOverflow<long>(LONG_MAX));
  return 0;
}
```
You already know the naive solution isn't going to work, so let's just get it out of the way:
```cpp
template<typename To, typename From>
bool WillOverflow(From val) {
  // DOESN'T WORK (see below).
  return (From)(To)val != val;
}
```
The theory here is that if we can round-trip to the destination type and back, then the destination type can represent our target value. But alas, our test program fails!
```
$ ./overflow
overflow: overflow.cc:17: int main(): Assertion `WillOverflow<unsigned>(-1)' failed.
Aborted (core dumped)
```

What happened? Well, it turns out that on my platform, -1 can round-trip to unsigned just fine. I'm on a two's complement system, and converting the integer -1 to unsigned yields 0xFFFFFFFF (also known as UINT_MAX), which becomes -1 when converted back to int. But just because the value could round-trip to my destination type doesn't mean it has the correct value in my destination type. So our first attempt is unsuccessful. (Note that although this example uses C++ and templates, the lesson applies equally to an attempt in C to test `(int)(unsigned)x == x`.)

If round-tripping isn't a good test, perhaps we can explicitly test that our value is inside the range of our destination type:

```cpp
template<typename To, typename From>
bool WillOverflow(From val) {
  // DOESN'T WORK (see below).
  return val < std::numeric_limits<To>::min() ||
         val > std::numeric_limits<To>::max();
}
```

But this does no better; it fails on exactly the same assertion, though this time for a different reason. In our failure case the comparisons are mixed-type comparisons, meaning that the operands are of different types (in this case int and unsigned). How does C++ handle mixed-type comparisons? The hardware can only really compare two values of the same type, so which type do both operands get converted to before the comparison happens? This is decided by what the C and C++ standards call the "usual arithmetic conversions." The usual arithmetic conversions are more complicated than I want to inaccurately summarize here, but in this case they say that -1 is converted to unsigned before being compared in the unsigned domain. So naturally the resulting value will be in range for the unsigned data type. The usual arithmetic conversions are confounding our attempts to ask questions about the original value, so we need to avoid them by ensuring that both sides of our comparison are the same type.
But there is no numeric type in C or C++ that can represent all integral values (no type can represent both LLONG_MIN and ULLONG_MAX), so what domain can we use for our range comparisons? You might be tempted to reach for two's complement tricks like testing the sign bit of signed types, but resist the urge: the standard does not guarantee two's complement arithmetic, and besides, the real solution is more principled anyway:

```cpp
#include <limits>
#include <cstdint>  // Requires C++11; needed for intmax_t/uintmax_t.

template<typename To, typename From>
bool WillOverflow(From val) {
  assert(std::numeric_limits<From>::is_integer);
  assert(std::numeric_limits<To>::is_integer);
  if (std::numeric_limits<To>::is_signed) {
    return (!std::numeric_limits<From>::is_signed &&
            (uintmax_t)val > (uintmax_t)INTMAX_MAX) ||
           (intmax_t)val < (intmax_t)std::numeric_limits<To>::min() ||
           (intmax_t)val > (intmax_t)std::numeric_limits<To>::max();
  } else {
    return val < 0 ||
           (uintmax_t)val > (uintmax_t)std::numeric_limits<To>::max();
  }
}
```

The types intmax_t and uintmax_t are defined in C99 and C++11 as types capable of representing any value of any signed or unsigned integer type, respectively. So if we do our comparisons using those types, we can avoid overflow before the comparison is even performed. By casting both sides of the expression to the same type, we avoid the usual arithmetic conversions. But we still have to handle the case where we are converting a signed type to an unsigned one or vice versa. If we are converting int x to an unsigned type, we need to check whether x < 0 first, before doing our uintmax_t conversions, because once we convert a negative number to uintmax_t it will become a large unsigned number and appear in range.

So finally we have a solution that works and passes our initial test cases. But the implementation still has one small wart; it throws warnings like the following:

```
$ clang -std=c++0x -o overflow overflow.cc
overflow.cc:14:16: warning: comparison of unsigned expression < 0 is always false
      [-Wtautological-compare]
  return val < 0 ||
         ~~~ ^ ~
```

This check exists for when we are converting from a signed type; we want and expect it to be a no-op when converting from an unsigned type. But clang doesn't know we know this so it warns us anyway. How can we make this warning go away? The warning is thrown even if the comparison is unreachable, so changing the expression to explicitly check whether the type is signed first does not help:
```cpp
// Doesn't help -- still throws a warning:
return (std::numeric_limits<From>::is_signed && val < 0) ||
```

It seems that the only solution to this is to use template partial specialization to ensure that the val < 0 comparison is only generated for signed types. And since functions cannot be partially specialized (only classes can), we need to use a functor. Getting around this warning unfortunately bloats the code significantly, but the solution below does work and avoids the warning.
```cpp
#include <limits>
#include <cstdint>
#include <assert.h>

template<bool is_signed, typename T>
class IsNegativeFunctor;

template<typename T>
class IsNegativeFunctor<true, T> {
 public:
  bool operator()(T x) {
    return x < 0;
  }
};

template<typename T>
class IsNegativeFunctor<false, T> {
 public:
  bool operator()(T x) {
    // An unsigned type is never negative.
    return false;
  }
};

template<typename T>
bool IsNegative(T x) {
  return IsNegativeFunctor<std::numeric_limits<T>::is_signed, T>()(x);
}

template<typename To, typename From>
bool WillOverflow(From val) {
  assert(std::numeric_limits<From>::is_integer);
  assert(std::numeric_limits<To>::is_integer);
  if (std::numeric_limits<To>::is_signed) {
    return (!std::numeric_limits<From>::is_signed &&
            (uintmax_t)val > (uintmax_t)INTMAX_MAX) ||
           (intmax_t)val < (intmax_t)std::numeric_limits<To>::min() ||
           (intmax_t)val > (intmax_t)std::numeric_limits<To>::max();
  } else {
    return IsNegative(val) ||
           (uintmax_t)val > (uintmax_t)std::numeric_limits<To>::max();
  }
}
```

1. I would also mention the Boost Numeric Conversion Library, which offers by far the simplest and most efficient solution to this problem.

```cpp
#include "boost/numeric/conversion/converter.hpp"

template <typename To, typename From>
bool WillOverflow(From val) {
  return boost::numeric::converter<To, From>::out_of_range(val)
      != boost::numeric::cInRange;
}
```

1. Thanks for the mention -- this would indeed be simpler if you were already using Boost or didn't mind taking on an extra dependency.

But what evidence do you have that Boost is more efficient than my solution? I tried compiling both and found that with clang my solution generated significantly less code and avoided any branches, whereas the Boost solution generated several branches. It appeared that my solution was more efficient, not less (though I did not actually benchmark it).

With gcc the result was more of a wash: my solution generated about one branch per test but the Boost solution generated one "cmov" instruction per test.

It's interesting to me that the two compilers generated such different code in this case. Might be worth another blog entry.

```
convertur.C:47:33: instantiated from here
convertur.C:38:67: warning: comparison is always false due to limited range of data type [-Wtype-limits]
```

It is:

```cpp
(intmax_t)val > (intmax_t)std::numeric_limits<To>::max()
```

called with:

```cpp
WillOverflow(0);
```

GCC is too smart for some trivial casts.

1. Wow, it looks like it might take a lot of partial specialization to get rid of all the warnings then.

3. Is IsNegative any better than the simple 'val < 0'?

1. 'val < 0' can throw compiler warnings on unsigned types -- this is explained earlier in the blog post.