Wednesday, May 20, 2015

Status update on upb and Ruby Protocol Buffers

It has been a long time since I posted any updates about upb, my implementation of Protocol Buffers. In 2011 I posted the article upb status and preliminary performance numbers, showing that upb's JIT compiler yielded impressive performance parsing Protocol Buffers, particularly when used as a stream parser. Since then there has been a lot of activity on GitHub, but still no releases, and I haven't posted much about upb or where it's going. It's time for a status update.

On a personal note, I also have exciting news to report: as of last August, I have joined the Protocol Buffers team at Google officially. After five years of being very interested in Protocol Buffers and working on upb as my 20% project, I now work on Protocol Buffers full time (and still work on upb 20%).

As has always been the case, on this blog I speak only for myself: not for my team, or for Google!

But on to upb status updates. A lot has happened since I last wrote: hundreds of commits, and lots of changes, big and small. Obviously I won't cover everything, but I'll start with the big-picture news, then get a little more specific, then talk about the vision for a bit.

haberman/upb is now google/upb

Up until this week, upb's main repository lived on GitHub under my own username. As of two days ago, its home is under the Google organization. As a 20% project, upb has always been under Google copyright. Living under the Google organization will make it easier for Google to ensure that it follows policies surrounding CLA (contributor license agreements) and such.

This also makes upb a bit more "official" as an open-source offering of Google. This will enable Google to incorporate upb into other projects/products, which brings me to my next bit of news.

Google is using upb for the MRI Ruby Protocol Buffers implementation

Though the code has been public on GitHub for months now, I'm very happy to formally announce that the official Google-supported Ruby library for protobufs (on MRI) uses upb for parsing and serialization! The Ruby-specific code was written by my excellent teammate Chris Fallin. The data representation (i.e. the Ruby classes representing protobufs) is all Ruby-specific, but under the hood it uses upb for all parsing and serialization.

One of upb's goals from the beginning was to be an ideal implementation for dynamic languages like Ruby, Python, PHP, etc, so it is extremely gratifying to finally see this in action.

When we were evaluating how to implement the Ruby Protocol Buffers library, one of the major questions was whether to implement it in pure Ruby or as a C extension. Staying pure Ruby has notable portability benefits, but at the end of the day the performance benefit of a C extension was too great to pass up. The micro-benchmark numbers of our extension vs. a couple of pure-Ruby implementations make this clear:
Test payload is a 45-byte protobuf

Ruby Beefcake    (pure-Ruby):                0.7MB/s
Ruby Protobuf    (pure-Ruby):                0.8MB/s
Google Protobuf  (C accelerated with upb):  11.9MB/s
Representative microbenchmarking is notoriously difficult, but the point here is just that C acceleration makes a big difference when it comes to parsing. That is a big part of why I created upb in the first place.

For a lot of use cases, the difference doesn't matter. Lots of people who make remote API calls just need to parse a modestly-sized payload, and it's not in a tight loop or anything. These people probably won't care about the perf difference between Beefcake and Google Protobuf.

But there always end up being people who care. When the payload gets big enough, or the loop gets tight enough, a 10x difference is huge. We see this at Google all the time. And just witness the amount of effort that people have put into optimizing JSON parsing -- for example, the oj (Optimized JSON) library for Ruby. We wanted to be sure that the protobuf library would be at least speed-competitive with the (pretty fast) built-in JSON library that comes with Ruby. And in that we have succeeded: in my one silly test, we are roughly speed-competitive with the built-in JSON library (a little faster or a little slower, depending on how you measure).

This doesn't help JRuby users unfortunately, but I am very happy that open-source contributors have come forward to implement an API-compatible version of the library for JRuby by wrapping the Protocol Buffers Java implementation. So both MRI and JRuby can get fast parsers, and code should be portable between the two. The cost is having to maintain two implementations under the hood.

By the way, the Ruby Protocol Buffers library uses upb, but it doesn't use upb's JIT for parsing at all. The numbers above reflect upb's pure-C, bytecode-based decoder. There is no point in enabling the JIT, because the actual parsing represents such a small portion of the overall CPU cost of parsing from Ruby -- upb by itself can parse at more than 10x the speed of the Ruby extension, even with the pure-C parser. The vast majority of the time is spent in the Ruby interpreter creating and destroying objects, and other Ruby-imposed overhead. Enabling the JIT would barely make a measurable difference, and avoiding it means we also avoid the JIT's portability constraints.

JSON support

The Ruby implementation of Protocol Buffers is part of a larger effort known as "proto3" (see here and here). I won't go too deeply into what proto3 is about because that would be its own blog entry. But one important element of the proto3 story is that JSON support is becoming first-class. Protobuf binary format and JSON format are both options for sending payloads. If you want a human-readable payload, send JSON -- proto3 libraries will support both.

Since Ruby-Protobuf uses upb for all of its parsing/serialization, that must mean that upb now supports JSON, and indeed it does!

Everything in upb is stream-based; this means that upb supports JSON<->protobuf transcodes in a totally streaming fashion. Internally the protobuf encoder has to buffer submessages, since Protocol Buffer binary format specifies that submessages on the wire are prefixed by their length. But from an API perspective, it is all streaming. (It would be possible to write a version of the encoder that avoids the internal buffering by doing two passes, but that is an exercise for a different day).
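To make the buffering issue concrete, here is a minimal sketch (the helper names are my own, not upb's) of how a length-delimited field is laid out in Protocol Buffer binary format. The length varint comes before the payload, which is exactly why a streaming encoder must buffer each submessage until its size is known:

```python
def encode_varint(n):
    """Encode a non-negative integer as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def encode_submessage_field(field_number, payload):
    """Encode a length-delimited field: key, then length, then payload."""
    key = (field_number << 3) | 2  # wire type 2 = length-delimited
    return encode_varint(key) + encode_varint(len(payload)) + payload

# The payload is prefixed by the field key and its length:
print(encode_submessage_field(1, b"hi"))  # b'\n\x02hi'
```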

Protocol Buffers and JSON, next the world?

I had an epiphany at some point, which was that upb could be a more generalized parsing framework, not limited to just Protocol Buffers.

upb was inspired by SAX, the Simple API for XML. The basic idea of SAX is that you have a parser that calls callbacks instead of building a data structure directly. This model makes for fast and flexible parsers, and has been imitated by other parsers such as YAJL (for JSON).
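Python's standard library ships a SAX implementation, which gives a feel for the callback model (the handler class and event list here are my own illustration, not part of upb):

```python
import xml.sax

class EventCollector(xml.sax.ContentHandler):
    """Record each parse event instead of building a tree."""
    def __init__(self):
        self.events = []

    def startElement(self, name, attrs):
        self.events.append(("start", name))

    def endElement(self, name):
        self.events.append(("end", name))

    def characters(self, content):
        if content.strip():
            self.events.append(("chars", content.strip()))

handler = EventCollector()
xml.sax.parseString(b"<doc><item>hi</item></doc>", handler)
print(handler.events)
# [('start', 'doc'), ('start', 'item'), ('chars', 'hi'), ('end', 'item'), ('end', 'doc')]
```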

So upb supports parsing protobufs by letting you register handler functions for every protobuf field. Parsing field A calls handler A, parsing field B calls handler B, and so on.

But what if we could apply this basic idea to the use case of SAX? What if upb could parse XML, and call handlers that are specific to the XML data model? We could use .proto files to define an XML-specific schema. For SAX this might look like:
// Protocol buffer schema representing SAX
message StartElement {
  string element_name = 1;
  map<string, string> attributes = 2;
}

message EndElement {
  string element_name = 1;
}

message ProcessingInstruction {
  string target = 1;
  string data = 2;
}

message XMLContent {
  oneof content {
    StartElement start = 1;
    EndElement end = 2;
    bytes characters = 3;
    bytes ignorable_whitespace = 4;
    string skipped_entity = 5;
    ProcessingInstruction processing_instruction = 6;
  }
}

message XMLDocument {
  repeated XMLContent content = 1;
}
Now we can offer basically the same API as SAX, except the set of handlers is defined as a protobuf schema, instead of being hard-coded into the library.

What if we try the same idea with JSON?
message JsonArray {
  repeated JsonValue value = 1;
}

message JsonObject {
  map<string, JsonValue> properties = 1;
}

message JsonValue {
  oneof value {
    bool is_null = 1;
    bool boolean_value = 2;
    string string_value = 3;

    // Multiple options for numbers, since JSON doesn't specify
    // range/precision for numbers.
    double double_value = 4;
    int64  int64_value = 5;
    string number_value = 6;

    JsonObject object_value = 7;
    JsonArray array_value = 8;
  }
}
Now we can offer the same API as YAJL.
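Python's json module isn't streaming, but its object_pairs_hook gives a small taste of the handler style: a callback fires for every JSON object as it is decoded, innermost objects first. (The callback and variable names below are my own illustration.)

```python
import json

seen = []

def on_object(pairs):
    # called once per JSON object, innermost objects first
    seen.append([key for key, _ in pairs])
    return dict(pairs)

json.loads('{"outer": {"a": 1, "b": 2}}', object_pairs_hook=on_object)
print(seen)  # [['a', 'b'], ['outer']]
```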

Or take HTTP, with a schema modeled on the handlers specified in http-parser:
message HTTPHead {
  uint32 http_major = 1;
  uint32 http_minor = 2;
  uint32 status_code = 3;

  enum Method {
    DELETE = 1;
    GET = 2;
    HEAD = 3;
    PUT = 4;
    POST = 5;
    // ...
  }

  Method method = 4;
  string url = 5;

  map<string, string> headers = 6;
}

message HTTPBody {
  repeated bytes chunk = 1;
}

message HTTP {
  HTTPHead head = 1;
  HTTPBody body = 2;
}
Do you see where this is going? SAX, YAJL, and http-parser are great APIs. upb seeks to be a generalization of streaming parser APIs, by using a Protocol Buffer schema to define the set of handlers for each format.

You may be tempted to say "bloat." But extending upb in this way doesn't make the core any bigger than if it just supported Protocol Buffers. The core library already contains all the functionality for registering handlers to match a protobuf schema. We aren't growing the core library at all to enable these other use cases.

And coming from the other direction, the core library of upb is only about 25kb of object code. It just contains data structures for representing schemas and handlers. For many libraries, like libxml or expat, 25kb is not a very noticeable overhead. For some exceptionally small libraries like YAJL or http-parser, 25kb would be a noticeable overhead, but in most cases the overhead is acceptable.

And what does this abstraction buy you? An easy way to expose your parser to multiple languages.

Take Ruby. Since the Google Protocol Buffers library for Ruby is implemented using upb, all of the handlers already exist to read and write Protocol Buffers objects. So if you had an HTTP parser that used upb handlers according to the handler schema above, it would be trivial to parse some HTTP into an HTTP Protocol Buffers object in Ruby. You can use the Protocol Buffer object as your parse tree / in-memory representation! Once you do that you can easily serialize it as a binary Protocol Buffer, or as JSON, if you want to preserve its parsed form. You could easily write a filter program in Ruby that iterates over a bunch of HTTP sessions and pulls out the URL. You have all these capabilities for how you can use the parser in Ruby, but you've had to do barely any Ruby-specific work to expose the parser to Ruby.

If you keep going down this path, you realize that this same idea could work for streaming string->string transformations (like gzip or SSL), and even for streaming aggregations like HyperLogLog. And the ultimate dream is that this all can compose nicely.

This is the vision which I am constantly working towards. But I'm also grounded by the need to deliver tangible things like something that Ruby and other language implementations can use right now.

Back to Planet Earth: Concrete Status

All of those lofty ideas are nice, but the devil is in the details. Here is the status of some of those details, in super-brief form:
  1. protobuf encoder and decoder are fully complete (except for vending unknown fields, which I'd like to add at some point)
  2. protobuf decoder (both pure-C and JIT) are completely rewritten. Pure-C decoder is now bytecode-based, and JIT is a code generator from that bytecode. Both decoders are fully-resumable, for when the input spans multiple buffers.
  3. typed JSON parser and printer are pretty complete, following the JSON encoding conventions of proto3. "Typed JSON" means JSON that follows the structure of a protobuf schema.
  4. upb now has a stable ABI; structure members and sizes are hidden in implementation files.
  5. upb now supports fully-injectable memory allocation and error callbacks for parse-time structures. (Sharable, ahead-of-time structures like schemas and handlers still rely on a global malloc()).
  6. there is now an "amalgamation" build where all upb sources are collapsed into a single .h/.c file pair, to make it easy to drop into another project's build (the Ruby extension uses this).
But there is still a lot to do:
  1. need to split generic JSON encoder from typed JSON encoder. (Generic JSON encoder will use a schema like the above to represent arbitrary JSON, like YAJL, instead of being limited to a structured schema).
  2. need to port to c89. This is a tough one for me. The title of this blog is "parsing, performance, minimalism with c99." But practical experience is just making it clear that we're still in a c89 world if you want maximum portability (and particularly the ability to integrate into other build systems).
  3. need to tweak the handlers model to be even more composable. For example, it should be possible for the JSON encoder/decoder to internally use a generic upb base64 encoder/decoder to put binary data into string fields. It should also be easier to compose pipeline elements. We're not quite to that level of composability yet.
  4. need to implement a parser for .proto files, so we can parse them directly. Right now we can only load binary descriptors.
  5. Expand to more formats! XML, HTTP, CSV, LevelDB log format, YAML, and more!
  6. My most lofty vision, resurrect Gazelle, my parser generator, as an easy way of generating fast parsers that target upb natively. Think yacc/Bison/ANTLR, but more convenient, because you'll be able to easily use Protocol Buffer in-memory objects as your parse tree / AST.
So there is still a lot to do, but it's exciting to see it really working for Ruby already.

The name

As you can see from what I described above, upb is outgrowing its name, since it's no longer just about Protocol Buffers. I've been playing with calling it "Unleaded" instead, as a backronym for upb (get it?), but I can't seem to get this to stick.

Wednesday, October 29, 2014

The Speed of Python, Ruby, and Lua's Parsers

I've run into some cases lately where loading a lot of generated code in Python takes a long time. When I profiled my test case, the hotspot seemed to be the parser itself -- that is, Python's parser that parses .py source files.

I realized that I had no idea how fast the parsers for some of these languages are. As someone who is interested in parsers, this piqued my interest. I was particularly interested to see how much precompiling would help.

To satisfy my curiosity, I wrote a quick little benchmark that tries to get some rough numbers on this.

These are the results I got on my machine:
python               real 0m1.521s user 0m1.184s sys 0m0.328s
ruby                 real 0m0.523s user 0m0.441s sys 0m0.076s
lua                  real 0m0.131s user 0m0.124s sys 0m0.005s
python (precompiled) real 0m0.022s user 0m0.012s sys 0m0.009s
lua (precompiled)    real 0m0.005s user 0m0.002s sys 0m0.003s

Version Information:
Python 2.7.5
ruby 2.1.3p242 (2014-09-19 revision 47630) [x86_64-darwin13.0]
Lua 5.2.3  Copyright (C) 1994-2013, PUC-Rio
My takeaways from this are:
  1. There is a surprising amount of variation here. Python's parser probably has a lot of room for optimization. But the fact that precompiling is available probably reduces the demand for this considerably.
  2. Precompiling helps a lot.
  3. Lua continues to impress.
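The precompiling effect is easy to reproduce in miniature from inside Python itself: parse-and-compile a chunk of generated-looking source, then compare that against loading the already-compiled code object the way a .pyc file would. (marshal is what .pyc files use under the hood; the workload here is my own invention, not the benchmark linked above.)

```python
import marshal
import time

# A stand-in for a big generated module: 20,000 simple statements.
source = "\n".join(f"x{i} = {i}" for i in range(20000))

start = time.perf_counter()
code = compile(source, "<generated>", "exec")  # parser + compiler
compile_time = time.perf_counter() - start

blob = marshal.dumps(code)  # roughly what a .pyc file stores
start = time.perf_counter()
marshal.loads(blob)         # precompiled load: no parsing at all
load_time = time.perf_counter() - start

print(f"compile: {compile_time:.4f}s  precompiled load: {load_time:.4f}s")
```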

Friday, October 17, 2014

The overhead of abstraction in C/C++ vs. Python/Ruby

I've been working on some Python and Ruby libraries lately that wrap C extensions. An interesting and important observation came to me as I was doing some of the design.

Please note this is not meant to be any kind of commentary of the relative value between these languages. It's a specific observation that is useful when you are crossing the language barrier and deciding the boundary between what should go in the high-level language vs what should go in C or C++.

The observation I made is that C and C++ compilers can inline, whereas interpreters for Python and Ruby generally do not.

This may seem like a mundane observation, but what it means is that building abstractions in the high-level language has a noticeable cost, where in C and C++ simple abstractions built around function calls are basically free.

To illustrate, take this Python program:
total = 0

for i in range(1000000):
  total += i

print total
Now suppose we want to abstract this a bit (this is a toy example, but mirrors the structure of real abstractions):
class Adder:
  def __init__(self):
    self.total = 0

  def add(self, i):
    self.total += i

adder = Adder()

for i in range(1000000):
  adder.add(i)

print adder.total
On my machine, the second example is less than half the speed of the first. (The same is true of Ruby when I tried equivalent programs).
$ time python 

real 0m0.158s
user 0m0.133s
sys     0m0.023s
$ time python 

real 0m0.396s
user 0m0.367s
sys     0m0.024s
Compare this with the equivalent first program in C++ (I used "volatile" to prevent the compiler from being too smart and collapsing the loop completely):
#include <stdio.h>

int main() {
  volatile long total = 0;

  for (long i = 0; i < 100000000; i++) {
    total += i;
  }

  printf("%ld\n", total);
}
And the version with the adder abstracted into a class:
#include <stdio.h>

class Adder {
 public:
  Adder() : total(0) {}
  void add(long i) { total += i; }
  volatile long total;
};

int main() {
  Adder adder;

  for (long i = 0; i < 100000000; i++) {
    adder.add(i);
  }

  printf("%ld\n", adder.total);
}
On my machine, not only do they take the same amount of time, they compile into literally exactly the same machine code.

We already know that Python and Ruby are noticeably slower than C and C++ (again, not a dig, the two serve different purposes), which suggests that performance-critical code should go in C or C++. But the extra observation here is that any layers or abstractions in Python or Ruby have an inherent cost, whereas in C or C++ you can layer abstractions much more freely without fear of additional overhead, particularly for functions or classes in a single source file.

Monday, September 15, 2014

What every computer programmer should know about floating point, part 1

The subject of floating-point numbers can strike vague uncertainty into all but the hardiest of programmers. The first time a programmer gets bitten by the fact that 0.1 + 0.2 is not quite equal to 0.3, the whole thing can seem like an inscrutable mess where nothing behaves like it should.

But lying amidst all of this seeming insanity are a lot of things that make perfect sense if you think about them in the right way. There is an existing article called What Every Computer Scientist Should Know About Floating-Point Arithmetic, but it is very math-heavy and focuses on subtle issues that face data scientists and CPU designers. This article ("What every computer programmer should know...") is aimed at the general population of programmers. I'm focusing on simple and practical results that you can use to build your intuition for how to think about floating-point numbers.

As a practical guide I'm concerning myself only with the IEEE 754 floating point formats single (float) and double that are implemented on current CPUs and that most programmers will come into contact with, and not other topics like decimal floating point, arbitrary precision, etc. Also my goal is to build intuition and show the shapes of things, not prove theorems, so my math may not be fully precise all the time. That said, I don't want to be misleading, so please let me know of any material errors!

Articles like this one are often written in a style that is designed to make you question everything you thought you knew about the subject, but I want to do the opposite: I want to give you confidence that floating-point numbers actually make sense. So to kick things off, I'm going to start with some good news.

Integers are exact! As long as they're not too big.

It's true that 0.1 + 0.2 != 0.3. But this lack of exactness does not apply to integer values! As long as they are small enough, floating point numbers can represent integers exactly.
1.0 == integer(1) (exactly)
5.0 == integer(5) (exactly)
2.0 == integer(2) (exactly)
This exactness also extends to operations over integer values:
1.0 + 2.0 == 3.0 (exactly)
5.0 - 1.0 == 4.0 (exactly)
2.0 * 3.0 == 6.0 (exactly)
Mathematical operations like these will give you exact results as long as all of the values are integers smaller than \(2^{53}\) (for double) or \(2^{24}\) (for float).

So if you're in a language like JavaScript that has no integer types (all numbers are double-precision floating point), and you have an application that wants to do precise integer arithmetic, you can treat JS numbers as 53-bit integers, and everything will be perfectly exact. Though of course if you do something inherently non-integral, like 8.0 / 7.0, this exactness guarantee doesn't apply.
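A quick check of the \(2^{53}\) boundary (Python floats are IEEE doubles, so this works the same as in JavaScript):

```python
big = 2 ** 53

# Every integer up to 2^53 is exactly representable as a double.
assert float(big - 1) == big - 1
assert float(big) == big

# 2^53 + 1 is the first integer a double cannot represent:
# it rounds back down to 2^53.
assert float(big + 1) == float(big)

print("2^53 boundary behaves as described")
```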

And what if you exceed \(2^{53}\) for a double, or \(2^{24}\) for a float? Will that give you strange dreaded numbers like 16777220.99999999 when you really wanted 16777221?

No — again for integers the news is much less dire. Between \(2^{24}\) and \(2^{25}\) a float can exactly represent half of the integers: specifically the even integers. So any mathematical operation that would have resulted in an odd number in this range will instead be rounded to one of the even numbers around it. But the result will still be an integer.

For example, let's add:
    16,777,216 (2^24)
  +          5
    16,777,221 (exact result)
    16,777,220 (rounded to nearest representable float)
You can generally think of floating point operations this way. It's as if they computed exactly the correct answer with infinite precision, but then rounded the result to the nearest representable value. It's not implemented this way of course (putting infinite precision arithmetic in silicon would be expensive), but the results are generally the same as if it had.
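You can watch this rounding happen by round-tripping through single precision (the helper below is my own; struct packs to an IEEE 754 float and back):

```python
import struct

def to_float32(x):
    """Round a Python float (a double) to the nearest IEEE 754 single."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# 16,777,221 is exact as a double, but above 2^24 a float can only
# represent even integers, so it rounds to 16,777,220.
print(to_float32(16777216.0 + 5.0))  # 16777220.0
```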

We can also represent this concept visually, using a number line:

The green line represents the addition and the red line represents the rounding to the nearest representable value. The tick marks above the number line indicate which numbers are representable and which are not; because these values are in the range \([2^{24}, 2^{25}]\), only the even numbers are representable as float.

This model can also explain why adding two numbers that differ wildly in magnitude can make the smaller one get lost completely:
    16,777,216
  +          0.0001
    16,777,216.0001 (exact result)
    16,777,216      (rounded to nearest representable float)
Or in the number line model:

The smaller number was not nearly big enough to get close to the next largest representable value (16777218), so the rounding caused the smaller value to get lost completely.
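A quick check of this absorption in Python, round-tripping through single precision (the helper name is mine):

```python
import struct

def to_float32(x):
    """Round a Python float (a double) to the nearest IEEE 754 single."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# The 0.0001 is far smaller than the gap between representable
# floats at this magnitude (the gap is 2.0), so it vanishes entirely...
assert to_float32(16777216.0 + 0.0001) == 16777216.0

# ...while an addend on the order of the gap survives.
assert to_float32(16777216.0 + 2.0) == 16777218.0
print("small addend absorbed")
```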

This rounding behavior also explains the answer to question number 4 in Ridiculous Fish's excellent article Will It Optimize? It's tempting to have floating-point anxiety and think that transforming (float)x * 2.0f into (float)x + (float)x must be imprecise somehow, but in fact it's perfectly safe. The same rule applies as our previous examples: compute the exact result with infinite precision and then round to the nearest representable number. Since the x + x and x * 2 are mathematically exactly the same, they will also get rounded to exactly the same value.

So far we've discovered that a float can represent:
  • all integers \([0, 2^{24}]\) exactly
  • half of integers \([2^{24}, 2^{25}]\) exactly (the even ones)

Why is this? Why do things change at \(2^{24}\)?

It turns out that this is part of a bigger pattern, which is that floating-point numbers are more precise the closer they are to zero. We can visualize this pattern again with a number line. This illustration isn't a real floating-point format (it has only two bits of precision, much less than float or double) but it follows the same pattern as real floating-point formats:

This diagram gets to the essence of the relationship between floating point values and integers. Up to a certain point (4 in this case), there are multiple floating point values per integer, representing numbers between the integers. Then at a certain point (here between 4 and 8) the set of floating point and integer values are the same. Once you get larger than that, the floating point values skip some integer values.

We can diagram this relationship to get a better sense and intuition for what numbers floats can represent compared to integers:

This plot is just a continuation of what we've said already. The green dots are boring and only appear for reference: they are saying that no matter how large or small your values are for an integer representation like int32, they can represent exactly one value per integer. That's a complicated way of saying that integer representations exactly represent the integers.

But where it gets interesting is when we compare integers to floats, which appear as red dots. The green and red dots intersect at \(2^{24}\); we've already identified this as the largest value for which floats can represent every integer. If we go larger than this, to \(2^{25}\), then floats can represent half of all integers, (\(2^{-1}\) on the graph), which again is what we have said already.

The graph shows that the trend continues in both directions. For values in the range \([2^{25}, 2^{26}]\), floats can represent 1/4 of all integers (the ones divisible by 4). And if we go smaller, in the range \([2^{23}, 2^{24}]\), floats can represent 2 values per integer. This means that in addition to the integers themselves, a float can represent one value in between each integer, that being \(x.5\) for any integer \(x\).

So the closer you get to zero, the more values a float can stuff between consecutive integers. If you extrapolate this all the way to 1, we see that float can represent \(2^{23}\) unique values between 1 and 2. (Between 0 and 1 the story is more complicated).
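The claim of \(2^{23}\) values between 1 and 2 can be verified by counting bit patterns (the helper is mine): for positive IEEE 754 values, consecutive bit patterns encode consecutive floats, so subtracting patterns counts representable values.

```python
import struct

def float32_bits(x):
    """Raw IEEE 754 single-precision bit pattern, as an unsigned int."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

# For positive floats, consecutive bit patterns are consecutive values,
# so this difference counts the representable floats in [1.0, 2.0).
count = float32_bits(2.0) - float32_bits(1.0)
print(count)  # 8388608, i.e. 2^23
```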

Range and Precision

I want to revisit this diagram from before, which depicts a floating-point representation with two bits of precision:

A useful observation in this diagram is that there are always 4 floating-point values between consecutive powers of two. For each increasing power of two, the number of integers doubles but the number of floating-point values is constant.

This is also true for float (\(2^{23}\) values per power of two) and double (\(2^{52}\) values per power of two). Between any two consecutive powers of two that are in range, there is always a constant number of representable values.

This gets to the heart of how range and precision work for floating-point values. The concepts of range and precision can be applied to any numeric type; comparing and contrasting how integers and floating-point values differ with respect to range and precision will give us a deep intuition for how floating-point works.

Range/precision for integers and fixed-point numbers

For an integer format, the range and precision are straightforward. Given an integer format with \(n\) bits:
  • every value is precise to the nearest integer, regardless of the magnitude of the value.
  • range is always \(2^{n}\) between the highest and lowest value (for unsigned types the lowest value is 0 and for signed types the lowest value is \(-(2^{n-1})\)).
If we depict this visually, it looks something like:

If you ever come across fixed point math, for example the fixed-point support in the Allegro game programming library, fixed point has a similar range/precision analysis as integers. Fixed-point is a numerical representation similar to integers, except that each value is multiplied by a constant scaling factor to get its true value. For example, for a 1/16 scaling factor:

integer    equivalent fixed point value
1          1 * 1/16 = 0.0625
2          2 * 1/16 = 0.125
3          3 * 1/16 = 0.1875
4          4 * 1/16 = 0.25
16         16 * 1/16 = 1

Like integers, fixed point values have a constant precision regardless of magnitude. But instead of a constant precision of 1, the precision is based on the scaling factor. Here is a visual depiction of a 32-bit fixed point value that uses a 1/16 (\(1/2^{4}\)) scaling factor. Compared with a 32-bit integer, it has 16x the precision, but only 1/16 the range:

The fixed-point scaling factor is usually a fractional power of two (i.e. \(1/2^{n}\) for some \(n\)), since this makes it possible to use simple bit shifts for conversion. In this case we can say that \(n\) bits of the value are dedicated to the fraction.
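A tiny sketch of 1/16 fixed point (the helper names are my own), showing the constant-precision behavior and the conversions:

```python
SCALE_BITS = 4  # scaling factor 1/2^4 = 1/16

def to_fixed(x):
    """Nearest raw fixed-point value for a real number."""
    return round(x * (1 << SCALE_BITS))

def from_fixed(raw):
    """True value represented by a raw fixed-point integer."""
    return raw / (1 << SCALE_BITS)

# Matches the table above: raw 3 means 3 * 1/16.
print(from_fixed(3))   # 0.1875
print(to_fixed(0.25))  # 4
# Precision is constant everywhere: the worst-case rounding error
# is half the scaling factor (1/32), no matter how large the value.
```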

The more bits you spend on the integer part, the greater the range. The more bits you spend on the fractional part, the greater the precision. We can graph this relationship: given a scaling factor, what is the resulting range and precision?

Looking at the first value on the left, for scaling factor \(2^{-16}\) (ie. dedicating 16 bits to the fraction), we get a precision of \(2^{16}\) values per integer, but a range of only \(2^{16}\). Increasing the scaling factor increases the range but decreases the precision.

At scaling factor \(2^{0} = 1\), where the two lines meet, the precision is 1 value per integer and the range is \(2^{32}\) — this is exactly the same as a regular 32-bit integer. In this way, you can think of regular integer types as a special case of fixed point. And we can even use positive scaling factors: for example, with a scaling factor of 2, we double the range but can only represent half the integers in that range (the even integers).

The key takeaway from our analysis of integers and fixed point is that we can trade off range and precision, but given a scaling factor the precision is always constant, regardless of how big or small the values are.

Range/precision for floating-point numbers

Like fixed point, floating-point representations let you trade off range and precision. But unlike fixed point or integers, the precision is proportional to the magnitude of the value.

Floating-point numbers divide the representation into the exponent and the significand (the latter is also called the mantissa or coefficient). The number of bits dedicated to the exponent dictates the range, and the number of bits dedicated to the significand determines the precision.

We will discuss the precise meanings of the exponent and significand in the next installment, but for now we will just discuss the general patterns of range and precision.

Range works a little bit differently in floating-point than in fixed point or integers. Have you ever noticed that FLT_MIN and DBL_MIN in C are not negative numbers like INT_MIN and LONG_MIN? Instead they are very small positive numbers:
#define FLT_MIN     1.17549435E-38F
#define DBL_MIN     2.2250738585072014E-308
Why is this?

The answer is that floating point numbers, because they are based on exponents, can never actually reach zero or negative numbers "natively". Every time you decrease the exponent you get closer to zero but you can never actually reach it. So the smallest number you can reach is FLT_MIN for float and DBL_MIN for double. (denormalized numbers can go smaller, but they are considered special-case and are not always enabled. FLT_MIN and DBL_MIN are the smallest normalized numbers.)

You may protest that float and double can clearly represent zero and negative numbers, and this is true, but only because they are special-cased. There is a sign bit that indicates a negative number when set.

And when the exponent and significand are both zero, this is special-cased to be the value zero. (If the exponent is zero but the significand is non-zero, this is a denormalized number; a special topic for another day.)

Put these two special cases together and you can see why positive zero and negative zero are two distinct values (though they compare equal).

Because floating-point numbers are based on exponents, and can never truly reach zero, the range is defined not as an absolute number, but as a ratio between the largest and smallest representable value. That range ratio is entirely determined by the number of bits allotted to the exponent.

If there are \(n\) bits in the exponent, the ratio of the largest to the smallest value is roughly \(2^{2^{n}}\). This is because an \(n\)-bit number can represent \(2^{n}\) distinct values, and since those values are themselves exponents, we raise 2 to that value.

We can use this formula to determine that float has a range ratio of roughly \(2^{256}\), and double has a range ratio of roughly \(2^{2048}\). (In practice the ranges are not quite this big, because IEEE floating point reserves a few exponents for zero and NaN).

This alone doesn't say what the largest and smallest values actually are, because the format designer gets to choose what the smallest value is. If FLT_MIN had been chosen as \(2^{0} = 1\), then the largest representable value would be \(2^{256} \approx 10^{77}\).

But instead FLT_MIN was chosen as \(2^{-126} \approx 10^{-37}\), and FLT_MAX is \(\approx 2^{128} \approx 3.4 \times 10^{38}\). This gives a true range ratio of \(\approx 2^{254}\), which roughly lines up with our previous analysis that yielded \(2^{256}\) (reality is a bit smaller because two exponents are stolen for special cases: zero and NaN/infinity).

What about precision? We have said several times that the precision of a floating-point value is proportional to its magnitude. So instead of saying that a number is precise to the nearest integer (like we do for integer formats), we say that a floating-point value is precise to \(X\%\) of its value. Consider our sample from before, an imaginary floating-point format with a two-bit significand: between 4 and 8 the representable values are 4, 5, 6, and 7, so adjacent values are 1 apart.

At the low end of each power of two, the precision is always 25% of the value (the gap of 1 is 1/4 of the value 4). At the high end, the gap of 1 is closer to 1/8 of the value, or 12.5%.

So for a two-bit significand, the precision is always between 12.5% and 25% of the value. We can generalize this and say that for an \(n\)-bit significand, the precision is between \(1/2^{n}\) and \(1/2^{n+1}\) of the value (ie. between \(\frac{100}{2^{n}}\%\) and \(\frac{100}{2^{n+1}}\%\) of the value). But since \(1/2^{n}\) is the worst case, we'll talk about that because that's the figure you can count on.

We have finally explored enough to be able to fully compare/contrast fixed-point and integer values with floating point!

fixed point — range: scalar (high − low), \(2^{n} \times \text{scaling factor}\); precision: absolute, equal to the scaling factor

floating point — range: ratio (high / low), \(2^{2^{n}}\); precision: relative (X%), \(\frac{100}{2^{n}}\%\) (worst case)

If we apply these formulas to single-precision floating point vs. 32-bit unsigned integers, we get:

32-bit unsigned integer — range: \(2^{32}\) (scalar); precision: 1

single-precision floating point — range: \(2^{256}\) (ratio); precision: 0.00001% (worst case)

Practical trade-offs between fixed/floating point

Let's step back for a second and contemplate what all this really means, for us humans here in real life as opposed to abstract-math-land.

Say you're representing lengths in kilometers. If you choose a 32-bit unsigned integer, the shortest length you can measure is 1 kilometer, and the longest length you can measure is 4,294,967,295 km (measured from the Sun, this is nearly the distance to Neptune).

On the other hand, if you choose a single-precision float, the shortest length you can measure is \(10^{-26}\) nanometers — a length so small that a single atom's radius is \(10^{24}\) times greater. And the longest length you can measure is \(10^{25}\) light years.

The float's range is almost unimaginably wider than the int32. And what's more, the float is also more accurate until we reach the magic inflection point of \(2^{24}\) that we have mentioned several times in this article.

So if you choose int32 over float, you are giving up an unimaginable amount of range, and precision in the range \([0, 2^{24}]\), all to get better precision in the range \([2^{24}, 2^{32}]\). In other words, the int32's sole benefit is that it lets you talk about distances greater than 16 million km to kilometer precision. But how many instruments are even that accurate?

So why does anyone use fixed point or integer representations?

To turn things around, think about time_t. time_t is a type defined to represent the number of seconds since the epoch of 1970-01-01 00:00 UTC. It has traditionally been defined as a 32-bit signed integer (which means that it will overflow in the year 2038). Imagine that a 32-bit single-precision float had been chosen instead.

With a float time_t, there would be no overflow until the year 5395141535403007094485264579465 AD, long after the Sun has swallowed up the Earth as a Red Giant, and turned into a Black Dwarf. However! With this scheme the granularity of timekeeping would get worse and worse the farther we got from 1970. Unlike the int32 which gives second granularity all the way until 2038, with a float time_t we would already in 2014 be down to a precision of 128 seconds — far too coarse to be useful.

So clearly floating point and fixed point / integers all have a place. Integers are still ideal for when you are counting things, like iterations of a loop, or for situations like a time counter where you really do want a constant precision over its range. Integer results can also be more predictable since the precision doesn't vary based on magnitude. For example, integers will always hold the identity x + 1 - 1 == x, as long as x doesn't overflow. The same can't be said for floating point.


There is more still to cover, but this article has grown too long already. I hope this has helped build your intuition for how floating point numbers work. In the next article(s) in the series, we'll cover: the precise way in which the value is calculated from exponent and significand, fractional floating point numbers, and the subtleties of printing floating-point numbers.

Sunday, June 22, 2014

Beware of Lua finalizers in C modules!

If you write Lua C modules, there is a corner case surrounding finalizers (ie. __gc metamethods) that you should be aware of. I haven't run into a public and prominent warning about this, so I decided to write it up in this entry. I only discovered it myself through trial and error.

Lua provides the __gc metamethod for releasing resources associated with a userdata. It is commonly used in C modules to free any C-level objects or resources that were allocated by a type.
#include "lauxlib.h"
#include "mytype.h"

#if LUA_VERSION_NUM == 501
#define setfuncs(L, l) luaL_register(L, NULL, l)
#define luaL_setmetatable(L, name) \
  luaL_getmetatable(L, name); \
  lua_setmetatable(L, -2)
#elif LUA_VERSION_NUM == 502
#define setfuncs(L, l) luaL_setfuncs(L, l, 0)
#endif

static const char *MYTYPE = "mytype";

typedef struct {
  mytype_t *val;
} MyTypeWrapper;

static int newobj(lua_State *L) {
  MyTypeWrapper *obj = lua_newuserdata(L, sizeof(MyTypeWrapper));
  obj->val = mytype_new();
  luaL_setmetatable(L, MYTYPE);
  return 1;
}

static int call(lua_State *L) {
  MyTypeWrapper *obj = luaL_checkudata(L, 1, MYTYPE);
  /* Use obj->val here. */
  (void)obj;
  return 0;
}

static int gc(lua_State *L) {
  MyTypeWrapper *obj = luaL_checkudata(L, 1, MYTYPE);
  /* Free the C-level resource (assuming a matching mytype_free()). */
  mytype_free(obj->val);
  return 0;
}

static const struct luaL_Reg mm[] = {
  {"__gc", gc},
  {"__call", call},
  {NULL, NULL}
};

int luaopen_ext(lua_State *L) {
  luaL_newmetatable(L, MYTYPE);
  setfuncs(L, mm);
  lua_pushcfunction(L, &newobj);
  return 1;
}
It turns out this will work nearly all of the time. But there is one very unusual corner case that can reliably cause gc() to run before call()! This program can trigger this behavior on both Lua 5.1 and Lua 5.2 (and LuaJIT):
local ext = require "ext"

if _VERSION >= 'Lua 5.2' then
  function defer(fn)
    setmetatable({}, { __gc = fn })
  end
else
  function defer(fn)
    getmetatable(newproxy(true)).__gc = fn
  end
end

local y = {}
defer(function() y[1]() end)
y[1] = ext()
Basically, any Lua code that runs inside a __gc metamethod can get access to a userdata that has already been finalized! This can crash your C extension if you don't handle this case.

There are two main solutions to this problem:
  1. Clear the userdata's metatable inside the finalizer. This will ensure that the userdata fails any subsequent luaL_checkudata() check later. The downside is that the error message for trying to call a method on a finalized value will be very unhelpful (something like "attempt to index field '?' (a userdata value)").
  2. Set a "finalized" flag inside the finalizer, and check this flag right after calling luaL_checkudata(). For example, you could set the pointer to NULL and check for this. This gives you the benefit of being able to return a custom error message like "attempted to call into dead object."
Here is an example of what the second solution might look like:
static MyTypeWrapper *mytype_check(lua_State *L, int narg) {
  MyTypeWrapper *obj = luaL_checkudata(L, narg, MYTYPE);
  if (!obj->val) luaL_error(L, "called into dead object");
  return obj;
}

static int call(lua_State *L) {
  MyTypeWrapper *obj = mytype_check(L, 1);
  /* obj->val is guaranteed to be alive here. */
  (void)obj;
  return 0;
}

static int gc(lua_State *L) {
  MyTypeWrapper *obj = mytype_check(L, 1);
  mytype_free(obj->val);
  // The critical step that will prevent us from allowing
  // a call into a dead object.
  obj->val = NULL;
  return 0;
}
For a bit more discussion about this, see this lua-l thread where I raised the issue.

The moral of the story is: anytime you are using __gc from a C module, you need to handle the case where the finalizer gets called before other methods. Otherwise a user could SEGV your module.

Saturday, February 22, 2014

On "The Future of JavaScript MVC Frameworks"

This entry is a sort of "part 2" to my previous entry React Demystified. Now that I understand React better, I want to take a closer look at the blog post that motivated this quest to begin with, The Future of JavaScript MVC Frameworks. That entry advocates a vision for what MVC's of the future will look like in JavaScript, and is accompanied by some strong benchmark numbers.

The article's overall argument is that the design/architecture of most JavaScript MVC libraries today makes them slow. While they can be optimized to improve performance, a much better approach (according to the article) is to change the design of the MVC library to something that is inherently faster; a design where the optimizations fall out of the design "for free". A design that is fast by default. This is the claim that I want to take a deeper look at and evaluate in detail.

I want to go deeper because the form of the argument -- the idea that a fundamentally different design can render a lot of busy-work obsolete -- is one that resonates with me. But there are a lot of aspects of this that are all mashed together a bit in the article, so I want to break them apart and look at them one by one.

React/Om vs. Backbone.js benchmarks

The article begins with some benchmarks from TodoMVC, a very cool web site that creates the same little TODO application in a ton of different JavaScript MVC frameworks. This is not only a great body of example code, but it has benchmarks for doing performance comparisons between the frameworks.

The article starts by noting that the React/Om library (which the author also wrote) is 2-4x faster than Backbone in "Benchmark 1" of TodoMVC, and about 800x faster than Backbone for Benchmark 2. This is followed with a profiling graph that shows Backbone making a ton of short-running function calls, where React/Om makes many fewer calls that run longer.

From what I can tell, it appears that these benchmark numbers are mainly illustrating the difference between doing direct DOM updates (as in the Backbone example) and using a library like React that batches DOM updates. I suspect that these benchmarks have little to do with Om and everything to do with React.

In particular, "Benchmark 2" (the one with the 800x speedup vs Backbone) is effectively a no-op on the DOM, so none of Om's custom shouldComponentUpdate() optimizations are coming into play here. The only thing that Om seems to be possibly contributing performance-wise to this benchmark is that it uses requestAnimationFrame() instead of the default React batching strategy, but this can be done easily with plain React too: here is a github project to do it. Unfortunately the React example for TodoMVC doesn't implement the benchmarks, but if it did I suspect the performance would be almost identical to the React/Om numbers.

The author addresses this point a moment later:
Of course you can use Backbone.js or your favorite JS MVC with React, and that's a great combo that delivers a lot of value. However, I'll go out on a limb and say I simply don't believe in event-oriented MVC systems - the flame graph above says it all. Decoupling the models from the views is only the first important step.
The argument here is that even if you use React with Backbone, Backbone is still based around change events (ie. a "push model"). Using Backbone together with React means calling React's forceUpdate() whenever a Backbone change handler fires. The author argues that the flame graph from before illustrates that making lots of little function calls whenever there is a change event is slow.

I'm not sure I buy this argument. The flame graph from before is significant because it shows lots of DOM updates. It illustrates that Backbone performs all of its DOM updates eagerly, and DOM updates are known to be one of the biggest application bottlenecks in JavaScript applications. React is fast because it aggressively batches and minimizes its DOM updates. Having Backbone fire a bunch of change handlers that call React's forceUpdate() function a lot isn't slow, because React will still wait a while to actually re-render everything and update the DOM.

Om and Immutable Data Structures

The next section describes a lot of points that are more Om-specific. Om is a ClojureScript interface to React built on immutable data structures: a structure is never mutated once created, which means that the contents of the entire tree are captured in the identity of the object root. Or in simpler terms: if you are passed an object that is the same object (by identity) that you saw earlier, you can be assured that it hasn't changed in the meantime. This pattern has a lot of nice properties, but it also generates more garbage than mutable objects.

React/Om takes advantage of this by implementing React's shouldComponentUpdate() call to optimize away render() calls (and related diff-ing) when the objects are the same. When objects are the same they are known to have the same value, which is how we know that the optimization is safe. This is particularly important for React/Om, because unlike React with mutability (ie. setState()) React/Om completely refreshes the tree from the root every time. So React/Om's shouldComponentUpdate() will return true for the root and for any paths to nodes which have changed, but can prune away diffs for any subtrees that have not changed.

I think it's worthwhile to note that React/Om does not necessarily do less work than if you used React with setState(). If you called setState() on an interior node of the virtual DOM, you would only trigger a re-render of that one component. But with React/Om, you will trigger a re-render of all components between the root and the changed component. The final DOM update will be the same in both cases, but React will have had to do more work to discover that. It's a small amount more work (and in other cases the setState() approach will require more work, possibly a lot more if you're careless), but it was interesting and unexpected to find that the immutable approach isn't a strict improvement over the mutable one.

The article goes on to describe several other benefits of the functional-style immutable data structures. The biggest one as I see it is "undo for free" -- this is an undeniably powerful pattern of immutable data structures.


The article's broad claims that current JavaScript MVCs are inherently slow made a big impression on me. But when I examined the article's substance in more depth, I was not fully convinced of this. To me most of the claimed improvements boil down to: updating the DOM is slow, so use a framework that batches updates to it. React does this, and I have heard that Angular does batching to some extent too. After looking at this, I'm not convinced that immutable data structures are inherently better-performing in the browser than mutable ones. And it is hard to imagine convenient two-way data binding being built on top of immutable data structures.

That said, there is a lot to like about immutable data structures, particularly their snapshot/undo capabilities.

React Demystified

This entry will be a bit of a departure from the usual content of this blog, which is mostly about parsing and low-level programming. Lately I've had some interest in JavaScript frameworks, including Facebook's React. Some recent articles I have read, particularly The Future of JavaScript MVC Frameworks, have convinced me that there are some deep and powerful ideas in React, but none of the articles or documentation I could find explained the core abstractions in a way that satisfied me. Much like my previous article LL and LR Parsing Demystified, this article is an attempt to explain the core ideas in a way that makes sense to me.

The 1000-Foot View

In a traditional web app, you interact extensively with the DOM, usually using jQuery:

I made the DOM red because updating the DOM is expensive. Now sometimes the "App" will have model classes that it uses internally to represent state, but for our purposes that is an implementation detail that is internal to the app.

React's primary goal is to provide a different and more efficient way of performing DOM updates. Instead of mutating the DOM directly, your app builds a "virtual DOM", and React handles updating the real DOM to match:

How does introducing an extra layer make things faster? Doesn't that imply that the browsers have sub-optimal DOM implementations, if adding a layer on top of them can speed them up?

It would mean that, except that the virtual DOM has different semantics than the real DOM. Most notably, changes to the virtual DOM are not guaranteed to take effect immediately. This allows React to wait until the end of its event loop before it even touches the real DOM at all. At that point it calculates a nearly-minimal diff and applies it to the real DOM in as few steps as possible.

Batching DOM updates and applying minimal diffs are things that an application could do on its own. Any application that did this would be as efficient as React. But doing this manually is tedious and error-prone. React handles that for you.


I mentioned that the virtual DOM has different semantics than the real DOM, but it also has a noticeably different API. The nodes in the DOM tree are elements, but the nodes of the virtual DOM are a completely different abstraction called components.

The use of components is very important to React, because components are designed to make calculating the DOM diff much more efficient than the O(n^3) that the fully general tree-diff algorithm would cost.

To find out why, we'll have to dig in to the design of components a bit. Let's take the React "Hello, World" example from their front page:
/** @jsx React.DOM */
var HelloMessage = React.createClass({
  render: function() {
    return <div>Hello {this.props.name}</div>;
  }
});

React.renderComponent(<HelloMessage name="John" />, mountNode);
There is an awful lot going on here that isn't entirely explained. Even this short example illustrates some big ideas, so I'm going to take some time here and go slow.

This example creates a React component class "HelloMessage", then creates a virtual DOM with one component (<HelloMessage>, essentially an "instance" of the HelloMessage class) and "mounts" it onto the real DOM element mountNode.

The first thing to notice is that the React virtual DOM is made up of custom, application-defined components (in this case <HelloMessage>). This is a significant departure from the real DOM where all of the elements are browser built-ins like <p>, <ul>, etc. The real DOM carries no application-specific logic; it is just a passive data structure that lets you attach event handlers. The React virtual DOM, on the other hand, is built from application-specific components that can carry application-specific APIs and internal logic. This is more than a DOM-updating library; it is a new abstraction and framework for building views.

As a side note: If you've been keeping up with all things HTML you may know that HTML custom elements may be coming to browsers soon. This will bring to the real DOM a similar capability: defining application-specific DOM elements with their own logic. But React has no need to wait for official custom elements because the virtual DOM isn't a real DOM. This allows it to jump the gun and integrate features similar to custom elements and Shadow DOM before browsers add those features to the real DOM.

Getting back to our example, we have established that it creates a component called <HelloMessage> and "mounts" it on mountNode. I want to diagram this initial situation in a couple of ways. First let's visualize the relationship between the virtual DOM and the real DOM. Let's assume that mountNode is the document's <body> tag:

The arrow indicates that the virtual element is mounted on the real DOM element, which we'll see in action shortly. But let's also take a look at the logical illustration of our application's view right now:

That is to say, our entire web page's content is represented by our custom component <HelloMessage>. But what does a <HelloMessage> look like?

The rendering of a component is defined by its render() function. React does not say exactly when or how often it will call render(); it only promises to call it often enough that the rendering stays up to date. Whatever you return from your render() method represents how your view should look in the real browser DOM.

In our case, render() returns a <div> with some content in it. React calls our render() function, gets the <div>, and updates the real DOM to match. So now the picture looks more like this:

It doesn't just update the DOM though; it remembers what it updated it to. This is how it will perform fast diffs later.

I glossed over one thing, which is how a render() function can return DOM nodes. This is obscured by the JSX which isn't plain JavaScript. It's instructive to see what this JSX compiles to:
/** @jsx React.DOM */
var HelloMessage = React.createClass({displayName: 'HelloMessage',
  render: function() {
    return React.DOM.div(null, "Hello ", this.props.name);
  }
});

React.renderComponent(HelloMessage( {name:"John"} ), mountNode);
Aha, so what we're returning aren't real DOM elements, but virtual DOM equivalents (like React.DOM.div) of real DOM elements. So the React virtual DOM really has no true DOM nodes.

Representing State and Changes

So far I've left out a huge piece of the story, which is how a component is allowed to change. If a component wasn't allowed to change, then React would be nothing more than a static rendering framework, similar to a plain templating engine like Mustache or HandlebarsJS. But the entire point of React is to do updates efficiently. To do updates, components must be allowed to change.

React models its state as a state property of the component. This is illustrated in the second example on the React web page:
/** @jsx React.DOM */
var Timer = React.createClass({
  getInitialState: function() {
    return {secondsElapsed: 0};
  },
  tick: function() {
    this.setState({secondsElapsed: this.state.secondsElapsed + 1});
  },
  componentDidMount: function() {
    this.interval = setInterval(this.tick, 1000);
  },
  componentWillUnmount: function() {
    clearInterval(this.interval);
  },
  render: function() {
    return (
      <div>Seconds Elapsed: {this.state.secondsElapsed}</div>
    );
  }
});

React.renderComponent(<Timer />, mountNode);
The callbacks getInitialState(), componentDidMount(), and componentWillUnmount() are all invoked by React at appropriate times, and their names should pretty clearly give away their meanings given the concepts we have explained so far.

So the basic assumptions behind a component and its state changes are:
  1. render() is only a function of the component's state and props.
  2. the state does not change except when setState() is called.
  3. the props do not change except when our parent re-renders us with different props.
(I did not explicitly mention props before, but they are the attributes passed down by a component's parent when it is rendered.)

So earlier when I said that React would call render "often enough", that means that React has no reason to call render() again until somebody calls setState() on that component, or it gets re-rendered by its parent with different props.

We can put all of this information together to illustrate the data-flow when the app initiates a virtual DOM change (for example, in response to an AJAX call):

Getting Data from the DOM

So far we have only talked about propagating changes to the real DOM. But in a real application, we'll want to get data from the DOM also, because that is how we receive all input from the user. To see how this works, we can examine the third example on the React home page:
/** @jsx React.DOM */
var TodoList = React.createClass({
  render: function() {
    var createItem = function(itemText) {
      return <li>{itemText}</li>;
    };
    return <ul>{this.props.items.map(createItem)}</ul>;
  }
});
var TodoApp = React.createClass({
  getInitialState: function() {
    return {items: [], text: ''};
  },
  onChange: function(e) {
    this.setState({text: e.target.value});
  },
  handleSubmit: function(e) {
    e.preventDefault();
    var nextItems = this.state.items.concat([this.state.text]);
    var nextText = '';
    this.setState({items: nextItems, text: nextText});
  },
  render: function() {
    return (
      <div>
        <TodoList items={this.state.items} />
        <form onSubmit={this.handleSubmit}>
          <input onChange={this.onChange} value={this.state.text} />
          <button>{'Add #' + (this.state.items.length + 1)}</button>
        </form>
      </div>
    );
  }
});
React.renderComponent(<TodoApp />, mountNode);
The short answer is, you handle DOM events manually (as with the onChange() handler in this example), and your event handler can call setState() to update the UI. If your app has model classes, your event handlers will probably want to update your model appropriately and also call setState() so React also knows there were changes. If you've gotten used to frameworks that provide automatic two-way data binding, where changes to your model are automatically propagated to the view and vice versa, this may seem like a step backwards.

There is more to this example than meets the eye though. Despite how this example may look, React will not actually install an "onChange" handler on the <input> element on the real DOM. Instead it installs handlers at the document level, lets events bubble up, and then dispatches them into the appropriate element of the virtual DOM. This gives benefits such as speed (installing lots of handlers on the real DOM can be slow) and consistent behavior across browsers (even on browsers that have non-standard behavior for how events are delivered or what properties they have).

So putting all of this together, we can finally get a full picture for the data flow when a user event (ie. a mouse click) results in a DOM update:


I learned a lot about React by writing this entry. Here are my primary takeaways.

React is a view library. React doesn't impose anything about your models. A React component is a view-level concept and a component's state is just the state of that portion of the UI. You could bind any sort of model library to React (though certain ways of writing the model will make it easier to optimize updates further, as the Om post explains).

React's component abstraction is very good at pushing changes to the DOM. The component abstraction is principled, composes well, and efficient DOM updates fall out of the design.

React components are less convenient for getting updates from the DOM. Writing event handlers gives React a distinctly lower-level feel than libraries that automatically propagate view changes into the model.

React is a leaky abstraction. Most of the time you will program only to the virtual DOM, but sometimes you need to escape this and interact with the real DOM directly. The React docs talk more about this and the cases where this is necessary in their Working With the Browser section.

With my new knowledge I am inclined to more closely examine the claims made in the article The Future of JavaScript MVC Frameworks, but that is a slightly different topic that will have to wait for another entry.

I am not an expert in React, so kindly let me know of any mistakes.