Saturday, August 1, 2015

Monads Demystified

With this entry, I am going where many bloggers have gone before. Writing a monad tutorial that makes sense to you (as the writer) but to nobody else seems to be a rite of passage. Silly monad tutorials are so overdone that there's even a running joke about how Monads Are Like Burritos.

So despite the fact that Monads are not a good choice as a topic for your first Haskell blog entry, I'm going to be a little audacious here and make Monads the topic of my first Haskell blog entry. My previous "X Demystified" entries (React Demystified, LL and LR Parsing Demystified) have been well-received, which leads me to believe that I am good at explaining things. I have to apologize to Haskell people, as I'll describe some things by way of analogy with C++ and use some C++-like notation, since it's how I think. Did I mention I've written about 10 lines of Haskell in my life total? Which is fewer than the number of monad tutorials and articles I've studied, which is a lot. Anyway, here goes.

Monads are a Design Pattern

Our first task is to understand what a monad even is. Is a monad a value? Is it a type? Is it a language feature? Is it an I/O library? Most explanations either throw a mathematical construction at you or start showing you examples of monads. But neither of these helps you classify what a monad even is.

The answer is actually pretty simple. Monads are a design pattern. In fact, it would probably be more clear to talk about monads in terms of "the Monad pattern" in the same way you'd talk about design patterns like Factory or Visitor. Any type you create that follows the monadic pattern and obeys a few key rules around that pattern is a monad.

It's really important to understand that the pattern is what makes a monad a monad. This pattern can be used for a very wide range of things. It can be used to model side-effects and I/O in Haskell, but many monads have nothing to do with side-effects or I/O. This is one of the things that makes monads confusing to learn about. Whenever you are tempted to take something you learned about a monad and generalize it, you can often find a counterexample of a monad for which that thing is not true. There is an entire web page dedicated to listing a bunch of incorrect generalizations about monads. That page explains at length what monads are not. I am going to try and explain what monads are.

The best analogy I can think of is operator overloading in C++. People overload C++ operators to do everything from matrix multiplication to string concatenation to constructing a parser. Saying "monads are about side effects" would be like saying "operator overloading is about matrix multiplication," which isn't true in general: lots of overloaded operators have nothing to do with matrix multiplication.

Monads, like overloaded operators, also have some identities which you should follow. With overloaded operators you should make sure that x = x + y has the same effect as x += y, regardless of what the + operator actually does. The monad pattern has a similar set of rules that need to be followed for things to make sense and compose correctly.

The Monad Abstraction: bind ("and then")

Let's continue the analogy with overloading += in C++. In C++, x += y means "add y to x", whatever "add" means in your context (numerical addition, matrix addition, string concatenation, etc). So what is the basic semantic of a monad?

The basic abstraction of a monad is "bind", written x >>= f. Bind means, loosely, "and then f". Basically a monad lets you keep adding functions to a chain of operations. But the monad gets to decide what "and then f" means for that monad. For this reason, monads are often called "programmable semicolons." As a monad author, you get to put your own custom logic between each step of the sequence.

(Note for the pedantic: some people would interrupt me here and argue that Monads are not about ordering/sequencing. I would respectfully disagree. Just because a monad can be implemented to be commutative doesn't take away from the fundamentally sequence-oriented nature of the abstraction. Later functions in the sequence have access to more value bindings than earlier functions. So from a pure data dependency perspective, a sequence is established in the abstraction, even if the functions do not literally need to be evaluated in sequence.)

How the Monad pattern works

We know now that a monad is a pattern, and that the primary abstraction is bind ("and then"). Now we will look at how the pattern actually works.

The pattern is a bit tricky to wrap your head around, but I find it easiest to look at visually:

Starting from the top, we initially have a "monadic value." A monadic value is an instance of the monad's type, so it basically gets to contain whatever state the monad author wanted to put in it. I didn't mention how you get the first monadic value, because that can vary based on the monad and how you're using it. Right now I'm focusing on characteristics that apply to all monads.

The monad's user calls bind() and passes two arguments: the monadic value and a function that will return a monadic value. Note that bind() is a function that was defined by the monad's author and is specific to that monad. Indeed, bind() is the most interesting part of the monad! This is where the monad's author gets to decide what "and then" means for this monad. In particular, note that bind() gets to decide how to call the passed-in function. It can call it once, many times, or not at all! All of these options are valid and used by at least one real monad implementation.

bind() needs to return a monadic value. And notice that the passed-in function also returns a monadic value! So one very common option is for bind() to evaluate the passed-in function and then use its return value as the return value for bind(). If bind() doesn't evaluate the function at all, it will need to make something up for the returned monadic value. If it evaluates the function many times, it may merge their returned values into the output monadic value. There are many options for how to implement bind().
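To make the "call it many times" option concrete, here is a small runnable sketch of how the built-in list monad behaves (bindList is just an illustrative name I'm using here; the real operator is >>=): bind runs the function once per element and merges the results.
-- How list bind behaves: call the function once per element and
-- concatenate the resulting lists into a single output list.
bindList :: [a] -> (a -> [b]) -> [b]
bindList xs f = concat (map f xs)

main :: IO ()
main = print (bindList [1, 2, 3] (\x -> [x, x * 10]))
-- prints [1,10,2,20,3,30], the same as [1, 2, 3] >>= \x -> [x, x * 10]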

There's one other important thing I haven't mentioned yet. The passed-in function takes an argument. The bind() function gets to decide where this argument comes from and how to obtain it. This is again why a monad is called a "programmable semicolon." The monad gets to make a bunch of really interesting decisions at every stage about how to evaluate each function of the sequence, and where its input data comes from. And it gets to carry along whatever data it wants to inside the monadic value. The monad can use this to propagate whatever extra state is desired.

The type of each function's argument is a type parameter of the previous monadic value. If we annotate the previous diagram with type information, it will look more like this:
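In Haskell notation, the relationship that diagram captures shows up directly in the type of the bind operator (this is the standard type from the Prelude):
-- m is the monad's type, and a and b are its type parameters.  The
-- passed-in function's argument type "a" matches the type parameter of
-- the incoming monadic value, and bind returns whatever monadic value
-- (of type "m b") it decides to produce.
(>>=) :: Monad m => m a -> (a -> m b) -> m b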

The most important monad: IO

Having established a lot of general information about monads, it seems about the right time to dive into a specific monad. And of all monads in Haskell, IO is the most important, since it is used to model all side effects. You come into contact with IO immediately when you write your first Haskell program, because Haskell's main function is defined as returning the type IO ().

When we want to understand a new monad, there are two key questions we want to ask ourselves:
  1. what does a monadic value represent in this monad?
  2. what semantic does bind() ("and then") implement in this monad?
For the IO monad, a monadic value IO a (or IO<a> in C++ notation -- a is a type parameter) represents a side-effectful action yielding a value of type a, but which has not been performed yet. And bind() implements the semantic "add this function to the action chain. If/when the function is evaluated, it will know that the operation has been performed and will receive the result (if any) of the operation. The function gets to return an IO value deciding what action should be performed next."

So in Haskell, functions which look like they perform I/O, like putStrLn, don't actually perform I/O but rather return a value representing the not-yet-performed operation.
-- putStrLn takes a string parameter and returns an unperformed I/O
-- action representing printing of that string.  The I/O action
-- yields nothing (since we are printing, not reading input), which is
-- written "()".
putStrLn :: String -> IO ()
The I/O operation is not actually performed unless Haskell determines that the monadic IO value is required for something (remember that Haskell is lazy). The easiest way to do this is to return it from your main function, since Haskell will by definition perform any I/O returned from main. So the main function's type IO () means "a possibly side-effectful action that doesn't return/yield anything." And indeed your main function should perform some kind of side-effect, otherwise the program is effectively a no-op and should be optimized to nothing! Returning a useful IO () action from main is how you make your program do something useful.

That is why this program prints "Hello, world!" (since the IO action is returned from main):
main = putStrLn "Hello, world!"
But this program does nothing (because we don't return the IO action from main):
main = 
  let action = putStrLn "Hello, world!" in
    return ()
The next example also does nothing, because even though we call bind() on the IO value, the monadic value returned from the bind() call is never required, because we don't return it from main:
main =
  let action = putStrLn "Hello, world!" >>= \x -> return () in
    return ()
What does it look like to have a two stage operation, a read followed by a write? Let's use getLine which has this type:
-- getLine takes no parameters and returns an I/O action yielding a string.
getLine :: IO String
We'll need to use bind() to sequence two actions together, and to get the value of the first operation for use in the second. Our two-stage pipeline looks like this:
main =
  getLine >>= \x -> putStrLn ("You typed: " ++ x)
This example clearly illustrates using bind() (ie. >>=) to sequence two IO actions together! This is the fundamental idea of the IO monad. Monadic values represent unperformed I/O operations. And bind() will add a function to the chain, such that it will get the result of the previous I/O operation and select the next operation to be performed. But no operation in the chain will be performed unless it is ultimately part of the I/O action returned by main.

Chaining multiple operations / "do" notation

Let's take our previous example one step farther and have three IO operations in a row, where the third one wants to use the values from both of the first two:
-- Doesn't work -- "x" isn't in scope in the last function.
main =
  getLine >>=
  (\x -> getLine) >>=
  (\y -> putStrLn ("You typed: " ++ x ++ " and then " ++ y))
This example fails to compile, because bind() only gives a function direct access to the result of the directly preceding I/O operation. However, this limitation is easy to work around by nesting the functions we're binding:
main =
  getLine >>=
    \x -> getLine >>=
      \y -> putStrLn ("You typed: " ++ x ++ " and then " ++ y)
This is a little bit mind-bending, but it works because the function on the right of a bind() call has the responsibility to return a monadic value, and a bind() expression is itself a monadic value, so it can serve as that function's body. So here we're using the pattern in a right-associative way instead of a left-associative one. In other words, if you'll allow me to briefly regress into Python-like function notation, instead of saying:
bind(bind(getLine, lambda x: getLine), lambda y: putStrLn)
We're now saying:
bind(getLine, lambda x: bind(getLine, lambda y: putStrLn))
The major advantage of the second form is that later pipeline stages have access to all previous bindings (ie. both x and y), not just the directly preceding one.

As a result, the second form is, functionally speaking, pretty much strictly superior to the first. However it has a drawback from a convenience perspective, which is that each stage creates an extra level of function nesting. If you indent this properly, functions with many stages will start flowing off the right edge of your screen. Because this pattern is so popular, Haskell includes a convenience notation for this function nesting:
main = do
  x <- getLine
  y <- getLine
  putStrLn ("You typed: " ++ x ++ " and then " ++ y)
Just like our manually written-out version, both the x and y bindings are available in later stages of the sequence.

To properly represent this new variation, we should update our previous diagram. The diagram gets a little more crazy with all the nesting, but the key point to notice is that since the second function is nested inside the first, it has access to the first function's argument / variable binding.

The Maybe monad and "return"

Let's look at a different monad to diversify our understanding a bit. Maybe is probably the simplest monad that is actually useful, but is also different from IO in some important ways. Maybe doesn't have anything to do with side-effects: it is completely pure.

I said before that the two important questions to ask when encountering a new monad are: (1) what does a monadic value represent, and (2) what semantic does bind() implement?

For Maybe the answers to these questions are extremely simple. A monadic value Maybe a represents either a value of type a or Nothing, the lack of any value:
data Maybe t = Just t | Nothing
And bind() implements the semantic: "if there is a value, evaluate the function (with that value), otherwise don't evaluate the function at all."
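In fact, the whole thing fits in two lines. This mirrors how the standard library defines >>= for Maybe; I've written it as a standalone bindMaybe function (an illustrative name, not a library function) to keep the snippet self-contained:
-- If there is no value, skip the function entirely and stay Nothing.
-- If there is a value, unwrap it and hand it to the function.
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing
bindMaybe (Just x) f = f x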

So when you chain a bunch of functions together with the Maybe monad, it will evaluate all of them in sequence until one of them returns Nothing. If any function in the sequence returns Nothing then the subsequent functions won't be evaluated at all and the chain will return Nothing.
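Here is a small runnable sketch of that behavior (safeDiv and pipeline are made-up names for illustration, not library functions):
-- Division that signals failure with Nothing instead of crashing.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Two stages chained with bind; a failure anywhere short-circuits the rest.
pipeline :: Int -> Maybe Int
pipeline n =
  safeDiv 100 n >>= \x ->
  safeDiv x 2   >>= \y ->
  Just (x + y)

main :: IO ()
main = do
  print (pipeline 5)   -- Just 30
  print (pipeline 0)   -- Nothing: the first stage failed, so the rest never ran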

But how do you get the whole thing started? With the IO monad, we were obtaining IO values from I/O library functions like getLine. In the case of Maybe though, we want to provide our own value to start the process. For this we could construct such a value by using the Just constructor directly:
getMonadicValue x = Just x
But as a general rule, monads are required to provide a standard function called return that does just this. The job of return (in any monad) is to construct a monadic value out of a plain value. The implementation details of a monadic value are not public, but the return function is a public interface for building a trivial monadic value that simply wraps the given value.

It's a little weird that the function is called return when you can use it to start a monadic pipeline. But it makes more sense when you realize that the function is also useful for returning a monadic value from a function, for example:
add :: Maybe Int -> Maybe Int -> Maybe Int
add mx my = do
  x <- mx
  y <- my
  return (x + y)
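With that definition, add (Just 1) (Just 2) evaluates to Just 3, while add Nothing (Just 2) evaluates to Nothing, because the first bind in the desugared chain short-circuits the rest.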
The monadic laws (which I won't get into in this article) guarantee that return doesn't do anything too fancy, but just wraps the value and yields it unchanged to the next function in the sequence.


There are many more monads than these two to explore, but this article has gotten quite long already. I hope you have found this article useful in your understanding of monads. If you can take one thing away from this article, remember the two questions to ask yourself when encountering a new monad: (1) what does a monadic value represent, and (2) what semantic does bind() implement?

Wednesday, May 20, 2015

Status update on upb and Ruby Protocol Buffers

It has been a long time since I have posted any updates about upb, my implementation of Protocol Buffers. In 2011 I posted the article upb status and preliminary performance numbers, showing that upb's JIT compiler yielded impressive performance results parsing Protocol Buffers, particularly when used as a stream parser. Since then there has been a lot of activity on GitHub, but still no releases, and I haven't posted much about upb or where it's going. It's time for a status update.

On a personal note, I also have exciting news to report: as of last August, I have officially joined the Protocol Buffers team at Google. After five years of being very interested in Protocol Buffers and working on upb as my 20% project, I now work on Protocol Buffers full time (and still work on upb 20%).

As has always been the case, on this blog I speak only for myself: not for my team, or for Google!

But onto upb status updates. A lot has happened since I last wrote. There have been hundreds of commits, and lots of changes, big and small. Obviously I won't cover everything, but I'll start with big-picture news, then get a little more specific, then talk vision for a bit.

haberman/upb is now google/upb

Up until this week, upb's main repository lived on GitHub under my own username. As of two days ago, its home is under the Google organization. As a 20% project, upb has always been under Google copyright. Living under the Google organization will make it easier for Google to ensure that it follows policies surrounding CLA (contributor license agreements) and such.

This also makes upb a bit more "official" as an open-source offering of Google. This will enable Google to incorporate upb into other projects/products, which brings me to my next bit of news.

Google is using upb for the MRI Ruby Protocol Buffers implementation

Though the code has been public on GitHub for months now, I'm very happy to formally announce that the official Google-supported Ruby library for protobufs (on MRI) uses upb for parsing and serialization! The Ruby-specific code was written by my excellent teammate Chris Fallin. The data representation (ie. the Ruby classes representing protobufs) is all Ruby-specific, but under the hood it uses upb for all parsing and serialization.

One of upb's goals from the beginning was to be an ideal implementation for dynamic languages like Ruby, Python, PHP, etc, so it is extremely gratifying to finally see this in action.

When we were evaluating how to implement the Ruby Protocol Buffers library, one of the major questions was whether to implement it in pure Ruby or to use a C extension. Staying pure Ruby has notable portability benefits, but at the end of the day the performance benefit of a C extension was too great to pass up. The micro-benchmark numbers of our extension vs. a couple of pure-Ruby implementations make this clear:
Test payload is a 45-byte protobuf

Ruby Beefcake    (pure-Ruby):                0.7MB/s
Ruby Protobuf    (pure-Ruby):                0.8MB/s
Google Protobuf  (C accelerated with upb):  11.9MB/s
Representative microbenchmarking is notoriously difficult, but the point of this is just to show that C acceleration when it comes to parsing makes a big difference. That is a big part of why I created upb to begin with.

For a lot of use cases, the difference doesn't matter. Lots of people who make remote API calls just need to parse a modestly-sized payload, and it's not in a tight loop or anything. These people probably won't care about the perf difference between Beefcake and Google Protobuf.

But there always end up being people who care. When the payload gets big enough, or the loop gets tight enough, a 10x difference is huge. We see this at Google all the time. And just witness the amount of effort that people have put into optimizing JSON parsing -- for example, the oj (Optimized JSON) library for Ruby. We wanted to be sure that the protobuf library would be at least speed-competitive with the (pretty fast) built-in JSON library that comes with Ruby. And in that we have succeeded -- in my one silly test, we are roughly speed-competitive with the JSON library (a little faster or a little slower, depending on how you measure).

This doesn't help JRuby users unfortunately, but I am very happy that open-source contributors have come forward to implement an API-compatible version of the library for JRuby by wrapping the Protocol Buffers Java implementation. So both MRI and JRuby can get fast parsers, and code should be portable between the two. The cost is having to maintain two implementations under the hood.

By the way, the Ruby Protocol Buffers library uses upb, but it doesn't use upb's JIT at all for parsing. The numbers above reflect upb's pure-C, bytecode-based decoder. There is no point in enabling the JIT, because the actual parsing represents such a small portion of the overall CPU cost of parsing from Ruby -- upb by itself can parse at more than 10x the speed of the Ruby extension, even with the pure C parser. The vast majority of the time is spent in the Ruby interpreter creating/destroying objects, and other Ruby-imposed overhead. Enabling the JIT would barely make a measurable difference. This is nice because avoiding the JIT here means we avoid the portability constraints of the JIT.

JSON support

The Ruby implementation of Protocol Buffers is part of a larger effort known as "proto3" (see here and here). I won't go too deeply into what proto3 is about because that would be its own blog entry. But one important element of the proto3 story is that JSON support is becoming first-class. Protobuf binary format and JSON format are both options for sending payloads. If you want a human-readable payload, send JSON -- proto3 libraries will support both.

Since Ruby-Protobuf uses upb for all of its parsing/serialization, that must mean that upb now supports JSON, and indeed it does!

Everything in upb is stream-based; this means that upb supports JSON<->protobuf transcodes in a totally streaming fashion. Internally the protobuf encoder has to buffer submessages, since Protocol Buffer binary format specifies that submessages on the wire are prefixed by their length. But from an API perspective, it is all streaming. (It would be possible to write a version of the encoder that avoids the internal buffering by doing two passes, but that is an exercise for a different day).

Protocol Buffers and JSON, next the world?

I had an epiphany at some point, which was that upb could be a more generalized parsing framework, not limited to just Protocol Buffers.

upb was inspired by SAX, the Simple API for XML. The basic idea of SAX is that you have a parser that calls callbacks instead of building a data structure directly. This model makes for fast and flexible parsers, and has been imitated by other parsers such as YAJL (for JSON).

So upb supports parsing protobufs by letting you register handler functions for every protobuf field. Parsing field A calls handler A, parsing field B calls handler B, and so on.

But what if we could apply this basic idea to the use case of SAX? What if upb could parse XML, and call handlers that are specific to the XML data model? We could use .proto files to define an XML-specific schema. For SAX this might look like:
// Protocol buffer schema representing SAX
message StartElement {
  string element_name = 1;
  map<string, string> attributes = 2;
}

message EndElement {
  string element_name = 1;
}

message ProcessingInstruction {
  string target = 1;
  string data = 2;
}

message XMLContent {
  oneof content {
    StartElement start = 1;
    EndElement end = 2;
    bytes characters = 3;
    bytes ignorable_whitespace = 4;
    string skipped_entity = 5;
    ProcessingInstruction processing_instruction = 6;
  }
}

message XMLDocument {
  repeated XMLContent content = 1;
}
Now we can offer basically the same API as SAX, except the set of handlers is defined as a protobuf schema, instead of being hard-coded into the library.

What if we try the same idea with JSON?
message JsonArray {
  repeated JsonValue value = 1;
}

message JsonObject {
  map<string, JsonValue> properties = 1;
}

message JsonValue {
  oneof value {
    bool is_null = 1;
    bool boolean_value = 2;
    string string_value = 3;

    // Multiple options for numbers, since JSON doesn't specify
    // range/precision for numbers.
    double double_value = 4;
    int64  int64_value = 5;
    string number_value = 6;

    JsonObject object_value = 7;
    JsonArray array_value = 8;
  }
}
Now we can offer the same API as YAJL.

Or take HTTP, according to the handlers specified in http-parser:
message HTTPHead {
  uint32 http_major = 1;
  uint32 http_minor = 2;
  uint32 status_code = 3;

  enum Method {
    DELETE = 1;
    GET = 2;
    HEAD = 3;
    PUT = 4;
    POST = 5;
    // ...
  }

  Method method = 4;
  string url = 5;

  map<string, string> headers = 6;
}

message HTTPBody {
  repeated bytes chunk = 1;
}

message HTTP {
  HTTPHead head = 1;
  HTTPBody body = 2;
}
Do you see where this is going? SAX, YAJL, and http-parser are great APIs. upb seeks to be a generalization of streaming parser APIs, by using a Protocol Buffer schema to define the set of handlers for each format.

You may be tempted to say "bloat." But extending upb in this way doesn't make the core any bigger than if it just supported Protocol Buffers. The core library already contains all the functionality for registering handlers to match a protobuf schema. We aren't growing the core library at all to enable these other use cases.

And coming from the other direction, the core library of upb is only about 25kb of object code. It just contains data structures for representing schemas and handlers. For many libraries, like libxml or expat, 25kb is not a very noticeable overhead. For some exceptionally small libraries like YAJL or http-parser, 25kb would be a noticeable overhead, but in most cases the overhead is acceptable.

And what does this abstraction buy you? An easy way to expose your parser to multiple languages.

Take Ruby. Since the Google Protocol Buffers library for Ruby is implemented using upb, all of the handlers already exist to read and write Protocol Buffers objects. So if you had an HTTP parser that used upb handlers according to the handler schema above, it would be trivial to parse some HTTP into an HTTP Protocol Buffers object in Ruby. You can use the Protocol Buffer object as your parse tree / in-memory representation! Once you do that you can easily serialize it as a binary Protocol Buffer, or as JSON, if you want to preserve its parsed form. You could easily write a filter program in Ruby that iterates over a bunch of HTTP sessions and pulls out the URL. You have all these capabilities for how you can use the parser in Ruby, but you've had to do barely any Ruby-specific work to expose the parser to Ruby.

If you keep going down this path, you realize that this same idea could work for streaming string->string transformations (like gzip or SSL), and even for streaming aggregations like HyperLogLog. And the ultimate dream is that this all can compose nicely.

This is the vision which I am constantly working towards. But I'm also grounded by the need to deliver tangible things like something that Ruby and other language implementations can use right now.

Back to Planet Earth: Concrete Status

All of those lofty ideas are nice, but the devil is in the details. Here is the status of some of those details, in super-brief form:
  1. protobuf encoder and decoder are fully complete (except for vending unknown fields, which I'd like to add at some point)
  2. protobuf decoder (both pure-C and JIT) are completely rewritten. Pure-C decoder is now bytecode-based, and JIT is a code generator from that bytecode. Both decoders are fully-resumable, for when the input spans multiple buffers.
  3. typed JSON parser and printer are pretty complete, following the JSON encoding conventions of proto3. "Typed JSON" means JSON that follows the structure of a protobuf schema.
  4. upb now has a stable ABI; structure members and sizes are hidden in implementation files.
  5. upb now supports fully-injectable memory allocation and error callbacks for parse-time structures. (Sharable, ahead-of-time structures like schemas and handlers still rely on a global malloc()).
  6. there is now an "amalgamation" build where all upb sources are collapsed into a single .h/.c file pair, to make it easy to drop into another project's build (the Ruby extension uses this).
But there is still a lot to do:
  1. need to split generic JSON encoder from typed JSON encoder. (Generic JSON encoder will use a schema like the above to represent arbitrary JSON, like YAJL, instead of being limited to a structured schema).
  2. need to port to c89. This is a tough one for me. The title of this blog is "parsing, performance, minimalism with c99." But practical experience is just making it clear that we're still in a c89 world if you want maximum portability (and particularly the ability to integrate into other build systems).
  3. need to tweak the handlers model to be even more composable. For example, it should be possible for the JSON encoder/decoder to internally use a generic upb base64 encoder/decoder to put binary data into string fields. It should also be easier to compose pipeline elements. We're not quite to that level of composability yet.
  4. need to implement a parser for .proto files, so we can parse them directly. Right now we can only load binary descriptors.
  5. Expand to more formats! XML, HTTP, CSV, LevelDB log format, YAML, and more!
  6. My most lofty vision, resurrect Gazelle, my parser generator, as an easy way of generating fast parsers that target upb natively. Think yacc/Bison/ANTLR, but more convenient, because you'll be able to easily use Protocol Buffer in-memory objects as your parse tree / AST.
So there is still a lot to do, but it's exciting to see it really working for Ruby already.

The name

As you can see from what I described above, upb is outgrowing its name, since it's no longer just about Protocol Buffers. I've been playing with calling it "Unleaded" instead, as a backronym for upb (get it?), but I can't seem to get this to stick.

Wednesday, October 29, 2014

The Speed of Python, Ruby, and Lua's Parsers

I've run into some cases lately where loading a lot of generated code in Python takes a long time. When I profiled my test case, the hotspot seemed to be the parser itself -- that is, Python's parser that parses .py source files.

I realized that I had no idea how fast the parsers for some of these languages are. As someone who is interested in parsers, this piqued my interest. I was particularly interested to see how much precompiling would help.

To satisfy my curiosity, I wrote a quick little benchmark that tries to get some rough numbers on this.

These are the results I got on my machine:
python               real 0m1.521s user 0m1.184s sys 0m0.328s
ruby                 real 0m0.523s user 0m0.441s sys 0m0.076s
lua                  real 0m0.131s user 0m0.124s sys 0m0.005s
python (precompiled) real 0m0.022s user 0m0.012s sys 0m0.009s
lua (precompiled)    real 0m0.005s user 0m0.002s sys 0m0.003s

Version Information:
Python 2.7.5
ruby 2.1.3p242 (2014-09-19 revision 47630) [x86_64-darwin13.0]
Lua 5.2.3  Copyright (C) 1994-2013, PUC-Rio
My takeaways from this are:
  1. There is a surprising amount of variation here. Python's parser probably has a lot of room for optimization. But the fact that precompiling is available probably reduces the demand for this considerably.
  2. Precompiling helps a lot.
  3. Lua continues to impress.

Friday, October 17, 2014

The overhead of abstraction in C/C++ vs. Python/Ruby

I've been working on some Python and Ruby libraries lately that wrap C extensions. An interesting and important observation came to me as I was doing some of the design.

Please note this is not meant to be any kind of commentary on the relative value of these languages. It's a specific observation that is useful when you are crossing the language barrier and deciding the boundary between what should go in the high-level language vs. what should go in C or C++.

The observation I made is that C and C++ compilers can inline, whereas interpreters for Python and Ruby generally do not.

This may seem like a mundane observation, but what it means is that building abstractions in the high-level language has a noticeable cost, where in C and C++ simple abstractions built around function calls are basically free.

To illustrate, take this Python program:
total = 0

for i in range(1000000):
  total += i

print total
Now suppose we want to abstract this a bit (this is a toy example, but mirrors the structure of real abstractions):
class Adder:
  def __init__(self):
    self.total = 0

  def add(self, i):
    self.total += i

adder = Adder()

for i in range(1000000):
  adder.add(i)

print adder.total
On my machine, the second example is less than half the speed of the first. (The same is true of Ruby when I tried equivalent programs).
$ time python 

real 0m0.158s
user 0m0.133s
sys     0m0.023s
$ time python 

real 0m0.396s
user 0m0.367s
sys     0m0.024s
Compare this with the equivalent first program in C++ (I used "volatile" to prevent the compiler from being too smart and collapsing the loop completely):
#include <stdio.h>

int main() {
  volatile long total = 0;

  for (long i = 0; i < 100000000; i++) {
    total += i;
  }

  printf("%ld\n", total);
  return 0;
}
And the version with the adder abstracted into a class:
#include <stdio.h>

class Adder {
 public:
  Adder() : total(0) {}
  void add(long i) { total += i; }
  volatile long total;
};

int main() {
  Adder adder;

  for (long i = 0; i < 100000000; i++) {
    adder.add(i);
  }

  printf("%ld\n", adder.total);
  return 0;
}
On my machine, not only do they take the same amount of time, they compile into literally exactly the same machine code.

We already know that Python and Ruby are noticeably slower than C and C++ (again, not a dig, the two serve different purposes), which suggests that performance-critical code should go in C or C++. But the extra observation here is that any layers or abstractions in Python or Ruby have an inherent cost, whereas in C or C++ you can layer abstractions much more freely without fear of additional overhead, particularly for functions or classes in a single source file.

Monday, September 15, 2014

What every computer programmer should know about floating point, part 1

The subject of floating-point numbers can strike vague uncertainty into all but the hardiest of programmers. The first time a programmer gets bitten by the fact that 0.1 + 0.2 is not quite equal to 0.3, the whole thing can seem like an inscrutable mess where nothing behaves like it should.

But lying amidst all of this seeming insanity are a lot of things that make perfect sense if you think about them in the right way. There is an existing article called What Every Computer Scientist Should Know About Floating-Point Arithmetic, but it is very math-heavy and focuses on subtle issues that face data scientists and CPU designers. This article ("What every computer programmer should know...") is aimed at the general population of programmers. I'm focusing on simple and practical results that you can use to build your intuition for how to think about floating-point numbers.

As a practical guide I'm concerning myself only with the IEEE 754 floating point formats single (float) and double that are implemented on current CPUs and that most programmers will come into contact with, and not other topics like decimal floating point, arbitrary precision, etc. Also my goal is to build intuition and show the shapes of things, not prove theorems, so my math may not be fully precise all the time. That said, I don't want to be misleading, so please let me know of any material errors!

Articles like this one are often written in a style that is designed to make you question everything you thought you knew about the subject, but I want to do the opposite: I want to give you confidence that floating-point numbers actually make sense. So to kick things off, I'm going to start with some good news.

Integers are exact! As long as they're not too big.

It's true that 0.1 + 0.2 != 0.3. But this lack of exactness does not apply to integer values! As long as they are small enough, floating point numbers can represent integers exactly.
1.0 == integer(1) (exactly)
5.0 == integer(5) (exactly)
2.0 == integer(2) (exactly)
This exactness also extends to operations over integer values:
1.0 + 2.0 == 3.0 (exactly)
5.0 - 1.0 == 4.0 (exactly)
2.0 * 3.0 == 6.0 (exactly)
Mathematical operations like these will give you exact results as long as all of the values are integers smaller than \(2^{53}\) (for double) or \(2^{24}\) (for float).

So if you're in a language like JavaScript that has no integer types (all numbers are double-precision floating point), and you have an application that wants to do precise integer arithmetic, you can treat JS numbers as 53-bit integers, and everything will be perfectly exact. Though of course if you do something inherently non-integral, like 8.0 / 7.0, this exactness guarantee doesn't apply.
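If you want to check this yourself, GHC's Double is this same IEEE double-precision format, so a couple of lines of Haskell (just a sketch; the printed formatting is GHC's) show exactly where the exactness ends:
-- A Double has a 53-bit significand, so integers are exact up to 2^53.
main :: IO ()
main = do
  print (2 ^ 53 - 1 :: Double)              -- 9.007199254740991e15, still exact
  print ((2 ^ 53 + 1 :: Double) == 2 ^ 53)  -- True: 2^53 + 1 rounds back to 2^53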

And what if you exceed \(2^{53}\) for a double, or \(2^{24}\) for a float? Will that give you strange dreaded numbers like 16777220.99999999 when you really wanted 16777221?

No — again for integers the news is much less dire. Between \(2^{24}\) and \(2^{25}\) a float can exactly represent half of the integers: specifically the even integers. So any mathematical operation that would have resulted in an odd number in this range will instead be rounded to one of the even numbers around it. But the result will still be an integer.

For example, let's add:
    16,777,216 (2^24)
  +          5
    16,777,221 (exact result)
    16,777,220 (rounded to nearest representable float)
You can generally think of floating point operations this way. It's as if they computed exactly the correct answer with infinite precision, but then rounded the result to the nearest representable value. It's not implemented this way of course (putting infinite precision arithmetic in silicon would be expensive), but the results are generally the same as if it had.
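GHC's Float is likewise IEEE single precision, so the example above can be reproduced directly (again just a sketch; the output formatting is GHC's):
main :: IO ()
main = do
  print (16777216 + 5 :: Float)   -- 1.677722e7: the exact answer 16,777,221
                                  -- rounds to the nearest representable Float
  print (16777216 + 5 :: Double)  -- 1.6777221e7: exact, since 2^53 is far away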

We can also represent this concept visually, using a number line:

The green line represents the addition and the red line represents the rounding to the nearest representable value. The tick marks above the number line indicate which numbers are representable and which are not; because these values are in the range \([2^{24}, 2^{25}]\), only the even numbers are representable as float.

This model can also explain why adding two numbers that differ wildly in magnitude can make the smaller one get lost completely:
    16,777,216
  +          0.0001
    16,777,216.0001 (exact result)
    16,777,216      (rounded to nearest representable float)
Or in the number line model:

The smaller number was not nearly big enough to get close to the next largest representable value (16777218), so the rounding caused the smaller value to get lost completely.

This rounding behavior also explains the answer to question number 4 in Ridiculous Fish's excellent article Will It Optimize? It's tempting to have floating-point anxiety and think that transforming (float)x * 2.0f into (float)x + (float)x must be imprecise somehow, but in fact it's perfectly safe. The same rule applies as our previous examples: compute the exact result with infinite precision and then round to the nearest representable number. Since the x + x and x * 2 are mathematically exactly the same, they will also get rounded to exactly the same value.

So far we've discovered that a float can represent:
  • all integers \([0, 2^{24}]\) exactly
  • half of integers \([2^{24}, 2^{25}]\) exactly (the even ones)

Why is this? Why do things change at \(2^{24}\)?

It turns out that this is part of a bigger pattern, which is that floating-point numbers are more precise the closer they are to zero. We can visualize this pattern again with a number line. This illustration isn't a real floating-point format (it has only two bits of precision, much less than float or double) but it follows the same pattern as real floating-point formats:

This diagram gets to the essence of the relationship between floating point values and integers. Up to a certain point (4 in this case), there are multiple floating point values per integer, representing numbers between the integers. Then at a certain point (here between 4 and 8) the set of floating point and integer values are the same. Once you get larger than that, the floating point values skip some integer values.

We can diagram this relationship to get a better sense and intuition for what numbers floats can represent compared to integers:

This plot is just a continuation of what we've said already. The green dots are boring and only appear for reference: they are saying that no matter how large or small your values are for an integer representation like int32, they can represent exactly one value per integer. That's a complicated way of saying that integer representations exactly represent the integers.

But where it gets interesting is when we compare integers to floats, which appear as red dots. The green and red dots intersect at \(2^{24}\); we've already identified this as the largest value for which floats can represent every integer. If we go larger than this, to \(2^{25}\), then floats can represent half of all integers, (\(2^{-1}\) on the graph), which again is what we have said already.

The graph shows that the trend continues in both directions. For values in the range \([2^{25}, 2^{26}]\), floats can represent 1/4 of all integers (the ones divisible by 4). And if we go smaller, in the range \([2^{23}, 2^{24}]\), floats can represent 2 values per integer. This means that in addition to the integers themselves, a float can represent one value in between each integer, that being \(x.5\) for any integer \(x\).

So the closer you get to zero, the more values a float can stuff between consecutive integers. If you extrapolate this all the way to 1, we see that float can represent \(2^{23}\) unique values between 1 and 2. (Between 0 and 1 the story is more complicated).

Range and Precision

I want to revisit this diagram from before, which depicts a floating-point representation with two bits of precision:

A useful observation in this diagram is that there are always 4 floating-point values between consecutive powers of two. For each increasing power of two, the number of integers doubles but the number of floating-point values is constant.

This is also true for float (\(2^{23}\) values per power of two) and double (\(2^{52}\) values per power of two). For any two powers-of-two that are in range, there will always be a constant number of values in between them.

This gets to the heart of how range and precision work for floating-point values. The concepts of range and precision can be applied to any numeric type; comparing and contrasting how integers and floating-point values differ with respect to range and precision will give us a deep intuition for how floating-point works.

Range/precision for integers and fixed-point numbers

For an integer format, the range and precision are straightforward. Given an integer format with \(n\) bits:
  • every value is precise to the nearest integer, regardless of the magnitude of the value.
  • range is always \(2^{n}\) between the highest and lowest value (for unsigned types the lowest value is 0 and for signed types the lowest value is \(-(2^{n-1})\)).
If we depict this visually, it looks something like:

If you ever come across fixed point math, for example the fixed-point support in the Allegro game programming library, fixed point has a similar range/precision analysis as integers. Fixed-point is a numerical representation similar to integers, except that each value is multiplied by a constant scaling factor to get its true value. For example, for a 1/16 scaling factor:

integer    equivalent fixed point value
1          1 * 1/16 = 0.0625
2          2 * 1/16 = 0.125
3          3 * 1/16 = 0.1875
4          4 * 1/16 = 0.25
16         16 * 1/16 = 1

Like integers, fixed point values have a constant precision regardless of magnitude. But instead of a constant precision of 1, the precision is based on the scaling factor. Here is a visual depiction of a 32-bit fixed point value that uses a 1/16 (\(1/2^{4}\)) scaling factor. Compared with a 32-bit integer, it has 16x the precision, but only 1/16 the range:

The fixed-point scaling factor is usually a fractional power of two (ie. \(1/2^{n}\) for some \(n\)), since this makes it possible to use simple bit shifts for conversion. In this case we can say that \(n\) bits of the value are dedicated to the fraction.

The more bits you spend on the integer part, the greater the range. The more bits you spend on the fractional part, the greater the precision. We can graph this relationship: given a scaling factor, what is the resulting range and precision?

Looking at the first value on the left, for scaling factor \(2^{-16}\) (ie. dedicating 16 bits to the fraction), we get a precision of \(2^{16}\) values per integer, but a range of only \(2^{16}\). Increasing the scaling factor increases the range but decreases the precision.

At scaling factor \(2^{0} = 1\) where the two lines meet, the precision is 1 value per integer and the range is \(2^{32}\) — this is exactly the same as a regular 32-bit integer. In this way, you can think of regular integer types as a generalization of fixed point. And we can even use positive scaling factors: for example with a scaling factor of 2, we can double the range but can only represent half the integers in that range (the even integers).

The key takeaway from our analysis of integers and fixed point is that we can trade off range and precision, but given a scaling factor the precision is always constant, regardless of how big or small the values are.

Range/precision for floating-point numbers

Like fixed-point, floating-point representations let you trade-off range and precision. But unlike fixed point or integers, the precision is proportional to the size of the value.

Floating-point numbers divide the representation into the exponent and the significand (the latter is also called the mantissa or coefficient). The number of bits dedicated to the exponent dictates the range, and the number of bits dedicated to the significand determines the precision.

We will discuss the precise meanings of the exponent and significand in the next installment, but for now we will just discuss the general patterns of range and precision.

Range works a little bit differently in floating-point than in fixed point or integers. Have you ever noticed that FLT_MIN and DBL_MIN in C are not negative numbers like INT_MIN and LONG_MIN? Instead they are very small positive numbers:
#define FLT_MIN     1.17549435E-38F
#define DBL_MIN     2.2250738585072014E-308
Why is this?

The answer is that floating point numbers, because they are based on exponents, can never actually reach zero or negative numbers "natively". Every time you decrease the exponent you get closer to zero but you can never actually reach it. So the smallest number you can reach is FLT_MIN for float and DBL_MIN for double. (denormalized numbers can go smaller, but they are considered special-case and are not always enabled. FLT_MIN and DBL_MIN are the smallest normalized numbers.)

You may protest that float and double can clearly represent zero and negative numbers, and this is true, but only because they are special-cased. There is a sign bit that indicates a negative number when set.

And when the exponent and significand are both zero, this is special-cased to be the value zero. (If the exponent is zero but the significand is non-zero, this is a denormalized number; a special topic for another day.)

Put these two special cases together and you can see why positive zero and negative zero are two distinct values (though they compare equal).

Because floating-point numbers are based on exponents, and can never truly reach zero, the range is defined not as an absolute number, but as a ratio between the largest and smallest representable value. That range ratio is entirely determined by the number of bits alloted to the exponent.

If there are \(n\) bits in the exponent, the ratio of the largest to the smallest value is roughly \(2^{2^{n}}\): the \(n\)-bit exponent field can represent \(2^{n}\) distinct values, and since those values are themselves exponents, we raise 2 to that value.

We can use this formula to determine that float has a range ratio of roughly \(2^{256}\), and double has a range ratio of roughly \(2^{2048}\). (In practice the ranges are not quite this big, because IEEE floating point reserves a few exponents for zero and NaN).

This alone doesn't say what the largest and smallest values actually are, because the format designer gets to choose what the smallest value is. If FLT_MIN had been chosen as \(2^{0} = 1\), then the largest representable value would be \(2^{256} \approx 10^{77}\).

But instead FLT_MIN was chosen as \(2^{-126} \approx 10^{-37}\), and FLT_MAX is \(\approx 2^{128} \approx 3.4 \times 10^{38}\). This gives a true range ratio of \(\approx 2^{254}\), which roughly lines up with our previous analysis that yielded \(2^{256}\) (reality is a bit smaller because two exponents are stolen for special cases: zero and NaN/infinity).

What about precision? We have said several times that the precision of a floating-point value is proportional to its magnitude. So instead of saying that the number is precise to the nearest integer (like we do for integer formats), we say that a floating-point value is precise to \(X\%\) of its value. Using our sample from before of an imaginary floating point format with a two-bit significand, we can see:

So at the low end of each power of two, the precision is always 25% of the value. And at the high end it looks more like:

So for a two-bit significand, the precision is always between 12.5% and 25% of the value. We can generalize this and say that for an \(n\)-bit significand, the precision is between \(1/2^{n}\) and \(1/2^{n+1}\) of the value (ie. between \(\frac{100}{2^{n}}\%\) and \(\frac{100}{2^{n+1}}\%\) of the value). But since \(1/2^{n}\) is the worst case, that's the figure we'll use, because it's the one you can count on.
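For float, whose significand has 23 bits, that works out to \(\frac{100}{2^{23}}\% \approx 0.000012\%\), which is where the "0.00001% (worst case)" figure in the comparison below comes from.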

We have finally explored enough to be able to fully compare/contrast fixed-point and integer values with floating point!

  • fixed point: range is a scalar (high - low), equal to \(2^{n} \times \text{scaling factor}\) for an \(n\)-bit value; precision is constant, equal to the scaling factor.
  • floating point: range is a ratio (high / low), roughly \(2^{2^{n}}\) for an \(n\)-bit exponent; precision is relative (X%), \(\frac{100}{2^{n}} \%\) of the value (worst case) for an \(n\)-bit significand.

If we apply these formulas to single-precision floating point vs. 32-bit unsigned integers, we get:

  • 32-bit unsigned integer: range of \(2^{32}\) (high - low); precision of exactly 1, ie. exact to the nearest integer.
  • single-precision floating point: range ratio of roughly \(2^{256}\) (high / low); precision of 0.00001% of the value (worst case).

Practical trade-offs between fixed/floating point

Let's step back for a second and contemplate what all this really means, for us humans here in real life as opposed to abstract-math-land.

Say you're representing lengths in kilometers. If you choose a 32-bit integer, the shortest length you can measure is 1 kilometer, and the longest length you can measure is 4,294,967,296 km (measured from the Sun, this falls between the orbits of Uranus and Neptune).

On the other hand, if you choose a single-precision float, the shortest length you can measure is \(10^{-26}\) nanometers — a length so small that a single atom's radius is \(10^{24}\) times greater. And the longest length you can measure is \(10^{25}\) light years.

The float's range is almost unimaginably wider than the int32. And what's more, the float is also more accurate until we reach the magic inflection point of \(2^{24}\) that we have mentioned several times in this article.

So if you choose int32 over float, you are giving up an unimaginable amount of range, and precision in the range \([0, 2^{24}]\), all to get better precision in the range \([2^{24}, 2^{32}]\). In other words, the int32's sole benefit is that it lets you talk about distances greater than 16 million km to kilometer precision. But how many instruments are even that accurate?

So why does anyone use fixed point or integer representations?

To turn things around, think about time_t. time_t is a type defined to represent the number of seconds since the epoch of 1970-01-01 00:00 UTC. It has traditionally been defined as a 32-bit signed integer (which means that it will overflow in the year 2038). Imagine that a 32-bit single-precision float had been chosen instead.

With a float time_t, there would be no overflow until the year 5395141535403007094485264579465 AD, long after the Sun has swallowed up the Earth as a Red Giant, and turned into a Black Dwarf. However! With this scheme the granularity of timekeeping would get worse and worse the farther we got from 1970. Unlike the int32 which gives second granularity all the way until 2038, with a float time_t we would already in 2014 be down to a precision of 128 seconds — far too coarse to be useful.
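(To see where 128 seconds comes from: 2014 is roughly \(1.4 \times 10^9\) seconds after the epoch, which falls between \(2^{30}\) and \(2^{31}\), and with a 23-bit significand the gap between adjacent representable floats in that range is \(2^{30-23} = 128\) seconds.)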

So clearly floating point and fixed point / integers all have a place. Integers are still ideal for when you are counting things, like iterations of a loop, or for situations like a time counter where you really do want a constant precision over its range. Integer results can also be more predictable since the precision doesn't vary based on magnitude. For example, integers will always hold the identity x + 1 - 1 == x, as long as x doesn't overflow. The same can't be said for floating point.
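A one-line Haskell check (a sketch; recall that GHC's Float is IEEE single precision) shows that identity failing right at the edge of exactness:
-- 16,777,216 + 1 is exactly halfway between two representable Floats and
-- rounds back down to 16,777,216, so subtracting 1 lands on 16,777,215.
main :: IO ()
main = print (let x = 16777216 :: Float in x + 1 - 1 == x)   -- False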


There is more still to cover, but this article has grown too long already. I hope this has helped build your intuition for how floating point numbers work. In the next article(s) in the series, we'll cover: the precise way in which the value is calculated from exponent and significand, fractional floating point numbers, and the subtleties of printing floating-point numbers.

Sunday, June 22, 2014

Beware of Lua finalizers in C modules!

If you write Lua C modules, there is a corner case surrounding finalizers (ie. __gc metamethods) that you should be aware of. I haven't run into a public and prominent warning about this, so I decided to write it up in this entry. I only discovered it myself through trial and error.

Lua provides the __gc metamethod for releasing resources associated with a userdata. It is commonly used in C modules to free any C-level objects or resources that were allocated by a type.
#include "lauxlib.h"
#include "mytype.h"

#if LUA_VERSION_NUM == 501
#define setfuncs(L, l) luaL_register(L, NULL, l)
#define luaL_setmetatable(L, name) \
  luaL_getmetatable(L, name); \
  lua_setmetatable(L, -2)
#elif LUA_VERSION_NUM == 502
#define setfuncs(L, l) luaL_setfuncs(L, l, 0)
#endif

static const char *MYTYPE = "mytype";

typedef struct {
  mytype_t *val;
} MyTypeWrapper;

static int newobj(lua_State *L) {
  MyTypeWrapper *obj = lua_newuserdata(L, sizeof(MyTypeWrapper));
  obj->val = mytype_new();
  luaL_setmetatable(L, MYTYPE);
  return 1;
}

static int call(lua_State *L) {
  MyTypeWrapper *obj = luaL_checkudata(L, 1, MYTYPE);
  return 0;
}

static int gc(lua_State *L) {
  MyTypeWrapper *obj = luaL_checkudata(L, 1, MYTYPE);
  // This is where we would release the wrapped obj->val.
  return 0;
}

static const struct luaL_Reg mm[] = {
  {"__gc", gc},
  {"__call", call},
  {NULL, NULL}
};

int luaopen_ext(lua_State *L) {
  luaL_newmetatable(L, MYTYPE);
  setfuncs(L, mm);
  lua_pushcfunction(L, &newobj);
  return 1;
}
It turns out this will work nearly all of the time. But there is one very unusual corner case that can reliably cause gc() to run before call()! This program can trigger this behavior on both Lua 5.1 and Lua 5.2 (and LuaJIT):
local ext = require "ext"

if _VERSION >= 'Lua 5.2' then
  function defer(fn)
    setmetatable({}, { __gc = fn })
  end
else
  function defer(fn)
    getmetatable(newproxy(true)).__gc = fn
  end
end

local y = {}
defer(function() y[1]() end)
y[1] = ext()
Basically, any Lua code that runs inside a __gc metamethod can get access to a userdata that has already been finalized! This can crash your C extension if you don't handle this case.

There are two main solutions to this problem:
  1. Clear the userdata's metatable inside the finalizer. This will ensure that the userdata fails any subsequent luaL_checkudata() check later. The downside is that the error message for trying to call a method on a finalized value will be very unhelpful (something like "attempt to index field '?' (a user data value)")
  2. Set a "finalized" flag inside the finalizer, and check this flag right after calling luaL_checkudata(). For example, you could set the pointer to NULL and check for this. This gives you the benefit of being able to return a custom error message like "attempted to call into dead object."
Here is an example of what the second solution might look like:
static MyTypeWrapper *mytype_check(lua_State *L, int narg) {
  MyTypeWrapper *obj = luaL_checkudata(L, narg, MYTYPE);
  if (!obj->val) luaL_error(L, "called into dead object");
  return obj;
}

static int call(lua_State *L) {
  MyTypeWrapper *obj = mytype_check(L, 1);
  return 0;
}

static int gc(lua_State *L) {
  MyTypeWrapper *obj = mytype_check(L, 1);
  // The critical step that will prevent us from allowing
  // a call into a dead object.
  obj->val = NULL;
  return 0;
}
For a bit more discussion about this, see this lua-l thread where I raised the issue.

The moral of the story is: anytime you are using __gc from a C module, you need to handle the case where the finalizer gets called before other methods. Otherwise a user could SEGV your module.

Saturday, February 22, 2014

On "The Future of JavaScript MVC Frameworks"

This entry is a sort of "part 2" to my previous entry React Demystified. Now that I understand React better, I want to take a closer look at the blog post that motivated this quest to begin with, The Future of JavaScript MVC Frameworks. That entry advocates a vision for what MVCs of the future will look like in JavaScript, and is accompanied by some strong benchmark numbers.

The article's overall argument is that the design/architecture of most JavaScript MVC libraries today makes them slow. While they can be optimized to improve performance, a much better approach (according to the article) is to change the design of the MVC library to something that is inherently faster: a design where the optimizations fall out "for free", a design that is fast by default. This is the claim that I want to take a deeper look at and evaluate in detail.

I want to go deeper because the form of the argument -- the idea that a fundamentally different design can render a lot of busy-work obsolete -- is one that resonates with me. But there are a lot of aspects of this that are all mashed together a bit in the article, so I want to break them apart and look at them one by one.

React/Om vs. Backbone.js benchmarks

The article begins with some benchmarks from TodoMVC, a very cool web site that creates the same little TODO application in a ton of different JavaScript MVC frameworks. This is not only a great body of example code, but it has benchmarks for doing performance comparisons between the frameworks.

The article starts by noting that the React/Om library (which the author also wrote) is 2-4x faster than Backbone in "Benchmark 1" of TodoMVC, and about 800x faster than Backbone for Benchmark 2. This is followed by a profiling graph that shows Backbone making a ton of short-running function calls, while React/Om makes many fewer calls that run longer.

From what I can tell, these benchmark numbers mainly illustrate the difference between doing direct DOM updates (as in the Backbone example) and using a library like React that batches DOM updates. I suspect that these benchmarks have little to do with Om and everything to do with React.

In particular, "Benchmark 2" (the one with the 800x speedup vs Backbone) is effectively a no-op on the DOM, so none of Om's custom shouldComponentUpdate() optimizations are coming into play here. The only thing that Om seems to be possibly contributing performance-wise to this benchmark is that it uses requestAnimationFrame() instead of the default React batching strategy, but this can be done easily with plain React too: here is a github project to do it. Unfortunately the React example for TodoMVC doesn't implement the benchmarks, but if it did I suspect the performance would be almost identical to the React/Om numbers.

The author addresses this point a moment later:
Of course you can use Backbone.js or your favorite JS MVC with React, and that's a great combo that delivers a lot of value. However, I'll go out on a limb and say I simply don't believe in event-oriented MVC systems - the flame graph above says it all. Decoupling the models from the views is only the first important step.
The argument here is that even if you use React with Backbone, Backbone is still based around change events (ie. a "push model"). Using Backbone together with React means calling React's forceUpdate() whenever a Backbone change handler fires. The author argues that the flame graph from before illustrates that making lots of little function calls whenever there is a change event is slow.

I'm not sure I buy this argument. The flame graph from before is significant because it shows lots of DOM updates. It illustrates that Backbone performs all of its DOM updates eagerly, and DOM updates are known to be one of the biggest application bottlenecks in JavaScript applications. React is fast because it aggressively batches and minimizes its DOM updates. Having Backbone fire a bunch of change handlers that call React's forceUpdate() function a lot isn't slow, because React will still wait a while to actually re-render everything and update the DOM.

Om and Immutable Data Structures

The next section describes a lot of points that are more Om-specific. Om is built on immutable data structures: a structure is never mutated once created, which means that the contents of the entire tree are captured in the identity of the root object. Or in simpler terms, if you are passed an object that is the same object (by identity) as one you saw earlier, you can be sure it hasn't changed in the meantime. This pattern has a lot of nice properties, but it also generates more garbage than mutating objects in place.

React/Om takes advantage of this by implementing React's shouldComponentUpdate() call to optimize away render() calls (and related diff-ing) when the objects are the same. When objects are the same they are known to have the same value, which is how we know that the optimization is safe. This is particularly important for React/Om, because unlike React with mutability (ie. setState()) React/Om completely refreshes the tree from the root every time. So React/Om's shouldComponentUpdate() will return true for the root and for any paths to nodes which have changed, but can prune away diffs for any subtrees that have not changed.

I think it's worthwhile to note that React/Om does not necessarily do less work than React with setState(). If you called setState() on an interior node of the virtual DOM, you would only trigger a re-render of that one component. But with React/Om, you will trigger a re-render of all components between the root and the changed component. The final DOM update will be the same in both cases, but React will have had to do more work to discover that. It's only a small amount of extra work (and in other cases the setState() approach will require more work, possibly a lot more if you're careless), but it was interesting and unexpected to find that the immutable approach isn't a strict improvement over the mutable one.

The article goes on to describe several other benefits of functional-style immutable data structures. The biggest one, as I see it, is "undo for free" -- an undeniably powerful property of immutable data structures.


The article's broad claims that current JavaScript MVCs are inherently slow made a big impression on me. But when I examined the article's substance in more depth, I was not fully convinced. To me, most of the claimed improvements boil down to this: updating the DOM is slow, so use a framework that batches updates to it. React does this, and I have heard that Angular does batching to some extent too. After looking at all this, I'm not convinced that immutable data structures are inherently better-performing in the browser than mutable ones. And it is hard to imagine convenient two-way data binding being built on top of immutable data structures.

That said, there is a lot to like about immutable data structures, particularly their snapshot/undo capabilities.