Archive for 2013

things i would want to know about erlang

August 10th, 2013

Earlier this year I spent a few weeks playing with Erlang. I wanted to make something out of it, but despite an encouraging start I found it too frustrating to use.

I got excited about Erlang because a lot of interesting things have been built in it: CouchDB, RabbitMQ, Riak and so forth. Besides that, Erlang is a dynamic language, and I generally find those quite nice to use.

I won't recap Erlang's selling points here - I assume you've heard them. These are my discoveries.

  • The language feels very static. Dynamic type checking and code reloading are about the only things that seem dynamic about it. There is no introspection. The build system is make. You can't change the structure of a record once the program is running. There is no metaprogramming. And so on and so forth. It really isn't a dynamic language at all in the sense of Python, Ruby or JavaScript.
  • ...and some of these static restrictions are plain weird. If you want to call a function in the same module you don't have to do anything special, but if you're going to export the function you need to declare it in an export list as func/2 (where 2 is its arity), and keeping the export declaration consistent with the function definition is yet another source of bugs. The question is: what's the point? The arity is a kind of statically enforced type, but you can still call the function with any arguments you want - you can even give all your functions a single parameter, call each one with a tuple, and Erlang has nothing to say about what that tuple contains.
  • The data abstraction is very weak. There is no object system - all you have is records, and they are defined at compile time and provided to client code as header files, like in C. This is very rigid.
    If you don't like records you can just use tuples. Tuples don't have to be declared in a header file and compiled in; you can just start using them freely. But now you have a container with positional arguments, and if you want to change the structure of the tuple you have to update all your code, because any code using this tuple has to pattern match against all of its fields. That's even worse.
  • The syntax is Prolog, and that's not a good thing. I didn't even realize how much of it is Prolog until I read a Prolog tutorial a few weeks ago. It may be alright for Prolog, but it has definitely not been extended in an elegant way by Erlang. The tuple syntax is probably the worst part of it, but the control flow structures too are easy to get wrong, with so many different line termination characters in use.
  • Strings are just lists, and there's no way to detect whether a list merely contains integers or whether it's supposed to be a string. If you wanted to be very generous you could call this sloppy.
  • Single assignment may be a nice idea, but it makes code ugly. At first I was naming my variables things like Something = func(OriginalSomething), but then I realized I had to plan very carefully what I would name each variable to always have a descriptive name. So I abandoned that and started using Houses2 = func(Houses) and so on. That is a bit more flexible at least, but now it's like maintaining a list in sorted order: if I discover that I need to add a step between Houses3 and Houses4 I can either introduce Houses7 or I have to update all the indexes. This plain sucks. (And no: Haskell's solution of adding apostrophes is just a different numbering syntax, it doesn't make it any better.)
  • Dynamic typing + pattern matching can make a mess. I was looking at ibrowse, which is a very feature-rich and mature library. But some of its function definitions are 3 pages long. Instead of splitting it up into several helper functions, it simply defines the same function with many different clauses, each of which has a different signature (obviously), but also a different arity, sometimes expecting different types for those positional arguments.
    This is of no concern to the caller, because I just call the top level clause of this function and don't see what happens behind the scenes. But what you actually end up with in the background is a call graph of essentially different functions (but with the same name), heavily recursive, that is hell to try to make sense of. And because there is no static type information, well, good luck.
  • OTP is a straitjacket. OTP kind of assumes that you will be using a supervisor, that you will be using gen_server and that you will be using OTP code patterns, OTP packaging and an OTP directory layout. I found it quite hard to write a program in a single module just to try out an idea first; I would have migrated it to OTP style later, but it was hard to get the benefits of OTP without going all the way in. What is the point of a supervisor that supervises a single process without restart behavior? It's pure overhead. Yet in OTP everything is supervised, whether or not that's useful. Also, the lack of a proper data abstraction is nowhere more painful than in OTP, where you're constantly passing nested tuples around and trying to figure out which part is used by the function you're calling directly, and which part is propagated as part of the message payload. If you happen to know the OTP API like the back of your hand then this is probably a non-issue, but you can say that about any awful API.
  • The tooling is low quality. It makes you think no one has made a real effort to improve it in a good 10-15 years. Take the REPL. There is one, but it sucks. It doesn't support readline. It has tab completion, but it doesn't complete all function names. You think you can explore the libraries this way, but some modules don't show up at all in the completer, and some of their functions are missing. The same goes for all the process introspection tools. They exist, but they seem to have been programmed against Motif or something and they're terrible to use. And stack traces are formatted in a weird way that makes it hard to see exactly what the failing thing is. And if you use sasl, the actual output you care about is drowned out in a bunch of other output that you don't care about.

So yes, several very interesting ideas in Erlang, but a poor development experience. Unfortunately, most of these problems are deep in the language and the culture of Erlang and not likely to ever change. Someone could write a better REPL, but it would take a lot of community work to improve records or the syntax. There are other problems in Erlang that are probably more tractable, like a certain amount of duplication of functionality in the standard library (several competing dictionary implementations), but not even that seems to be worked on.

UPDATE: Erik Ridderby has written a response that includes a fascinating piece of history, painting a nice picture of where Erlang came from and the sort of development environments it was used in.

python bytecode: modules

August 8th, 2013

If you're new here I recommend starting with part 1.

Last time we looked at statements inside function bodies. Exciting stuff, but a bit puzzling. I bet you were asking yourself: How did those functions get into that module in the first place?

Statements at module level

For some reason dis.dis does not produce any output when given a module object, beyond disassembling the functions the module contains. For a more insightful view we will use python -m dis module.py.

Let's try a really simple one first.

a = 7
b = a + a

# disassembly
  1           0 LOAD_CONST               0 (7)
              3 STORE_NAME               0 (a)

  2           6 LOAD_NAME                0 (a)
              9 LOAD_NAME                0 (a)
             12 BINARY_ADD          
             13 STORE_NAME               1 (b)
             16 LOAD_CONST               1 (None)
             19 RETURN_VALUE

The first thing to notice is the use of LOAD_NAME and STORE_NAME. We're at module level here, and LOAD_FAST/STORE_FAST only apply to local variables in a function body. But just like a function, the module's code stores the names it contains in a tuple which can be indexed into, as shown here.

It's a bit more obscure to get at that storage, because a module does not have a func_code attribute attached to it like a function does. But we can compile some module-level code ourselves and see what the resulting code object contains:

code = compile("a = 7; b = a + a", "module.py", "exec")
print code.co_names
# ('a', 'b')

And there's our storage. The code object has various other co_* attributes, most of which we won't go into right now.
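
One of them is worth a quick peek, though, because we can recognize its contents straight from the disassembly above:

print code.co_consts
# (7, None)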

Also worth noting: modules return None like functions do, which seems a bit redundant given that there isn't a way to capture that return value: value = import os is not valid syntax. And module imports feel like statements, not like expressions.

Functions

Think about this first: when you import a module which contains two functions, what do you expect its namespace to contain? Those two names bound to their function objects, of course! See, that was not a trick question!

def plus(a, b):
    return a + b

def main():
    print plus(2, 3)

# disassembly
  1           0 LOAD_CONST               0 (<code object plus at 0xb73f4920, file "funcs.py", line 1>)
              3 MAKE_FUNCTION            0
              6 STORE_NAME               0 (plus)

  4           9 LOAD_CONST               1 (<code object main at 0xb73f47b8, file "funcs.py", line 4>)
             12 MAKE_FUNCTION            0
             15 STORE_NAME               1 (main)
             18 LOAD_CONST               2 (None)
             21 RETURN_VALUE

And so it is. Python will not give any special treatment to functions over other things like integers. A function definition (in source code) becomes a function object, and its body becomes a code object attached to that function object.

We see in this output that the code object is available. It's loaded onto the stack with LOAD_CONST just like an integer would be. MAKE_FUNCTION will wrap that in a function object. And STORE_NAME simply binds the function object to the name we gave the function.
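
We can see that attachment from the Python side too; in Python 2 the code object hangs off the function as func_code:

def plus(a, b):
    return a + b

print plus.func_code
# <code object plus at 0x..., file "funcs.py", line 1>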

There is so little going on here that it's almost eerie. But what if the function body makes no sense?? What if it uses names that are not defined?!? Recall that Python is a dynamic language, and that dynamism is expressed by the fact that we don't care about the function body until someone calls the function! It could literally be anything (as long as it can be compiled successfully to bytecode).

It's enough that the function object knows what arguments it expects, and that it has a compiled function body ready to execute. That's all the preparation we need for that function call. No, really!
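
It's easy to convince yourself of this. Here's a function whose body can't possibly work (frobnicate is a name I made up; it's defined nowhere):

def broken():
    return frobnicate(42)  # made-up name, defined nowhere

# Defining the function is perfectly fine. Only calling it blows up:
broken()
# NameError: global name 'frobnicate' is not defined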

Classes

Classes are much like functions, just a bit more complicated.

class Person(object):
    def how_old(self):
        return 5

# disassembly
  1           0 LOAD_CONST               0 ('Person')
              3 LOAD_NAME                0 (object)
              6 BUILD_TUPLE              1
              9 LOAD_CONST               1 (<code object Person at 0xb74987b8, file "funcs.py", line 1>)
             12 MAKE_FUNCTION            0
             15 CALL_FUNCTION            0
             18 BUILD_CLASS         
             19 STORE_NAME               1 (Person)
             22 LOAD_CONST               2 (None)
             25 RETURN_VALUE

Let's start in the middle this time, at location 9. The code object for the entire class has been compiled, just like the function we saw before. At module level we have no visibility into this class, all we have is this single code object.

With MAKE_FUNCTION we wrap it in a function object and then call that function using CALL_FUNCTION. This will return something, but we can't tell from the bytecode what kind of object that is. Not to worry, we'll figure this out somehow.

What we know is that we have some object at the top of the stack. Just below that we have a tuple of the base classes for our new class. And below that we have the name we want to give to the class. With all those lined up, we call BUILD_CLASS.

If we peek at the documentation we can find out that BUILD_CLASS takes three arguments, the first being a dictionary of methods. So: methods_dict, bases, name. This looks pretty familiar - it's the same inputs needed for the __new__ method in a metaclass! At this point it would not be outlandish to suspect that the __new__ method is being called behind the scenes when this opcode is being executed.
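
In fact we can replay that stack sequence from regular Python with the three-argument form of type, which takes exactly those inputs. A minimal sketch (not literally what CPython calls, but the same shape):

def how_old(self):
    return 5

# name, bases, methods dict - the same three things BUILD_CLASS consumes
Person = type('Person', (object,), {'how_old': how_old})
print Person().how_old()
# 5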

python bytecode: single scope

August 6th, 2013

Python bytecode, meet the language behind the language! It's a language like any other, with its own syntax, semantics, design patterns and pitfalls. It's even better than Python, because it's executed directly, not compiled to some intermediate easy-enough-to-interpret language. So if those bytecode compilation times are bugging you, you know what to do.

As you know, CPython is a stack-based VM. Okay, what does that mean? It means that if you examine a simple Python expression it will generally follow this pattern:

  1. Push (load) the operand on the stack. This can be a constant or a variable.
  2. Perform the operation on the operand.
    This will consume the operand and replace it on the stack with the result of the operation.
  3. Store the value in a variable (if there is an assignment) or discard it (if there isn't).

If the operation were binary you would push two operands on the stack instead, and so on. And each of the steps is a bytecode instruction, so one "atomic" looking Python expression becomes a handful of bytecode instructions. The stack is like a desk - you put something down that you're going to use, perform the work, and clean up the desk afterwards (just like we all do in real life, amirite?).

An assignment statement

Not convinced? import dis and follow me, then.

def main(a):  # we will disassemble the body of this function
    a = a + 7

import dis
dis.dis(main)

Disassembly of main:
  2           0 LOAD_FAST                0 (a)
              3 LOAD_CONST               1 (7)
              6 BINARY_ADD          
              7 STORE_FAST               0 (a)
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE

Woah, what's this? The 2 in the left column is the source line number these instructions were compiled from: the def is on line 1 and the assignment is on line 2. dis prints the line number next to the first instruction generated from that line.

Let's dig in. We have 6 instructions just to do a = a + 7, which seems a little much.

The first instruction is LOAD_FAST 0. LOAD_FAST operates on local variables, so it will load the zeroth local variable from somewhere onto the stack. You can see dis printing the names of variables and the values of constants in parentheses in the right column. Here, that name is a.

The next instruction is LOAD_CONST 1, whose value is 7. We are loading the first constant onto the stack. (Which seems odd, as I only spot one constant in this function, what's the zeroth constant then? Patience, Robinson, we'll get there soon.)

Next, we have BINARY_ADD, which does not take an argument. Conveniently, it finds two operands ready and waiting on the stack, however. It pops them both off, performs the operation, and pushes the result onto the stack again.

Finally, we have STORE_FAST 0, which stores whatever is on the stack into the zeroth local variable, whose name is a. So we've accounted for our Python code and everything looks rosy.
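
Both storage areas are visible on the function's code object (func_code in Python 2), so we can double-check our reading of the disassembly:

print main.func_code.co_varnames
# ('a',)
print main.func_code.co_consts
# (None, 7)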

But the bytecode compiler has a dilemma. This is a function it's compiling, and functions are supposed to have return values, even if this one doesn't. The caller of the function will be pissed if he doesn't get a return value.

A cunning plan. Why not just return nothing? Push the value None on the stack (by golly, there's our zeroth constant!), then return it.

A branch

Okay, loading variables and literals, performing operations. *shrug* What if you had a branch, tough guy, would you indent your bytecode, huh?

def main(a):
    if condition:
        a = 7
    else:
        a = 9

Disassembly of main:
  2           0 LOAD_GLOBAL              0 (condition)
              3 POP_JUMP_IF_FALSE       15

  3           6 LOAD_CONST               1 (7)
              9 STORE_FAST               0 (a)
             12 JUMP_FORWARD             6 (to 21)

  5     >>   15 LOAD_CONST               2 (9)
             18 STORE_FAST               0 (a)
        >>   21 LOAD_CONST               0 (None)
             24 RETURN_VALUE

Well, no. And hang tight, because this is where Python bytecode becomes a little assembly-ish.

We load the condition variable and perform the check. If true we just continue to the next instruction, but if false we jump to location 15, which is the body of the else block.

a) if we're executing the if case, we perform the assignment, then jump forward by 6 locations to location 21, which is the end of the function (and return None).

b) if we're executing the else case, we perform the assignment and just continue on, returning None.

Object lookups

So far we have looked at some very basic operations, and the opcodes are pretty similar to what you would find when compiling the same code in C and looking at the assembly. (Well, except that we are still in the world of dynamic typing and BINARY_ADD has _no idea_ what kind of things it will be trying to add until it actually tries to perform the operation and only then! discovers the types of the operands.)

Let's try something more high level - like operations on objects. Two of the most common things we do in Python are: indexing into objects (for lists and dicts), and looking up attributes on them.

def main(obj):
    obj[x]
    obj.x

Disassembly of main:
  2           0 LOAD_FAST                0 (obj)
              3 LOAD_GLOBAL              0 (x)
              6 BINARY_SUBSCR       
              7 POP_TOP             

  3           8 LOAD_FAST                0 (obj)
             11 LOAD_ATTR                0 (x)
             14 POP_TOP             
             15 LOAD_CONST               0 (None)
             18 RETURN_VALUE

Why, that seems surprisingly concise! The index operation aside (a single opcode), attribute lookup is a complicated piece of logic that involves looking at obj.__dict__, looking at the class dict (and base class dicts), calling __getattribute__ and possibly __getattr__. All of that in a single opcode? That's value for money in my book.
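
If you want to watch some of that machinery fire, hook into the fallback lookup; Curious is a class I made up for illustration, and the whole dance below is triggered by that one LOAD_ATTR:

class Curious(object):
    def __getattr__(self, name):
        # only reached after the instance and class dicts come up empty
        print "falling back for %r" % name
        return 42

obj = Curious()
print obj.x
# falling back for 'x'
# 42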

At this point it should dawn on you that bytecode compilation does not actually compile the program very much. It's basically rewriting the same thing in bytecode, and that's why the compilation is so fast.

Approaches to interpretation

So why bother? Let's think about our options.

  1. Parse and interpret Python source code on-the-fly.
    Impractical, because the code you need to execute determines how many tokens you need to parse: an assignment may only need the rest of the tokens on the line, but a class definition needs the whole class body. No one actually does this.
  2. Parse the source code to an Abstract Syntax Tree (AST) and write the interpreter as an AST walker.
    Possible, but it makes the interpreter a bit complicated. It has to remain in sync with the structure of the AST at all times. Ruby MRI 1.8 uses this approach.
  3. Parse the source code to an AST, compile that to bytecode, interpret the bytecode.
    Most interpreted languages use this approach. It decouples the interpreter from the AST, so instead of traversing a tree you're executing a linear stream of instructions.

python timings

April 14th, 2013

On the one hand, we measure database query latency in milliseconds. On the other hand, a read from L1 cache costs less than a nanosecond. That got me thinking that there is a pretty big spectrum in between the two. I wonder how much time typical language constructs cost. Just as a reminder, here is the typical list of important timings:

          0.5   ns        read from L1 cache
            1   ns        execute cpu instruction
            7   ns        read from L2 cache
          100   ns        read from memory
       20,000   ns        transmit over local network
    8,000,000   ns        read from disk
  150,000,000   ns        transmit over the internet Europe -> US
1,000,000,000   ns        one second

There happens to be a really easy way to do a quick and dirty measurement using ipython, with its built-in timing feature. It takes an expression and executes it a number of times, choosing the repetition count based on how long the expression takes, with an upper bound in seconds. So for really trivial expressions you get a large number of repetitions:

In [66]: %timeit 1+2
10000000 loops, best of 3: 20.7 ns per loop

The catch is that timeit expects an expression, so the simplest way to get around that is to make every test a function call, and in there we can run arbitrary expressions and statements alike. The baseline will then be a function with an empty body.
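
If you'd rather reproduce this outside of ipython, here is a minimal sketch of the same idea using the standard timeit module and subtracting the empty-function baseline; the helper cost_ns is my own name, only timeit.repeat is a real API:

import timeit

def baseline():
    pass

def assignment():
    a = 1

def cost_ns(func, number=1000000):
    # best-of-3 total runtime for `number` calls, scaled to ns per call
    return min(timeit.repeat(func, number=number)) / number * 1e9

print "%.0f ns" % (cost_ns(assignment) - cost_ns(baseline))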

Here are the results from my cpython 2.7.3:

     5 ns        assignment
     4 ns        integer_addition
    10 ns        string_concat
     5 ns        string_interpolate
    35 ns        dict_lookup
    77 ns        list_comprehension

    22 ns        branch
 1,095 ns        try_catch

86,895 ns        create_class
    97 ns        instantiate_class
   135 ns        call_method
   105 ns        call_function

   217 ns        get_current_time
 1,745 ns        get_current_date

Clearly this leaves a lot to be desired from a methodological standpoint. The reference list of latencies is not scaled to my laptop in particular, plus we are adding the overhead of a function call to every measurement and then trying to subtract it out (the table above shows the net numbers; the comments in the code below show the raw measurements), but at least that overhead is constant across all measurements. At best these numbers are a rough indication of how much things cost, but that's good enough for our purpose.

Finally, here is the code:

def call_function():  # 105ns
    pass

def create_class():  # 87us
    class C(object):
        pass

class D(object):
    def meth(self):
        pass

def instantiate_class():  # 202ns
    D()

d = D()
def call_method():  # 240ns
    d.meth()

def assignment():  # 110ns
    a = 1

def branch():  # 127ns
    if True:
        pass

def try_catch():  # 1.2us
    try:
        raise Exception
    except:
        pass


def integer_addition():  # 109ns
    1 + 2

def string_concat():  # 115ns
    "a" + "b"

def string_interpolate():  # 110ns
    "a%s" % "b"

d = {'a': 1}
def dict_lookup():  # 140ns
    d['a']

l = []
def list_comprehension():  # 182ns
    [x for x in l]

import time
def get_current_time():  # 322ns
    time.time()

from datetime import datetime
def get_current_date():  # 1.85us
    datetime.now()

l'étranger

February 24th, 2013

I had quite a strange experience. I was on my way to the supermarket, barely out of my building, when a man who had stopped his car in the street called out to me. "Excuse me, do you speak Italian?" There was no one else in the street. Well, I was too curious to pretend I didn't. I said yes, and I think he was as astonished by my answer as I was by his question. He told me he had just spent the whole week working at a trade fair, a few streets from my house as it happens. He represented the company Emporio Armani and was there to present the brand's new watches. Now he had to go back to Italy, but not necessarily with his merchandise. He told me they were samples and could not be sold through normal channels. So, since I was so nice and spoke Italian so well without even being Italian (I told him that), he would be very happy to give me his watches. He showed me several of them, and they weren't bad, actually. He said, "this one is worth 700 euros, that one 400 euros". He told me he wanted to buy some gifts at the airport, and that if I offered him 400 for one he would give me all of them, that is six or even twelve, since there were two full bags of them.

I could have told him that 400 euros for a watch is madness and that at best I could offer him 4 euros, but I figured he would feel insulted, and since he was very friendly I said nothing. Instead I said that I hadn't worn a watch in more than 10 years, so it didn't interest me at all. With that, we said our goodbyes and he drove off in his car with French plates.

Later on, I thought it over. Was it a scam? If those watches were really worth what he told me, all that merchandise in his car must have been worth thousands of euros. So why give it away to a stranger? Why not sell it himself on the internet, if there was no other way? It makes no sense. In circumstances like these I always say no to whatever is on offer, because I can never do the math on the spot.