Archive for the ‘code’ Category

two weeks of rust

January 10th, 2016

Disclaimer: I'm digging Rust. I lost my hunger for programming from doing too many sad commercial projects. And now it's back. You rock, Rust!

I spent about two weeks over the Christmas/New Year break hacking on emcache, a memcached clone in Rust. Why a memcached clone? Because it's a simple protocol that I understand and is not too much work to implement. It turns out I was in for a really fun time.

UPSIDES

The build system and the package manager are among the best parts of Rust. How often do you hear that about a language? In Python I try to avoid even having dependencies if I can, and only use the standard library. I don't want my users to have to deal with virtualenv and pip if they don't have to (especially if they're not pythonistas). In Rust you "cargo build". One step, and all your dependencies are fetched and built, and your application along with them. No special cases, no build scripts, no surprising behavior *whatsoever*. That's it. You "cargo test". And you "cargo build --release", which makes your program 2x faster (did I mention that llvm is pretty cool?).
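
To make that concrete, here is roughly what a project manifest looks like (the dependency is illustrative, not necessarily what emcache actually uses); everything below it gets fetched and built by a single "cargo build":

```toml
[package]
name = "emcache"
version = "0.1.0"

[dependencies]
# an external crate: cargo downloads and compiles it, plus its own dependencies
rand = "0.3"
```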

Rust *feels* ergonomic. That's the best word I can think of. With every other statically compiled language I've ever used, too much of my focus was constantly diverted from what I was trying to accomplish to the annoying little busywork the compiler kept bugging me about. For me Rust is the first statically typed language I enjoy using. Indeed, ergonomics is a feature in Rust - RFCs talk about it a lot. And that's important, since no matter how cool your ideas for language features are, you want to make sure people can use them without having to jump through a lot of hoops.

Rust aims to be concise. Function is fn, public is pub, vector is vec, you can figure it out. You can never win a discussion about conciseness, because something will always be too long for someone while being too short for someone else. Do you want u64 or do you want WholeNumberWithoutPlusOrMinusSignThatFitsIn64Bits? The point is that Rust is concise and typeable: it doesn't require so much code that you need an IDE to help you type some of it.

Furthermore, it feels very composable. As in: the things you make seem to fit together well. That's a rare quality in languages, and almost never happens to me on a first project in a new language. The design of emcache is actually nicely decoupled, and it just got that way on the first try. All of the components are fully unit tested, even the transport that reads/writes bytes to/from a socket. All I had to do for that was implement a TestStream that implements the traits Read and Write (basically one method each) and swap it in for a TcpStream. How come? Because the components provided by the stdlib *do* compose that well.
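
A minimal sketch of what such a test double can look like (my reconstruction, not emcache's actual code): an in-memory struct that implements Read and Write, so anything written against those traits accepts it just as happily as a TcpStream.

```rust
use std::io::{Read, Write, Result};

// In-memory stand-in for a TcpStream, usable by anything generic over Read + Write.
struct TestStream {
    incoming: Vec<u8>, // bytes the transport will "receive"
    outgoing: Vec<u8>, // bytes the transport has "sent"
    pos: usize,
}

impl Read for TestStream {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
        let remaining = &self.incoming[self.pos..];
        let n = remaining.len().min(buf.len());
        buf[..n].copy_from_slice(&remaining[..n]);
        self.pos += n;
        Ok(n)
    }
}

impl Write for TestStream {
    fn write(&mut self, buf: &[u8]) -> Result<usize> {
        self.outgoing.extend_from_slice(buf);
        Ok(buf.len())
    }

    fn flush(&mut self) -> Result<()> {
        Ok(())
    }
}
```

Presumably the real transport is generic over some T that is Read + Write, which is what makes swapping in the test double trivial.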

But there is no object system! Well, structs and impls basically give you something close enough that you can do OO modeling anyway. It turns out you can even do a certain amount of dynamic dispatch with trait objects, but that's something I read up on after the fact. The one thing that is incredibly strict in Rust, though, is ownership, so when you design your objects (let's just call them that, I don't know what else to call them) you need to decide right away whether an object that stores another object will own or borrow that object. If you borrow you need to use lifetimes and it gets a bit complicated.
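
The decision looks roughly like this (type names invented for illustration): owning means the field is stored by value, borrowing means a reference, and with the reference comes a lifetime parameter.

```rust
struct Storage {
    bytes_used: u64,
}

// Owning: Cache takes the Storage by value and drops it when it is dropped.
struct Cache {
    storage: Storage,
}

// Borrowing: Transport only references a Storage that someone else owns,
// so a lifetime parameter appears and the borrow rules apply.
struct Transport<'a> {
    storage: &'a mut Storage,
}

fn main() {
    let mut s = Storage { bytes_used: 0 };
    let _owned = Cache { storage: Storage { bytes_used: 0 } };
    let _borrowed = Transport { storage: &mut s };
}
```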

Parallelism in emcache is achieved using threads and channels. Think one very fast storage and multiple slow transports. Channels are async, which is exactly what I want in this scenario. Like in Scala, when you send a value over a channel you don't actually "send" anything: it's one big shared memory space, and you just transfer ownership of an immutable value in memory while invalidating the pointer on the "sending" side (which probably can be optimized away completely). In practice, channels require a little typedefing overhead so you can keep things clear, especially when you're sending channels over channels. Otherwise I tend to get lost in what goes where. (If you've done Erlang/OTP you know that whole dance of a tuple in a tuple in a tuple, like that Inception movie.) But this case stands out as atypical in a language where boilerplate is rarely needed.
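
A rough sketch of the shape this takes (a guess at the pattern, not emcache's actual types): type aliases for the message types, and a reply channel travelling inside the command itself.

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Type aliases keep the plumbing readable, especially once channels start
// travelling over channels (every command carries its own reply channel).
type Cmd = String;   // stand-in for a parsed memcached command
type Reply = String; // stand-in for a response
type CmdSender = Sender<(Cmd, Sender<Reply>)>;

// One fast storage thread; it hands back a Sender the transports can clone.
fn spawn_storage() -> CmdSender {
    let (cmd_tx, cmd_rx) = channel::<(Cmd, Sender<Reply>)>();
    thread::spawn(move || {
        for (cmd, reply_tx) in cmd_rx {
            let _ = reply_tx.send(format!("handled: {}", cmd));
        }
    });
    cmd_tx
}

fn main() {
    let storage = spawn_storage();

    // A transport sends a command plus the channel it wants the answer on.
    let (reply_tx, reply_rx) = channel::<Reply>();
    storage.send(("get foo".to_string(), reply_tx)).unwrap();
    println!("{}", reply_rx.recv().unwrap());
}
```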

Macros. I bet you expected these to be on the list. To be honest, I don't have strong feelings about Rust's macros. I don't think of them as a unit of design (Rust is not a lisp), that's what traits are for. Macros are more like an escape hatch for unpleasant situations. They are powerful and mostly nice, but they have some weird effects too in terms of module/crate visibility and how they make compiler error messages look (slightly more confusing I find).
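
For a flavour of that escape-hatch use (a made-up example, nothing from emcache): a macro that stamps out repetitive assertion boilerplate, rather than acting as a unit of design.

```rust
// A hypothetical helper: parse_len stands in for whatever is being tested.
fn parse_len(s: &str) -> usize {
    s.trim().len()
}

// The macro exists purely to cut repetition.
macro_rules! assert_parses {
    ($input:expr, $expected:expr) => {
        assert_eq!(parse_len($input), $expected);
    };
}

fn main() {
    assert_parses!(" get foo ", 7);
    assert_parses!("set bar 0", 9);
}
```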

The learning resources have become very good. The Rust book is very well written, but I found it a tough read at first. Start with Rust by Example; it's great. Then do some hacking and come back to "the book" - it makes total sense to me now.

No segfaults, no uninitialized memory, no coercion bugs, no data races, no null pointers, no header files, no makefiles, no autoconf, no cmake, no gdb. What if all the problems of c/c++ were fixed with one swing of a magic wand? The future is here, people.

Finally, Rust *feels* productive. In every other statically compiled language I've used, I feel I would go way faster in Python. In Rust I'm not so sure. It's concise, it's typeable and it's composable. It doesn't force me to make irrelevant nitpicky decisions that I will later have to spend tons of time refactoring to recover from. And productivity is a sure way to happiness.

DOWNSIDES

The standard library is rather small, and you will need to go elsewhere even for certain pretty simple things like random numbers or a buffered stream. The good news is that Rust's crates ecosystem has already grown quite large and there seem to be crates for many of these things, some even being incubated to join the standard library later on.

While trying to be concise, Rust is still a bit wordy and syntax-heavy, with all the pointer types and explicit casts that you see in typical code. So it's not *that easy* to read, but I feel that once you grasp the concepts it does begin to feel very logical. I sure wouldn't mind my tests looking a bit simpler - maybe it's just my lack of Rust fu still.

The borrow checker is tough, everyone's saying this. I keep running into cases where I need to load a value, do a check on it, and then decide whether to modify it. The problem is that the load requires a borrow, and then another borrow is used in the check, which is enough to break the rules. So far I haven't come across a case I absolutely couldn't work around with scopes and shuffling code around, but I wouldn't call it fun - nor is the resulting code very nice.
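
A hedged sketch of that load-check-modify dance with a map (my example, not code from emcache). Holding the borrow from the lookup while inserting is what the 2016-era borrow checker objects to, so the workaround is to copy the value out and let the first borrow end:

```rust
use std::collections::HashMap;

fn bump(counts: &mut HashMap<String, u64>, key: &str) {
    // Copy the current value out so the borrow from `get` ends immediately...
    let current = counts.get(key).cloned().unwrap_or(0);
    // ...and only then mutate the map.
    counts.insert(key.to_string(), current + 1);
}

fn main() {
    let mut counts = HashMap::new();
    bump(&mut counts, "foo");
    bump(&mut counts, "foo");
    assert_eq!(counts["foo"], 2);
}
```

For maps specifically the entry() API sidesteps this entirely, and non-lexical lifetimes have since made the compiler accept more of these shapes, but the same dance still comes up with other containers.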

Closures are difficult. In your run-of-the-mill language I would say "put these lines in a closure, I'll run them later and don't worry your pretty little head about it". Not so in Rust, because of move semantics and borrowing. I was trying to solve this problem: how do I wrap (in a minimally intrusive way) an arbitrary set of statements so that I can time their execution (in Python this would be a context manager)? This would be code that might mutate self, refer to local vars (which could be used again after the closure), return a value and so on. It appears tricky to solve in the general case; I still haven't cracked it.
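
The straightforward cases do work - here is a minimal sketch (names and signature are my guess at the idea, not a solution to the general problem): a function that takes a closure, times it, and hands back its return value. The trouble starts when the closure also needs to mutably borrow self alongside the surrounding locals.

```rust
use std::time::Instant;

// Time an arbitrary block of code passed in as a closure.
fn timed<T, F: FnOnce() -> T>(label: &str, f: F) -> T {
    let start = Instant::now();
    let result = f();
    println!("{} took {:?}", label, start.elapsed());
    result
}

fn main() {
    let mut hits = 0u64;
    let value = timed("lookup", || {
        hits += 1; // the closure mutably borrows `hits`...
        hits * 2
    });
    println!("value = {}, hits = {}", value, hits); // ...and we can still use it afterwards
}
```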

*mut T is tricky. I was trying to build my own LRU map (before I knew there was a crate for it), and given Rust's lifetime rules you can't do circular references in normal safe Rust. One thing *has to* outlive another in Rust's lifetime model. So I started hacking together a linked list using *mut T (as you would) and I realized things weren't pointing to where I thought they were at all. I still don't know what happened.

The builder pattern. This is an ugly corner of Rust. Yeah, I get that things like varargs and keyword arguments have a runtime overhead. But the builder pattern - which is to say writing a completely separate struct just for the sake of constructing another struct - is pure boilerplate; it's so un-Rust. Maybe we can derive these someday?
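
For the record, the boilerplate in question looks something like this (field names invented for illustration):

```rust
#[derive(Debug)]
struct CacheConfig {
    capacity: usize,
    item_lifetime_secs: Option<u64>,
}

// A second struct whose only job is to construct the first one.
struct CacheConfigBuilder {
    capacity: usize,
    item_lifetime_secs: Option<u64>,
}

impl CacheConfigBuilder {
    fn new() -> Self {
        CacheConfigBuilder { capacity: 1024, item_lifetime_secs: None }
    }
    fn capacity(mut self, capacity: usize) -> Self {
        self.capacity = capacity;
        self
    }
    fn item_lifetime_secs(mut self, secs: u64) -> Self {
        self.item_lifetime_secs = Some(secs);
        self
    }
    fn build(self) -> CacheConfig {
        CacheConfig {
            capacity: self.capacity,
            item_lifetime_secs: self.item_lifetime_secs,
        }
    }
}

fn main() {
    let config = CacheConfigBuilder::new().capacity(4096).item_lifetime_secs(60).build();
    println!("{:?}", config);
}
```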

Code coverage. There will probably be a native solution for this at some point. For now people use a workaround with kcov, which just didn't work at all on my code. Maybe it's because I'm on nightly? Fixed!

---

So there you have it. Rust is a fun language to use, and it feels like an incredibly well designed language. Language design is really hard, and sometimes you succeed.

a little help with bitwise operators

August 3rd, 2015

Binary numbers are easy, right? You just do stuff with bits.

But invariably, whenever I code C, I can never remember how to actually set a bit or test a bit - I keep getting *and* and *or* confused.

So I made a cheat sheet I can look up any time. These are the key ones:
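
Sketched here in Rust rather than C (the operators are the same; the only difference below is that Rust writes bitwise NOT as ! where C uses ~):

```rust
fn main() {
    let x: u8 = 0b0000_0100;

    let set = x | (1 << 1);           // set bit 1:   or with a mask        -> 0b0000_0110
    let is_set = (x & (1 << 2)) != 0; // test bit 2:  and with a mask       -> true
    let cleared = x & !(1 << 2);      // clear bit 2: and with the inverted mask (! is C's ~)

    println!("{:08b} {} {:08b}", set, is_set, cleared);
}
```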

All the others are available on the cheat sheet.

so do you know how your program executes?

August 2nd, 2015

The answer is no, and here's why you shouldn't feel peer-pressured into saying yes.

I was reading about disassembly recently when I came across this nugget:

(...) how can a disassembler tell code from data?

The problem wouldn't be as difficult if data were limited to the .data section of an executable and if executable code were limited to the .code section of an executable, but this is often not the case. (...) A technique that is often used is to identify the entry point of an executable, and find all code reachable from there, recursively. This is known as "code crawling".

(...)

The general problem of separating code from data in arbitrary executable programs is equivalent to the halting problem.

Well, if we can't even do *that*, what can we do?

We start with a program you wrote, awesome. We know that part. We compile it. Your favorite language constructs get desugared into very ordinary-looking code - all the art you put into your program is lost! Abstract syntax tree, data flow analysis, your constants are folded and propagated, your functions are inlined. By now you wouldn't even know that it's the same program, and we're still in "high level language" territory (probably some intermediate language in the compiler). Now we get basic blocks, and we're gonna lay them out in a specific order. This is where the compiler tries to play nice with the branch prediction in your cpu. Your complicated program, aka "my ode to control flow", now looks very flat - because it's assembly code. And at last we assemble into machine code - the last vestiges of intelligent life (function names, variable names, any kind of symbolic information) are lost and become just naked memory addresses.

Between your program and that machine code... so many levels. And at each level there are ways to optimize the code. And all of those optimizations happened while you were staring out the window just now.

So I started thinking about how programs execute. The fact is that predicting the exact execution sequence of a program, even in C, even in C 101 (no pointers, no threading), is basically impossible. Okay, I'm sure it's possible, but I'd have to know the details of my exact cpu model to have a chance.

I need to know how big the pre-fetch cache is. I bet there are some constants that control exactly how the out-of-order execution engine works - I need to know those. And I need to know the algorithms that are used there (like branch prediction, remember?). I need to know... oh shoot, multicore! Haha, multicore is a huge problem.

Basically, I need to know exactly what else is running on my system at this very moment, because that's gonna wreak havoc with my caches. If my L1 gets hosed by another process, that's gonna influence a load from memory that I was just about to do. Which means I can't execute this instruction I was going to. So I have to pick some other instructions I have lying around and execute those speculatively while we wait for that delivery truck to return all the way from main memory.

Speculatively, you say? Haha yes, since we have these instructions here we'll just go ahead and execute them in case it turns out we needed to do that. Granted, a lot of what a cpu does is stuff like adding numbers, which is pretty easy to undo. "Oops, I 'accidentally' overwrote eax." I guess that addition never happened after all.

And then hyper-threading! Do you know how hyper-threading works? It's basically a way of saying "this main memory is so damn slow that I can simulate the execution of *two* different programs on a single core and no one's really gonna notice".

This whole thing gives rise to a philosophical question: what is the truth about your program? Is it the effect you observe based on what you read from memory and what you see in the registers (i.e. the public API of the cpu)? Or is it the actual *physical* execution sequence of your instructions (the "implementation details" you don't see)?

I remember when virtual machines were starting to get popular around 2000 and there was a lot of discussion about whether they were a good thing - "think about the performance implications". Hell, our so-called physical machines have been virtual for a very long time already!

It's just that the cpu abstraction doesn't seem to leak very much, so you think your instructions are being executed in order. Until you try to use threads. And then you have to ask yourself the deep existential question: what is the memory model of my machine anyway? Just before you start putting memory barriers everywhere.
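
To make "memory model" slightly less abstract, here is a tiny sketch (in Rust, using atomics as stand-ins for explicit barriers): the Release store and the matching Acquire load are what guarantee the reader sees the value written before the flag.

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let value = Arc::new(AtomicU64::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let (v, r) = (value.clone(), ready.clone());
    let writer = thread::spawn(move || {
        v.store(42, Ordering::Relaxed);
        r.store(true, Ordering::Release); // the "barrier" on the writing side
    });

    while !ready.load(Ordering::Acquire) {} // ...and the matching one on the reading side
    println!("{}", value.load(Ordering::Relaxed)); // guaranteed to print 42

    writer.join().unwrap();
}
```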

So no, you don't. But none of your friends do either (unless they work for Intel).

do you know c?

November 13th, 2014

In discussions on programming languages I often see C being designated as a neat, successful language that makes the right tradeoffs. People will go so far as to say that it's a "small language", it "fits in your head" and so on.

I can only imagine that people saying these things have forgotten how much effort it was to really learn C.

I've seen newbies ask things like "I'm a java coder, what book should I use to learn C?" And a lot of people will answer K&R. Which is a strange answer, because K&R is a small book (further perpetuating the idea that it's a small language), is not exactly pedagogical, and still left me totally confused about C syntax.

In practice, learning C takes so much more than that. If you know C the language then you really don't know anything yet.

Because soon enough you discover that you also need to know the preprocessor and macros, gcc, the linker, the loader, make and autoconf, libc (at least what is available and what is where - because it's not organized terribly well), shared libraries and stuff like that. Fair enough, you don't need it for Hello World, but if you're going to do systems programming then it will come up.

For troubleshooting you also need gdb and basically fundamental knowledge of your machine architecture and its assembly language. You need to know about memory segments and the memory layout and alignment of your data structures and how compiler optimizations affect that. You will often use strace to discover how the program actually behaves (and so you have to know system calls too).

Much later, once you've mastered all that, you might chance upon a slide deck like Deep C, whose message is basically that you don't understand anything yet. What's more terrifying is the fundamental implication at play: don't trust the abstractions in the language, because when things break you will need to know how it works under the hood.

In a high-level language, given effort, it's possible to design an API that is easy to use, hard to misuse, and where doing it wrong stands out. Not so in C, where any code is always one innocuous-looking edit away from a segfault or a catastrophic security hole.

So to know C you need all of that. But that's mostly the happy path. Now it's time to learn about everything that results in undefined behavior. Which is the 90% of the iceberg below the surface. Whenever I read articles about undefined behavior I'm waiting for someone to pinch me and say the language doesn't actually allow that code. Why would "a = a++;" not be a syntax error? Why would "a[i]" and "i[a]" be treated as the same when syntactically they so clearly aren't?

Small language? Fits in your head? I don't think so.

Oh, and once you know C and you want to be a systems programmer you also need to know Posix. Posix threads, signals, pipes, shared memory, sync/async io, ... well you get the idea.

adventures in project renovation

March 9th, 2014

I'm inspired by how many great Python libraries there are these days, and how easy it is to use them. requests is the canonical example, and marks a real watershed moment, but there are many others.

It made me think back on various projects that I've published over the years and not touched in ages. I've been considering them more or less "complete". My standards for publishing projects used to be: write a blog entry, include the code, done. That was okay for simple scripts. Later on I started putting code on berlios.de and sourceforge.net. At some point github emerged and became the de facto standard, so I started using that too.

Fast forward to 2014 and the infrastructure available to open source projects has been greatly enriched. And with it, the standards for what makes a decent project have evolved. Jeff Knupp wrote a fabulous guide on this.

I decided to pick a simple case study. ansicolor is a single module whose origins I can trace back to 2008. I've seen the core functionality present in any number of codebases, because it's just so easy to hammer out some code for this and call it a day. But I never found it in a reusable form, so I decided to make it a separate thing that I could at least reuse between my own projects.

These are the steps a project is destined to pass through:

  • python3 support
  • pypi package + wheel!
  • readme that covers installation and "getting started"
  • tests + tox config
  • travis-ci hook
  • flake8 integration and fixing style violations
  • docs + Read the Docs hook

Not a single feature was added to ansicolor, not a single API was changed. Only two things really changed at the level of the code: exports were tidied up and docstrings were added. Python3 support was added too, but it was so trivial you'd have to squint to notice it.

The biggest stumbling block was actually writing the docs. As an implementor you tend to look at code in a completely different light than you do as a user of that code. Before starting on this I was thinking about how the API is a bit awkward in some places and could be improved. And how some of the functionality caters to a very narrow use case and maybe should be removed or moved to a "contrib"-like place.

But as a potential user of a library that I just discovered I don't care about any of that. I want to be able to "pip install" it. I want to have some quickstart documentation so I can have running code in 2 minutes. That's how long I'll typically spend deciding whether this code is worth my time at all, so if the implementor is busy polishing the API before even putting out a PyPI package they're wasting their time.

There is an interesting cognitive dissonance at play here. As an implementor I tend to think that the darkest corners of my code are those that most need documenting. Those are the ones most likely to bite someone. The easy stuff anyone can figure out. But as a user that's not how I see it at all. It's precisely the simplest functionality that most needs explaining, because most users have simple needs. If you do a good job documenting that you can make lots of people productive. By contrast, the complicated features have a small audience. An audience that's more sophisticated and more likely to help themselves by reading the code if need be.

Then there are the tools. I always found sphinx a bit fiddly. It's not really obvious how to get what you want, and it's permissive enough not to complain, so it takes a fair bit of doc hunting to discover how other projects do it. PyPI has a more conservative rst parser than github, so if you give it syntax it doesn't accept, it renders your page in plain text. I ended up doing a number of releases where only the readme changed slightly just to debug this. Read the Docs works well, but I couldn't figure out how to make it build from a development branch. It seems to only want to build from a tag, regardless of the branches you select, so that too inflated the number of releases.

It takes a bit of time to renovate a project, but it's all fairly painless. All these tools have reached a level of maturity that makes them very nice to use.