Rethinkdbdash for Node.js 0.10.26 posted on 29 March 2014

I just released two packages of rethinkdbdash

  • rethinkdbdash for Node.js 0.10.26
  • rethinkdbdash-unstable for Node.js 0.11.10 (and 0.11.9)

I wrote rethinkdbdash two months ago to improve the syntax of ReQL in the Node.js driver by providing

  • promises (and testing them with generators)
  • a native/automatic connection pool

While you cannot use generators with the stable version of Node.js, the connection pool alone is a good enough reason to make this driver available for stable Node.js. You basically never have to deal with connections.

For those who want to know what the syntax looks like, here it is:

var r = require('rethinkdbdash')();

r.table("comments").get("eef5fa0c").run().then(function(result) {
    // Use the result
}).error(function(error) {
    // Handle the error
});

Compared to the one with the official driver:

var r = require('rethinkdb');

r.connect({}, function(error, connection) {
    r.table("comments").get("eef5fa0c").run(connection, function(error, result) {
        if (error) { /* Handle the error */ }
        else { /* Use the result */ }
    });
});

Note: If you were using rethinkdbdash with Node 0.11.10, please switch to rethinkdbdash-unstable.

First experience on Digital Ocean - Updating Archlinux posted on 14 March 2014

I have been using a dedicated server at OVH for a few years now, but the quality of their service has worsened, and the latest incidents prompted me to look for a new server.
Digital Ocean claims to be user-friendly, and since it is quite cheap, I gave it a try.

Subscribing, setting up two-factor authentication, and starting a droplet was a blast. I picked Archlinux, and less than a minute later, my droplet was up and running.

The Arch image is quite old (June 2013), and updating the system is a little trickier than just running pacman -Syu.
These instructions were written a few hours after the installation, so they may be slightly inaccurate.

First, update the whole system. Because Arch merged /bin and /sbin into /usr/bin, and /lib into /usr/lib, you cannot just run pacman -Syu. Run instead:

pacman -Syu --ignore filesystem,bash
pacman -S bash
pacman -Su

Then remove netcfg and install netctl.

pacman -R netcfg
pacman -S netctl

Run ip addr to see your network interface. In my case it was enp0s3.

Create a config file /etc/netctl/enp0s3 with

DNS=('', '')

Enable the interface

netctl enable enp0s3

Then update the kernel via the web interface.

The network interface is going to change to something like ens3. Move /etc/netctl/enp0s3 to /etc/netctl/ens3 and change the Interface field.

Update /lib/systemd/system/sshd.service to make sure the ssh daemon doesn't fail on boot:

[Unit]
Description=OpenSSH Daemon
After=network.target

[Service]
ExecStart=/usr/bin/sshd -D
ExecReload=/bin/kill -HUP $MAINPID


Reboot and your server should be up to date.

And that's it for updating Arch. It was not the easiest of updates, but nothing impossible. It would have been nice if Digital Ocean provided an up-to-date Arch image though.

Note: You can probably directly set the network interface to ens3.
In the worst case you can still access your machine with Digital Ocean's web shell and fix things there.

Archlinux on a OVH kimsufi server posted on 15 February 2014

The Archlinux distribution installed on OVH kimsufi servers comes with a custom kernel.

While it is not super-old, it is still not up to date (at the time of writing, OVH is using 3.10.9 while Arch ships 3.12.9).

Installing Arch from scratch seems doable, but is probably a fair amount of work. Another way to get a recent kernel is to use the OVH template and then swap the kernel. For that you just need to:

  • Install Arch from the web interface
  • Update /etc/pacman.d/mirrorlist with normal values (look at mirrorlist.pacorig)
  • Generate a new grub conf with
sudo grub-mkconfig -o /boot/grub/grub.cfg
  • Make sure that the entry with the normal kernel in /boot/grub/grub.cfg is the first one.
  • Reboot

Then you get:

michel@ks******:~$ uname -r

Paving the way for the new Nodejs posted on 27 January 2014

ReQL, the RethinkDB Query Language, embeds into your programming language, making it pleasant to write queries.
The JavaScript driver is a great implementation of ReQL, but it forces users to deal with callbacks, which makes the code cumbersome.

Recently Nodejs announced that the next stable release, 0.12.0, is imminent. The biggest feature in Nodejs 0.12.0 most developers are looking forward to is generators. Since generators remove the need for cumbersome callback code, I decided to take the opportunity to write a new callback-free RethinkDB driver from scratch.

I wrote this driver taking into consideration what people were building with the current one.

  • Promises
    A few projects wrapped the RethinkDB driver with different libraries -- rql-promise, reql-then
    The new driver works with promises natively.

  • Connection pool
    Many ORMs (reheat, thinky, etc.) and connectors (sweets-nougat, waterline-rethinkdb, etc.) are implementing their own connection pools on top of the driver.
    The new driver provides a native connection pool without having to manually acquire and release a connection.

  • More accessible
    The current JavaScript driver is buried in the RethinkDB main repository, is written in CoffeeScript, and its tests are designed to run against all official drivers. In the end, it is hard to contribute to the driver.
    The new driver has its own repository, is written in plain JavaScript, and its tests are run with mocha (and on wercker).

The result of all these considerations is rethinkdbdash, which is now, as far as I know, feature complete and working.

How is rethinkdbdash better than the official JavaScript driver? Let's look at a concrete example.

Suppose you have two tables with the following schemas:

// Table posts
{
  id: String,
  title: String,
  content: String
}

// Table comments
{
  id: String,
  idPost: String, // id of the post
  comment: String
}

And suppose you want to fetch all the posts of your blog with their comments and retrieve them with the following format:

// Results
{
  id: String,
  title: String,
  content: String,
  comments: [ {id: String, idPost: String, comment: String}, ... ]
}

With ReQL you can retrieve everything in one query using a subquery[1].

This is how you currently do it with the old driver.

var r = require("rethinkdb");
// Code for Express goes here...

function getPosts(req, res) {
  r.connect({}, function(error, connection) {
    if (error) return handleError(error);

    r.db("blog").table("posts").map(function(post) {
      return post.merge({
        comments: r.db("blog").table("comments").filter({idPost: post("id")}).coerceTo("array")
      });
    }).run(connection, function(error, cursor) {
      if (error) return handleError(error);

      cursor.toArray(function(error, results) {
        if (error) return handleError(error);

        res.send(JSON.stringify(results));
        connection.close();
      });
    });
  });
}

Now look how wonderful the code looks with the new driver.

var r = require("rethinkdbdash")();
// Code for Koa goes here...

function* getPosts() {
  try {
    var cursor = yield r.db("blog").table("posts").map(function(post) {
      return post.merge({
        comments: r.db("blog").table("comments").filter({idPost: post("id")}).coerceTo("array")
      });
    }).run();
    var results = yield cursor.toArray();
    this.body = JSON.stringify(results);
  } catch(error) {
    return handleError(error);
  }
}
What is awesome here, is:

  • There are no callbacks.
  • You do not need to open/provide/close a connection.

Take a look at the usual Todo example built with AngularJS, Koa and rethinkdbdash.

We have to wait for the stable release of Node before the new driver can become mainstream. In the meantime, you can build Nodejs 0.11.10 from source to play with rethinkdbdash[2].
Once people use it for a bit and it is better tested, it should become the official driver.

Feedback, pull requests and comments are welcome!

[1] The anonymous function in map is parsed and sent to the server.
Read all about lambda functions in RethinkDB queries if that piques your interest.

[2] You need to build from source or you may get errors like

node: symbol lookup error: /some/path/protobuf.node: undefined symbol: _ZN4node6Buffer3NewEm

Quick hack - Sound volume on Leapcast posted on 05 January 2014

Leapcast currently does not let you change the sound volume with Pandora (see the source for CastPlatform).

I did a quick/dirty hack to change the sound volume on my computer with the pactl command. That only works if you are running pulseaudio (tested only on Linux for now).

This commit lets you change the sound volume (if you are running pulseaudio).

You may have to set the --pulse argument to make it work. Run pacmd list to find your running pulseaudio server.
You can provide the index (an integer) or the name of the server (like alsa_output.pci-0000_00_1b.0.analog-stereo).

Happy New Year posted on 01 January 2014

I wish my family, friends, acquaintances, and all the people in the world a happy new year!

A list of things to do in 2014:

  • Play Go again
  • Draw more
  • Give more time and money to charities
  • Keep riding
  • Play badminton again
  • Find interesting things to learn

Pandora on leapcast posted on 15 December 2013

I looked for a way to start/control Pandora on my desktop computer from my phone.

Googling around, I found out about Pianobar and Pianobar Remote.

Pianobar Remote sends commands to Pianobar via ssh, so it requires credentials to ssh into your machine. I am somehow not a big fan of granting an app access to my computer, especially when it is not open source, so Pianobar/Pianobar Remote was not a viable setup for me.

I then found out about Leapcast, a ChromeCast emulation app. However, Leapcast does not support Pandora.

I hacked my way around and now it works enough for me to start/switch music from my couch while sipping a cup of coffee and reading a book.

What's working now:

  • Start/stop Pandora
  • Play, skip, vote

What doesn't work:

  • Sound control

I also added a little thing to make Leapcast discoverable by only a set of IPs. That's useful if you have roommates that love pranks :)

See the pull request for more info.
Edit: Pull request merged :)

I also wrote a little file for systemd.
Content of /usr/lib/systemd/system/leapcast.service:

#  This file is for leapcast.

[Unit]
Description=Start leapcast

[Service]
ExecStart=/usr/bin/leapcast --chrome /usr/bin/chromium --name Gratin --ips,

[Install]
WantedBy=multi-user.target


Then run

sudo systemctl enable leapcast

Building RethinkDB on a Raspberry Pi - Part 2 posted on 13 December 2013

I previously wrote about compiling RethinkDB on a Raspberry Pi.

I merged @davidthomas426's branch in the branch next of RethinkDB (based on v1.11), and things seem to still work.
You can get this branch here: michel_arm.

Instructions to build are still the same, except that you do not need PyYaml anymore.
So far the only thing that doesn't work (that I am aware of) is the --bind all flag. You can track the progress of this bug on this GitHub issue.

I'll be going home next week (and won't travel with my Raspberry Pi), so I'll try to spend some time this weekend fixing this --bind all flag issue.

List of packages installed with version:

[michel@pi ~]$ LIST=$(pacman -Sl); for ARG in $(pacman -Qq); do echo "$LIST" | grep " $ARG "; done
core acl 2.2.52-2 [installed]
extra apr 1.4.8-2 [installed]
extra apr-util 1.5.2-3 [installed]
core attr 2.4.47-1 [installed]
core autoconf 2.69-1 [installed]
core automake 1.14-1 [installed]
extra avahi 0.6.31-11 [installed]
core bash 4.2.045-5 [installed]
core binutils 2.23.1-3 [installed]
core bison 3.0.1-1 [installed]
extra boost 1.54.0-4 [installed]
extra boost-libs 1.54.0-4 [installed]
core bridge-utils 1.5-2 [installed]
core bzip2 1.0.6-5 [installed]
core ca-certificates 20130906-1 [installed]
extra ccache 3.1.9-1 [installed]
extra clang 3.3-1 [installed]
core cloog 0.18.0-2 [installed]
core coreutils 8.21-2 [installed]
core cracklib 2.9.0-2 [installed]
core cronie 1.4.9-5 [installed]
core cryptsetup 1.6.2-2 [installed]
core curl 7.33.0-3 [installed]
core db 5.3.28-1 [installed]
core dbus 1.6.18-1 [installed]
core device-mapper 2.02.104-1 [installed]
core dhcpcd 6.1.0-1.1 [installed]
core dialog 1:1.2_20130928-1 [installed]
core diffutils 3.3-1.1 [installed]
core dirmngr 1.1.1-1 [installed]
core dnssec-anchors 20130320-1 [installed]
core e2fsprogs 1.42.8-2 [installed]
core expat 2.1.0-3 [installed]
core fakeroot 1.20-1 [installed]
core file 5.15-1 [installed]
core filesystem 2013.05-2 [installed]
core findutils 4.4.2-5 [installed]
core flex 2.5.37-1 [installed]
core gawk 4.1.0-2 [installed]
extra gc 7.2.d-2 [installed]
core gcc 4.7.2-4 [installed]
core gcc-libs 4.7.2-4 [installed]
core gdbm 1.10-3 [installed]
core gettext [installed]
extra git [installed]
core glib2 2.38.2-1 [installed]
core glibc 2.17-5.1 [installed]
core gmp 5.1.3-2 [installed]
core gnupg 2.0.22-1 [installed]
extra gperftools 2.1-2 [installed]
core gpgme 1.4.3-1 [installed]
core gpm 1.20.7-4 [installed]
core grep 2.15-1 [installed]
core groff 1.22.2-5 [installed]
extra guile 2.0.9-1 [installed]
core gzip 1.6-1 [installed]
extra haveged 1.7.c-3 [installed]
extra htop 1.0.2-2 [installed]
core hwids 20130607-1 [installed]
core iana-etc 2.30-4 [installed]
extra icu 52.1-1 [installed]
extra ifplugd 0.28-14 [installed]
core inetutils [installed]
core iproute2 3.11.0-1 [installed]
core iptables 1.4.20-1 [installed]
core iputils 20121221-3 [installed]
core isl 0.11.1-1 [installed]
core jfsutils 1.1.15-4 [installed]
core kbd 2.0.1-1 [installed]
core keyutils 1.5.8-1 [installed]
core kmod 15-1 [installed]
core krb5 1.11.4-1 [installed]
core ldns 1.6.16-1 [installed]
core less 458-1 [installed]
core libarchive 3.1.2-4 [installed]
core libassuan 2.1.1-1 [installed]
core libcap 2.22-5 [installed]
extra libdaemon 0.14-2 [installed]
core libedit 20130601_3.1-1 [installed]
core libffi 3.0.13-4 [installed]
core libgcrypt 1.5.3-1 [installed]
core libgpg-error 1.12-1 [installed]
core libgssglue 0.4-2 [installed]
extra libidn 1.28-2 [installed]
core libksba 1.3.0-1 [installed]
core libldap 2.4.37-1 [installed]
core libltdl 2.4.2-7 [installed]
core libmpc 1.0.1-2 [installed]
core libnl 3.2.22-1 [installed]
core libpipeline 1.2.4-1 [installed]
core libsasl 2.1.26-6 [installed]
core libssh2 1.4.3-2 [installed]
core libtirpc 0.2.3-2 [installed]
core libtool 2.4.2-7 [installed]
extra libunistring 0.9.3-6 [installed]
core libusbx 1.0.17-1 [installed]
core licenses 20130203-1 [installed]
core linux-api-headers 3.10.6-1 [installed]
core linux-firmware 20131013.7d0c7a8-1 [installed]
core linux-raspberrypi 3.10.19-3 [installed]
extra llvm 3.3-1 [installed]
extra llvm-libs 3.3-1 [installed]
core logrotate 3.8.7-1 [installed]
core lvm2 2.02.104-1 [installed]
core lzo2 2.06-3 [installed]
core m4 1.4.17-1 [installed]
core make 4.0-1 [installed]
core man-db 2.6.5-1 [installed]
core man-pages 3.54-1 [installed]
core mdadm 3.3-2 [installed]
core mpfr 3.1.2.p4-1 [installed]
core nano 2.2.6-2 [installed]
core ncurses 5.9-6 [installed]
core net-tools 1.60.20130531git-1 [installed]
core netctl 1.4-2 [installed]
community nodejs 0.10.22-1 [installed]
extra ntp 4.2.6.p5-17 [installed]
core openresolv 3.5.6-1 [installed]
core openssh 6.4p1-1 [installed]
core openssl 1.0.1.e-5 [installed]
aur package-query 1.2-2 [installed]
core pacman 4.1.2-4 [installed]
core pacman-mirrorlist 20130919-1 [installed]
core pam 1.1.8-2 [installed]
core pambase 20130928-1 [installed]
core patch 2.7.1-2 [installed]
core pciutils 3.2.0-4 [installed]
core pcre 8.33-2 [installed]
core perl 5.18.1-1 [installed]
extra perl-error 0.17021-1 [installed]
core pinentry 0.8.3-1 [installed]
core pkg-config 0.28-1 [installed]
core popt 1.16-7 [installed]
extra ppl 1.0-1 [installed]
core procps-ng 3.3.8-3 [installed]
community protobuf 2.5.0-3 [installed]
core psmisc 22.20-1 [installed]
core pth 2.0.7-4 [installed]
extra python2 2.7.6-1 [installed]
extra python2-pip 1.4.1-2 [installed]
extra python2-setuptools 1.3-1 [installed]
alarm raspberrypi-firmware-bootloader 20131124-1 [installed]
alarm raspberrypi-firmware-bootloader-x 20131124-1 [installed]
alarm raspberrypi-firmware-emergency-kernel 20131124-1 [installed]
alarm raspberrypi-firmware-tools 20131124-1 [installed]
core readline 6.2.004-2 [installed]
core reiserfsprogs 3.6.24-1 [installed]
community rng-tools 4-2 [installed]
core run-parts 4.4-1 [installed]
core s-nail 14.4.5-1 [installed]
extra screen 4.0.3-15 [installed]
core sed 4.2.2-3 [installed]
extra serf 1.3.2-1 [installed]
core shadow [installed]
extra sqlite 3.8.1-2 [installed]
extra subversion 1.8.5-1 [installed]
core sudo 1.8.8-1 [installed]
core sysfsutils 2.1.0-8 [installed]
core systemd 208-2 [installed]
core systemd-sysvcompat 208-2 [installed]
core sysvinit-tools 2.88-12 [installed]
core tar 1.27.1-1 [installed]
core texinfo 5.2-2 [installed]
core tzdata 2013h-1 [installed]
extra unixodbc 2.3.2-1 [installed]
core usbutils 007-1 [installed]
core util-linux 2.24-1 [installed]
core vi 1:050325-3 [installed]
extra vim 7.4.86-1 [installed]
extra vim-runtime 7.4.86-1 [installed]
extra wget 1.14-3 [installed]
core which 2.20-6 [installed]
core wpa_actiond 1.4-2 [installed]
core wpa_supplicant 2.0-4 [installed]
core xfsprogs 3.1.11-2 [installed]
core xz 5.0.5-2 [installed]
extra yajl 2.0.4-2 [installed]
aur yaourt 1.3-1 [installed]
core zlib 1.2.8-3 [installed]

Building RethinkDB on a Raspberry Pi posted on 06 December 2013

It took 3 whole days, but I did build RethinkDB on my Raspberry Pi, and it's working!

All the praise should go to @davidthomas426 who submitted a pull request for RethinkDB to support ARM.

Here is what I have done to build RethinkDB on a Raspberry Pi.
A few things about these instructions:

  • There may be a few quirks as I wrote them after building RethinkDB.
  • They are slightly different from what you can find on RethinkDB's website since the branch I used was based on 1.10.

I built RethinkDB on Archlinux Arm.
Building RethinkDB requires more than 500MB of RAM, so you have to create a swap partition. I used a 2GB swap, a smaller swap may work (I would say 1GB is enough), but I haven't tried it.

Install some dependencies.

sudo pacman -S make gcc protobuf boost python2 gperftools nodejs base-devel python2-pip

Make python2 the default python.

sudo rm /usr/bin/python
sudo ln -s /usr/bin/python2 /usr/bin/python

Install pyyaml.

sudo pip2 install pyyaml

Install v8 from AUR.

yaourt -S v8

Clone the source

git clone
cd rethinkdb
git checkout davidthomas426_277_arm_support

Run configure

./configure --dynamic tcmalloc_minimal

I also changed the swappiness to 10 - I'm not sure how useful it is though.

sudo sysctl vm.swappiness=10

You have to build with DEBUG=1 to avoid this bug (it will be fixed in 1.12)

make DEBUG=1

You may see warnings like note: the mangling of 'va_list' has changed in GCC 4.4. You can just ignore those.

After about 3 days, you can start RethinkDB with

./rethinkdb -c 1 --no-direct-io

If you are looking for the binary, it's available here.

I will try to spend some time creating a branch based on next that supports ARM and build again.

Microsoft sculpt, first impressions posted on 30 November 2013

I just got the Microsoft Keyboard Sculpt. Here are my first impressions/thoughts.

First, it works well with Linux, at least Archlinux. It started working as soon as I plugged the dongle into my computer. It also works with Windows 8, but I guess we all knew that.

Compared to the Natural Ergonomic Keyboard 4000 (that I use at work):

  • The Sculpt looks way better.
  • It's less noisy than the Natural Ergonomic 4000.
  • As a programmer who almost never uses the numeric pad, being able to move it to the side is really cool.
  • It's light, compact, thin, and easy to carry around (the magnets are strong enough to hold the plastic riser when the keyboard is lifted).

I read reviews saying that the Fn keys were hard to press. Well, they are indeed smaller, but they are not hard to hit at all.

For the rest, it seems as comfortable as the 4000 - but I haven't spent enough time with it yet.

Reinstalling Windows 8 with an OEM key posted on 29 November 2013

I spent an afternoon last weekend installing Windows 8, and here are the issues I ran into:

  • There is no easy way to retrieve your Windows 8 key when it comes pre-installed on your computer (no sticker and no official application - only third-party programs like Belarc Advisor).
  • People with an OEM key cannot directly download an ISO of Windows. They will get this error: "This product key cannot be used to install a retail version of windows 8". I had to spend more than one hour on the phone to get mine converted to a retail key so I could download the ISO.
  • You cannot download Windows 8.1 with a Windows 8 key. You will have to do the update later (which consists of downloading another 4GB ISO).

So the good news is: if you have an OEM key, you can download an official Windows 8 ISO. You will just have to go through pain and blood :)

Overall, from what I have seen so far, the new interface of Windows 8.1 for phones and tablets is really great, even better than Android's and iOS's. On desktops, the experience doesn't seem as smooth as on tablets. I ran into a few annoying things, but I will have to spend a little more time using it to make up my mind.

Python driver for RethinkDB with protobuf 2.5 posted on 27 October 2013

The python driver for RethinkDB has two implementations of the Google protobuf protocol: one written in pure Python and one in C++. The C++ back-end is significantly faster (see the 1.7 release post).

The protobuf files provided in the pip package are compiled with protobuf 2.4 (the version currently supported by Ubuntu). So if you are using a more recent version of protobuf (for example on Arch), you will see this error:

In file included from ./rethinkdb/
./rethinkdb/ql2.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
#error This file was generated by an older version of protoc which is
./rethinkdb/ql2.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
#error incompatible with your Protocol Buffer headers.  Please
./rethinkdb/ql2.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
#error regenerate this file with a newer version of protoc.
*** WARNING: Unable to compile the C++ extension
command 'gcc' failed with exit status 1
*** WARNING: Defaulting to the python implementation

You can create a package with the appropriate protobuf file if you clone rethinkdb, but if you are lazy, here is the package with the protobuf 2.5 files.

Install it with

sudo pip install rethinkdb-1.10.0-0-protobuf-2.5.tar.gz

If it worked fine, this command should print "cpp":

python -c "import rethinkdb as r; print r.protobuf_implementation"

The Language Instinct - How the Mind Creates Language posted on 21 September 2013

I just finished reading The Language Instinct: How the Mind Creates Language by Steven Pinker.

It is a fascinating book that presents some quite interesting theories and facts about language that I was not aware of - I, for example, never thought of language as a human instinct.

Being a non-native English speaker, some examples were a little tricky to get, but overall the book is easy to read, full of examples, and just so engaging - I read the whole book during my flight from San Francisco to Hong Kong.

If you are curious about things and want to read something interesting, give it a try, you will not regret it.

Rocco and Redcarpet posted on 12 September 2013

I recently ran into an issue with the rocco gem using redcarpet. The error was:

/var/lib/gems/1.9.1/gems/rocco-0.8.2/lib/rocco.rb:447:in `process_markdown`: uninitialized constant Rocco::Markdown (NameError)

One way to fix it is to replace line 36 of /var/lib/gems/1.9.1/gems/rocco-0.8.2/lib/rocco.rb with

libs = %w[redcarpet/compat rdiscount bluecloth]

Basically redcarpet was updated and rocco did not change its code to match the new redcarpet.

Thinky v0.2.15 - But what is it? posted on 08 September 2013

I just updated thinky with support for dates, and thought it would be a nice occasion to talk a little about this project.

What is it?

Thinky is a JavaScript ORM for RethinkDB.

Other JavaScript ORMs

As far as I know there is just one other JavaScript ORM for RethinkDB, which is a plugin for jugglingdb named jugglingdb-rethink. The goal of jugglingdb is to provide an ORM with one syntax that would work on many databases, including MySQL, Redis, MongoDB etc.

I tend to think that ReQL has a really great API - but that may be my biased point of view. Anyway, I would rather not use an API that uses JSON to represent relations like in or logic operators like and. Thinky aims to provide an API as nice as the official Node driver.

So what is in the box?

Thinky aims to stick close to the official node driver. A few differences are:

  • You can set default values when creating an object (including functions that will be called when the object is created), so if you want to store the date at which an object is created, you can use this schema:

    var Cat = thinky.createModel('Cat', {
        name: String,
        createdAt: {_type: Date, default: function() { return new Date() }}
    });
    var kitty = new Cat({
        name: "Kitty"
    });
    // kitty will have a field `createdAt`.
  • You can pass the callback directly to the last method instead of using the run command, so these two syntaxes are equivalent:

    Cat.get('3851d8b4-5358-43f2-ba23-f4d481358901', callback);
    Cat.get('3851d8b4-5358-43f2-ba23-f4d481358901').run(callback);
  • You can define methods on your schemas, making your code look more like an object-oriented program.

    var Cat = thinky.createModel('Cat', { name: String });
    Cat.define('sayHello', function() { console.log("Hello, I'm " +; });
    var kitty = new Cat({ name: "Kitty" });
    kitty.sayHello(); // Hello, I'm Kitty
  • Because you define schemas and relations, doing a JOIN operation is as simple as calling the getJoin method.

    Cat = thinky.createModel('Cat', {id: String, name: String});
    Task = thinky.createModel('Task', {id: String, task: String, catId: String});
    Cat.hasMany(Task, 'tasks', {leftKey: 'id', rightKey: 'catId'});
    Cat.get('b7588193-7fb7-42da-8ee3-897392df3738').getJoin(function(err, result) {
        // Returns a cat with its tasks
    });

Note: Some limitations exist for getJoin now and should be solved once the proposal described here is implemented.

Awesome, how do I get it?

There is an npm package, just run

npm install thinky

And you should be all set.

Nice, how can I help?

Any bug reports, suggestions, pull requests (code or docs) are more than welcome!

Moving my blog to Github posted on 06 September 2013

I have decided to move my blog from my personal server to Github.

A few reasons behind this move

  • I have a hard time writing things without vim
  • Pushing on Github has become easier than using Wordpress
  • Updating Wordpress is kind of annoying
  • The syntax highlighting works
  • Posts should be cleaner
  • I can use my server for other purposes
  • Etc.

It is basically just going to make my life easier.

I was also bored of the previous design, so I just made a new one (even though I am not sure we can call this a "design")

While moving some old posts, I have also found a few interesting things that I have not published yet and may release:

  • A look at my studies - and how useful (or not) they are
  • A system to do distributed computing in browsers

Auto expand textarea with AngularJS posted on 14 August 2013

While working on Chateau (data explorer for RethinkDB), I had to write a directive to auto expand a textarea to avoid scrollbars. Since it may be useful for some people, here is the code:

The HTML code

<textarea ng-auto-expand>Whatever content you need</textarea>

The CSS code

    textarea {
        overflow: hidden;
        padding: 4px;
    }

The directive

angular.module('chateau.directives', [])
    .directive('ngAutoExpand', function() {
        return {
            restrict: 'A',
            link: function($scope, elem, attrs) {
                elem.bind('keyup', function($event) {
                    var element = $;

                    var height = $(element)[0].scrollHeight;

                    // 8 is for the padding
                    if (height < 20) {
                        height = 28;
                    }
                    $(element).css('height', height + 'px');
                });

                // Expand the textarea as soon as it is added to the DOM
                setTimeout(function() {
                    var element = elem;

                    var height = $(element)[0].scrollHeight;

                    // 8 is for the padding
                    if (height < 20) {
                        height = 28;
                    }
                    $(element).css('height', height + 'px');
                }, 0);
            }
        };
    });

Chateau - Few screenshots posted on 11 August 2013

Quick post about Chateau because I have way too many things to finish this weekend -- and too many things that I won't have time to do :'(.

You can browse your databases and tables:
See screenshot

Basic operations are available on databases and tables: you can create/delete them. Chateau now relies only on the JavaScript driver, so renaming a table is not available for the moment.
See screenshot

Documents are displayed in a table similar to the RethinkDB native web interface.
See screenshot

Hovering on a document shows some actions available on the left:
See screenshot

You can delete a document just by clicking on the trash icon and confirming the deletion.
See screenshot

You can also update the document.
See screenshot

Eventually you can add a new document. Because RethinkDB doesn't enforce any schema, Chateau samples your table and creates a schema based on the documents it retrieves. So you should mostly just have to fill in the fields, not create them.
See screenshot

More features are coming. Feedback and pull requests are appreciated : )

Blog example with Thinky posted on 30 July 2013

I have recently been working on thinky (a JavaScript ORM for RethinkDB), and I just finished adding a new example on how to use thinky.

This example is a blog built with Node, Thinky, Express and AngularJS. The purpose of this example is to illustrate how to use Thinky so I will not spend time writing about AngularJS or Express. If people are interested, I wouldn't mind writing about it, but I added some comments in the code, so understanding how the stack works shouldn't be too hard (especially since these frameworks are quite user-friendly).

There are two files that use Thinky:


In this file, we just load the module with require() and call .init(), which creates the pool of connections that will be used to make queries.

thinky.init({
    port: config.port,
    db: config.db
});


This file contains all the interesting things about thinky.


We first create models with thinky.createModel()

var Post = thinky.createModel('Post', {
    id: String,
    title: String,
    text: String,
    authorId: String,
    date: {_type: Number, default: function() { return }}
});
var Author = thinky.createModel('Author', {
    id: String,
    name: String,
    email: String,
    website: String
});
var Comment = thinky.createModel('Comment', {
    id: String,
    name: String,
    comment: String,
    postId: String,
    date: {_type: Number, default: function() { return }}
});
  • The first argument is the name of the model (which is also the name of the table).
  • The second argument is the schema. A schema is just an object where fields map to a type (String, Number, Boolean etc...) or to an object with a _type field. You can also pass options in the latter case like a default value.

One really nice thing about RethinkDB is that even though it is a NoSQL database, it lets you do efficient JOINs between tables. In our case, we want to create two relations:

  • One post has one author. This join is performed on the condition post.authorId == We would like to store the author in a field named author, so the syntax is:
Post.hasOne( Author, 'author', {leftKey: 'authorId', rightKey: 'id'})
  • One post can have multiple comments. This join is performed on the condition == comment.postId. We would like to store the joined comments in the field comments, so the syntax is:
Post.hasMany( Comment, 'comments', {leftKey: 'id', rightKey: 'postId'}, {orderBy: 'date'})

The last argument, the object with the orderBy field, holds the options of the JOIN operation. In this case, we want the comments to be ordered by their date.

Now that we have set all our models, let's look at how we use our models to make queries:

Basic operations

  • To retrieve a single post, the syntax is pretty close to the ReQL one:
Post.get(id).run(function(error_post, post) { ... })

In ReQL the query would be

r.db("blog").table("Post").get(id)
    .run(connection, function(error_post, post) { ... })

The main difference here is that you do not need to deal with connections when using Thinky; Thinky takes care of maintaining a pool of connections for you.
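To give an idea of what such a pool does behind the scenes, here is a minimal sketch in plain JavaScript. The names (SimplePool, createConnection) are made up for this illustration; rethinkdbdash's real pool also handles reconnection, timeouts, and draining.

```javascript
// Minimal sketch of a driver-side connection pool (hypothetical names).
function SimplePool(createConnection, size) {
    this.available = [];
    for (var i = 0; i < size; i++) {
        this.available.push(createConnection());
    }
}
// Borrow a connection; the caller must release it when the query is done.
SimplePool.prototype.acquire = function() {
    if (this.available.length === 0) throw new Error("Pool exhausted");
    return this.available.pop();
};
SimplePool.prototype.release = function(conn) {
    this.available.push(conn);
};

// Usage: run() acquires a connection, "executes" the query, releases it,
// which is why the user never sees a connection object.
var nextId = 0;
var pool = new SimplePool(function() { return {id: nextId++}; }, 3);
function run(query, callback) {
    var conn = pool.acquire();
    // A real driver would send the serialized query over `conn` here.
    var result = query.toUpperCase(); // stand-in for a server response
    pool.release(conn);
    callback(null, result);
}

run("r.table('comments')", function(err, result) {
    console.log(result);
});
```

The point is only the acquire/release cycle hidden inside run(); everything else about real pools (health checks, growth) is omitted.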

  • Like in ReQL, you can chain commands with Thinky. While chaining is really nice, you may sometimes want to execute a query without calling .run(). In that case you can just pass an extra callback to any function. For example, this query
Author.get(id, function(error, author) { ... })

is the same as

Author.get(id).run(function(error, author) { ... })
  • Saving an object in the database is as simple as calling .save() on an instance of a model.
var newPost = new Post(req.body);, result) { ... })

The ReQL query being:

r.db("blog").table("Post").insert(newPost)
    .run(connection, function(error, result) { ... })
  • In a similar way, you can update an object by calling .update().
var newPost = new Post(req.body);
newPost.update( function(error, post) { ... })

The equivalent ReQL query would be

r.db("blog").table("Post").get(
    .run(connection, function(error, result) { ... })

Note: If you call save() twice on an object created with new, it will use insert() the first time and update() the second time. The reason Thinky does not upsert by default is that I believe upserts could lead to undesired deletions/updates.

  • If you want to delete the document with a certain id, you will have to select it with .get() first then call .delete() on it.

    Post.get(id).delete( function(error, result) { ... })

    Which is in ReQL:

        r.db("blog").table("Post").get(id).delete()
            .run(connection, function(error, result) { ... })

    One nice thing with ReQL is that this deletion is done in only one query. Another way to do it would be to fetch the document with get(), then call delete() on the document, but that would fire two ReQL queries.

Cool things

  • Let's now look at the nice things that Thinky provides. The first query you can read in the api.js file retrieves all the posts, orders them by date in descending order, and retrieves all the joined documents (that is to say, the author and the comments).

    This is how you would do it with Thinky:

    Post.orderBy('-date').getJoin().run(function(error, posts) { ... })

    The equivalent ReQL query is:

        r.db("blog").table("Post").orderBy(r.desc("date"))
            .map( function(post) {
                return post.merge({
                    author: r.db("blog").table("Author").get(post("authorId")),
                    comments: r.db("blog").table("Comment")
                        .getAll( post("id"), {index: "postId"})
                        .orderBy("date")
                        .coerceTo("array")
                })
            }).run( connection, function(error, posts) { ... } )

    Note 1: Everything happens in the database. Thinky does not process the data itself.

    Note 2: Thinky currently does not use the ReQL eqJoin command because it behaves like an inner join (see this github issue).

  • You can also retrieve joined documents for only one element. You just have to call getJoin() like before:

    Post.get(id).getJoin().run(function(error, post) { ... })

The other queries in routes/api.js are similar to the previous ones but are done on other tables.

Thinky is a new library. If you have any feedback or suggestions, I would love to read them!

Playing with D3js and RethinkDB posted on 17 June 2013

Here is my last weekend project.

This project has three purposes/reasons:

  • Kill time
  • Play with RethinkDB's python driver
  • Draw cool stuff with D3.js

The project is about drawing a graph of similar movies. Users can expand the graph by clicking on a node. Because I am limited to 10 requests per second, I thought it wouldn't be stupid to cache things. Instead of caching things in memory with a home-made memcached, I thought it would be more fun to use RethinkDB.

So here is what happens every time a user asks for some data:

  • We check if we have the data in the database
  • If we do, we just return the results
  • If we don't, we fetch some data from Rotten Tomatoes, dump it in RethinkDB and send it back to the user.

The main components of the stack are Flask, RethinkDB and D3.js. There are some secondary components too, like CherryPy, Bootstrap and jQuery.

I would note two things from this project:

  • While the force layout always converges to a "good" representation, it is quite unstable during the first ticks. I tried to tweak the algorithm a little to make it oscillate less at the beginning, but could only improve it up to a point.
  • It's pretty cool to write queries like
movie = r.table("movie").get(id_movie).do( lambda movie:
    r.branch(
        # If we didn't find the movie or didn't find the similar movies
        (movie == None) | (~movie.has_fields("similar_movies_id")),
        # We just return the movie/None
        movie,
        # Else we add a field similar_movies with the relevant data
        movie.merge({
            "similar_movies": movie["similar_movies_id"].map(lambda similar_movie_id:
                r.table("movie").get(similar_movie_id))
        })
    )).run( g.rdb_conn )

I have started to implement the search feature, but I think I'll leave it as it is. I have other cool ideas in mind :-)

Pancakes on Mt. Tamalpais posted on 16 June 2013

West Inn serves pancakes a few times per year.

I went there on a road bike, and even if there is a short dirt section, it's totally doable; I rode 23mm tires. From San Francisco, it's about 3,600 feet of climbing in total. You can also just drive there and then hike a little.

It's really nice to get pancakes there: you have an awesome view (you can see SF in the clouds, the bay, and the ocean), and after a little effort, it tastes even better. Having some company, like today, makes it better too.

I'll go for pancakes next time for sure :-)

Today I biked to work posted on 09 April 2013

And I'm pretty happy about that.

Note: For those who don't know, I'm living in San Francisco and working in Mountain View at RethinkDB, which is about 45 miles from my place.

It was my first time biking to work, and yesterday I was hesitating a bit because my legs were still a little tired from my trip on Sunday. But I do believe that I should enjoy life as much as I can, so that when I'm old I can say "I had a good life, and I'm happy with what I did". So in the end I just went for it, and I have to admit, it was awesome!

The first part, leaving San Francisco, isn't really nice (too many cars), but the rest is quite good. It was awesome to bike next to the bay in the nice sun. I'll definitely do it again, and as soon as I know what maximum pace I can sustain, I'll join a group at sf2g.

TL;DR: I'm happy, I biked to work and it was awesome.

Quickstart with ReQL posted on 05 April 2013

This (not so quick) start relies on JavaScript. If you want to use Python or Ruby, skip the parts that deal with callbacks.

This article can be split into 4 parts:

  1. Connection and meta queries
  2. Usual operations
  3. Joins
  4. Map-reduce

Connection and meta queries

Start your server with

$ rethinkdb

Start node.js. The code starts with a require

var r = require('rethinkdb');

All the functions we are going to use are methods of the variable r.

Then connect to the server. If you haven't started the server on your local machine, or if you used the flags --driver-port or --port-offset, update the port value (look for the line info: Listening for client driver connections on port).

var connection = null;
r.connect( {host: 'localhost', port: 28015} , function(err, conn) {
    if (err) throw err;
    connection = conn;

The callbacks defined in the JavaScript driver use the Node convention function(err, result) {}.

This is how you list the databases you have

r.dbList().run( connection, function(err, result) {
    if (err) throw err;

r.dbList() is the query built by the JavaScript driver. When you call .run() on it, the query is compiled into a binary blob (using Google's protobuf library) and sent to the server. When the driver gets the response back, it executes the callback.

If you just started RethinkDB, you should see a database test.

Now let's create a database blog.

r.dbCreate("blog").run( connection, function(err, result) { ... })

You can check that it was created with .dbList(). Now create a table users.

r.db("blog").tableCreate("users")
    .run( connection, function(err, result) { ... } )

The query starts with r, then we "select" the database blog, and then we create the table users. If you log the result, you should see { created: 1 }.

You can verify again that the table was created with .tableList()

r.db("blog").tableList()
    .run( connection, function(err, result) { ... } )

Again, we select the database blog first and only then list the tables.

Usual operations

Let's now insert some data in the table users.

r.db("blog").table("users")
    .insert( {
        name: "Michel",
        age: 26,
        karma: 108,
        gender: "male"
    }).run( connection, function(err, result) { ... } )

The callback should log an object like this:

{
    "inserted": 1,
    "errors": 0,
    "generated_keys": ["3c05a2cc-de49-463c-9a7e-abeeef7f9568"]
}
When we created the table with .tableCreate(), we just passed the name of the table, so RethinkDB picked the default name for the primary key, which is id.

The object we inserted didn't have a value for id, so RethinkDB generated a random value for us: 3c05a2cc-de49-463c-9a7e-abeeef7f9568.

Let's retrieve our data.

r.db("blog").table("users").run( connection, function(err, cursor) {
    cursor.toArray( function(err, result) {
        ...
    })
})
Wait, what are these doubly nested callbacks?

r.db("blog").table("users") is going to return all the documents in the table users as a stream (which can be huge). That's why the callback in run() provides a cursor.

Here we call toArray( callback ), which converts the cursor to an array and calls our callback with it. A RethinkDB cursor comes with other useful methods like .each(), .hasNext() and .next(). There is an issue on GitHub to make the API more consistent.
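To make the cursor API concrete, here is a toy in-memory cursor with the same shape (hasNext()/next()/each()/toArray()). This is only an illustration of the interface; the real cursor lazily fetches batches from the server instead of holding everything in an array.

```javascript
// Toy cursor over an in-memory array, mimicking the driver's cursor API.
function Cursor(rows) {
    this.rows = rows;
    this.index = 0;
}
Cursor.prototype.hasNext = function() {
    return this.index < this.rows.length;
};
Cursor.prototype.next = function(callback) {
    callback(null, this.rows[this.index++]);
};
Cursor.prototype.each = function(callback) {
    while (this.hasNext()) {
        this.next(callback);
    }
};
Cursor.prototype.toArray = function(callback) {
    var result = [];
    this.each(function(err, row) { result.push(row); });
    callback(null, result);
};

// Usage mirrors the driver: convert the cursor to an array, then work on it.
var cursor = new Cursor([{name: "Michel"}, {name: "Laurent"}]);
cursor.toArray(function(err, users) {
    console.log(users.length); // 2
});
```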

Let's insert a little more data before going further.

r.db("blog").table("users")
    .insert([
        {name: "Laurent", age: 28, karma: 42, gender: "male" },
        {name: "Sophie", age: 22, karma: 57, gender: "female" },
        {name: "Aurelie", age: 18, karma: 90, gender: "female" },
        {name: "Dan", age: 15, karma: 31, gender: "male" }
    ]).run( connection, function(err, result) { ... } )

To extract all users that are at least 21, we can use a filter this way:

r.db("blog").table("users")
    .filter(r.row("age").ge(21))
    .run( connection, function(err, cursor) { ... } )

The method filter() takes a predicate as argument. Here, r.row selects the current row being accessed, and ("age") selects the attribute age. In Python or Ruby, the equivalent syntax is r.row["age"]. JavaScript doesn't allow overloading the square bracket operator, so it has been replaced with parentheses. Once we have the age of the user, we compare it with ge(), which stands for "greater than or equal".

That is one way to do things. A really cool thing about RethinkDB is that you can pass anonymous functions (also known as lambda functions). Here is the same query with an anonymous function:

r.db("blog").table("users")
    .filter( function(user) {
        return user("age").ge(21)
    }).run( connection, function(err, cursor) { ... } )

If you want to update all users that are less than 21 with a boolean field { can_drink: false }, you first have to select your data with filter() and then update it with update(). Here is how it is done:

r.db("blog").table("users")
    .filter(r.row("age").lt(21))
    .update( { can_drink: false } )
    .run( connection, function(err, result) { ... } )

RethinkDB provides a way to completely replace a document with the method replace(). That's also how you remove an attribute. Let's remove the attribute can_drink from our users who are more than 18. The philosophy stays the same: we first select our users, then replace them.

r.db("blog").table("users")
    .filter(r.row("age").gt(18))
    .replace(r.row.without("can_drink"))
    .run( connection, function(err, result) { ... } )

r.row.without("can_drink") returns the current element without the field can_drink.

If you want to delete all users less than 18, use the command delete().

r.db("blog").table("users")
    .filter(r.row("age").lt(18))
    .delete()
    .run( connection, function(err, result) { ... } )

Another way to select a document is get(). The example below returns the document whose primary key is "3c05a2cc-de49-463c-9a7e-abeeef7f9568". This query is more efficient than a simple filter because it does the lookup through a B-tree. Right now, get() only supports the primary key. It will soon be able to do the same with a secondary index (see #602 for progress).

r.db("blog").table("users")
    .get("3c05a2cc-de49-463c-9a7e-abeeef7f9568")
    .run( connection, function(err, result) { ... } )

That's all for the common operations.


Joins

Let's take a look at how to do joins. Even though RethinkDB is a NoSQL database, it provides efficient joins with the .eqJoin() method.

Let's suppose that we have a table comments with the following schema

{
    id: <string>
    id_author: <string>
    comment: <string>
}

To retrieve all the comments along with the name of their author, we can do a join this way:

r.db("blog").table("comments")
    .eqJoin("id_author", r.db("blog").table("users"))
    .zip()
    .run( connection, function(err, result) { ... } )

We take the stream of comments and, for each comment, look up the user whose id matches the id_author of the current comment.

zip() merges the comment and its author into the same document. If you want to do a join on non-primary keys, you can use innerJoin() or outerJoin().

Map reduce

Let's now look at the cool things. RethinkDB provides an efficient way to do map/reduce. Suppose you want to know the sum of all karma values:

r.db("blog").table("users")
    .map(r.row("karma"))
    .reduce(function(a, b) {
        return a.add(b)
    }, 0)
    .run( connection, function(err, result) { ... } )

The method map extracts the value of the field karma for each document, then reduce sums all those values.

The second parameter of reduce() is the base. A naive implementation would look like this:

map    -> [108, 42, 57, 90 ]
reduce -> 0 + 108 + 42 + 57 + 90 -> 297

The method reduce() is implemented in parallel, so you are not free to specify whatever you want as a base. Suppose that the first two documents are on a server and the last two are on a second server.

What happens is going to be something like that

map    -> [    108, 42,           57, 90      ]
           data on server 1 - data on server 2

reduce ->  Server 1
           0+108 -> 108
           108+42 -> 150
           Server 2
           0+57 -> 57
           57+90 -> 147
           Combining both servers:
           150+147 -> 297

So if you want to retrieve the sum of the karma plus 10, you cannot pass 10 as the base; you have to add 10 after the reduce. The base has to be neutral.

RethinkDB also provides a groupedMapReduce method that groups rows, maps them, and eventually reduces the results.

Here is how to compute the sum of the karma of the two genders.

r.db("blog").table("users").groupedMapReduce(
    r.row("gender"), // group by gender
    r.row("karma"), // map the value of karma
    function(a, b) { return a.add(b) }, // sum the karma
    0 // base
).run( connection, function(err, result) { ... } )

And that's all for the quick start. This was a quick glance at how to do common operations and some fancy (yet useful) things like joins and map-reduce. I didn't cover everything, like how to set a default database. If you want to know more about ReQL, take a look at the API docs.

Chateau - A data explorer for RethinkDB posted on 04 March 2013

A little story

Two weeks ago I started to bootstrap a new idea. At some point I had to write an administration interface to manage my data, and that was just a huge pain. So I decided to write one that I could easily reuse, and that is how Chateau was born.

One of the reasons people love frameworks like Django is that you just create the schema of your table, and Django provides you with a nice HTTP interface to add/edit/delete rows. So I thought about doing something similar for RethinkDB, with two small differences.

  • The web has tons of different stacks, so I wrote Chateau to be independent of your stack. It is a standalone Node.js server. Another reason to have a standalone server is that RethinkDB doesn't provide authentication (see issue), so people may want to take down their admin interface when they are done bootstrapping.
  • The other notable difference is that when bootstrapping a project, your data's schema might change, and you probably do not want to edit a config file every time, so Chateau infers your schema by itself.

What's in the box?

Well, sorry, there is not really a box; you have to build from source. You need to install Node and the following libraries: express, coffee-script, handlebars, stylus, rethinkdb. I will do some packaging later if people are interested.

So what's in the source?

The alpha version comes with these features:

  • It lists all your databases and tables and lets you create and delete them. Renaming a database or a table cannot be done with the driver (RethinkDB 1.3.2), so I will wait for 1.4 to be released instead of hacking something with the HTTP API.
  • It shows 1000 documents in a table (pagination is not yet available). The first column is the primary key and the following ones are ordered (the most shared attribute first).
  • You can add a document. Chateau will infer the schema and just ask you to fill some inputs. If you disagree with the proposed schema, changing it is just a matter of toggling a select tag.
  • You can update/delete a document.

Cool, what's next?

Can be done anytime depending on how much time I have:

  • Fix some CSS edge cases
  • Do the main TODOs in the code
  • Refactor the code for adding/updating a document
  • Implement/test all error handling
  • Improve Makefile
  • Add loading for logout
  • Change the icon
  • Screenshots/screencast

Waiting for RethinkDB 1.4 (hopefully this week)

  • Update Chateau to work with the new API of the JavaScript driver
  • Sort the documents
  • Filter documents

Advanced features that I am interested in implementing because they are tricky and no one has done them before.

  • Support joins
  • Add custom actions
  • Add custom views

Things that may never be done, but who knows?

  • Handle multiple RethinkDB instances at the same time
  • Support https

If you are interested in joining or want to provide some feedback, you are welcome! Ping me at [email protected], neumino on GitHub, or @neumino on Twitter.

Archlinux - Wacom intuos 5 with bluetooth posted on 20 February 2013

I just received my tablet yesterday, a Wacom Intuos 5 medium. Before buying it, I went through the web and couldn't really figure out whether the bluetooth worked or not. I eventually bet that it did and ordered a wireless kit with my tablet, and the good news is: it works!

Installing the driver is quite easy.

pacman -S xf86-input-wacom

At that point your tablet works over the cable or bluetooth, but the buttons are not mapped when connected over bluetooth.

Mapping your buttons is a little more tricky. Gnome 3.6 doesn't work for me, so I just ended up using the xsetwacom command, first finding the buttons' ids with xev as explained on Arch's wiki. The tricky part is bluetooth: for some obscure reason xev doesn't catch the events, but that's just xev failing. Find the ids of your buttons using the cable, then just use xsetwacom and it works. For me the buttons on my pad are (from top to bottom): 2, 3, 8, 9, 1, 10, 11, 12 and 13.

The command looks something like that:

xsetwacom --set "Wacom Wireless Receiver Pen pad" Button 1 "key ctrl z"

These settings are not persistent. I quickly tried to use an xorg conf file, but I just ended up filling my .profile with the commands.

Everything is set now and you can just draw :)

Installing Archlinux on a Lenovo X1 Carbon posted on 14 February 2013

I just finished installing/setting up everything on my X1 Carbon, and I have to say, it's just beautiful.

Installing Archlinux

Installing Archlinux on the X1 Carbon is pretty straightforward. The X1 Carbon is quite Linux-friendly (well, for starters there is no NVIDIA card).

  • Partition your disk with parted (cfdisk doesn't work on the X1 Carbon). I'm using grub, so I had to use the command "set X bios_grub on" where X is the number of the /boot partition
  • Format partition with mkfs.ext2 and mkfs.ext4
  • Mount /
  • Check your connection. Using the dongle that Lenovo provides works out of the box.
  • Set up pacman with

    mkdir -p /mnt/var/lib/pacman
    pacman -r /mnt -Sy base
  • Sign the keys

    rsync -rav /etc/pacman.d/gnupg/ /mnt/etc/pacman.d/gnupg/
  • Chroot your environment

    mount --bind /dev /mnt/dev
    mount --bind /sys /mnt/sys
    mount --bind /proc /mnt/proc
    chroot /mnt /bin/bash
  • Edit/fill /etc/fstab

  • Setup your hostname, locale, timezone

  • Build your initial ram disk

    mkinitcpio -p linux
  • Install grub

    grub-install --boot-directory=/mnt/boot /dev/sda
    grub-mkconfig -o /mnt/boot/grub/grub.cfg

  • Add an entry in your grub.cfg file.

menuentry "Archlinux" {
    set root=(hd0,X)
    linux /boot/vmlinuz-linux root=/dev/sdaX
    initrd /boot/initramfs-linux.img
}
  • Reboot and here you go.

Configuring Archlinux

Now your system should boot. Here are some tips to configure your X1. I'm using Gnome (because it's awesome), so if you are not, some tips may not apply to you.

  • The ethernet interface has a strange name (enp0s26u1u2 for me). Run these commands to set up your dongle the first time, so you can install a network manager.

    ls /sys/class/net
    ip link set XXX up
  • It may not be related to the X1 Carbon, but I had some issues switching from a US keyboard to a US International with dead keys. You can see my monologue on Arch's forums. I ended up adding a personal shortcut calling this script:

    (setxkbmap -query | grep "variant:\s\+intl") && (setxkbmap -model thinkpad -layout 'us') || (setxkbmap -model thinkpad -layout 'us' -variant 'intl')
  • Brightness works in the Gnome settings panel, but my keys did not. After some investigation, Gnome uses the wrong values to set the brightness. I was going to strip out the binary from this Gnome extension and bind my Fn keys to it when I realized that the extension could do it for me.

  • If you use cpufreq, you'll see your CPU scale from 800 MHz to 1.8 GHz without going higher (on the ondemand governor). If you use i7z, you'll see that Turbo Boost works out of the box and that it's just cpufreq reporting the wrong number.

  • To disable the touchpad while typing, I'm using this command (just put it in .profile). syndaemon -d -k -i 0.6s

  • I tried to set up the fingerprint reader (hardware: UPEK 147e2020) and, with some pain, got it working for sudo but not for gdm. I used fingerprint-gui from this archive and updated /usr/lib/udev/rules.d/40-libbsapi.rules with the right id. See this page for more details. Fingerprint-gui broke my Gnome (v3.6) and I didn't push further.

Random thoughts

The reason I bought an X1 Carbon when I have a Macbook retina at home is that I found the user experience awful. Not everything is bad on OS X, but there were too many little things that I found annoying. Here are some of them:

  • Waking up from sleep can take up to 15 seconds.
  • When switching virtual desktop, I have to wait for the animation to finish before being able to do something (like opening spotlight to launch an application).
  • There is still no native way to manipulate your windows like you can on Windows, Unity or Gnome. And the third-party apps I tried were not as good as Gnome.
  • I cannot move the first desktop.
  • Installing/uninstalling things is pure black magic for me. Maybe that's why people are using homebrew now.

I used to have an iBook years ago but since I broke it, I switched to a netbook+desktop.

I have good memories of OS X from that time, but since I tried Gnome 3, OS X just looks completely broken.

A Macbook is still a nice piece of hardware. I liked that Apple pushed a high-definition screen for a laptop, but OS X was too much of a pain for me. The aluminium case is nice, but it has two problems:

  • It gets really hot. I cannot keep my fingers on the "t" and "y" keys when doing heavy tasks.
  • It's really cold when I start using it.

Well anyway, I'll just give my macbook retina to my brother and use my X1 Carbon. I had a great time setting up everything, and it should be the same for using it :)

Building RethinkDB from source on Archlinux posted on 09 February 2013

I just received a Lenovo X1 Carbon and installed Archlinux on it.

Since I had a fresh install, I tried to build RethinkDB from source to see what the dependencies were. If you just want to use RethinkDB, you can use the AUR package.

You will need to install some packages.

sudo pacman -S make gcc boost-libs protobuf boost python2 v8 libunwind gperftools java-runtime nodejs
yaourt -S ruby_protobuf
sudo npm install -g coffee-script
sudo npm install -g less
sudo npm install -g handlebars

Some libraries are not properly linked on Archlinux. You need to add these two symbolic links, or grep the files that use libprotobuf.a and libprotoc.a and make the appropriate changes.

sudo ln -s /usr/lib/ /usr/lib/libprotobuf.a
sudo ln -s /usr/lib/ /usr/lib/libprotoc.a

RethinkDB uses Python 2 to run some scripts, but invokes it as python. Here is a quick and dirty solution.

sudo ln -s /usr/bin/python2 /usr/bin/python

If you are not on a fresh install and python needs to keep referring to Python 3, you can, for the time being, run a find+sed command to replace all references to python with python2. That's what the PKGBUILD does, by the way.

find ./ -type f -exec sed -i '1 s/python$/python2/' {} +

And that's all you need to build RethinkDB from source.

Case study with RethinkDB Muni's predictions posted on 22 January 2013

Note: RethinkDB has changed the API for the JavaScript driver and these queries are outdated.

Muni stands for San Francisco Municipal Transportation Agency; it is the agency running all the buses in San Francisco. I take the bus every weekday to go to and come back from the Caltrain station.

Muni provides an API to get the time the next bus will arrive. I have noticed that the predictions are not always accurate and are sometimes completely off. So I thought it would be nice to quantify these inaccuracies and play with ReQL, RethinkDB's query language.

Note: I work at RethinkDB, but this article is just the result of me playing around during a weekend.

First, let's retrieve some data and store it in the database. I have written a script that pulls data from Muni's API every 20 seconds and stores it in the database. It is written in CoffeeScript and uses Node.js with the http and xml2js libraries. You can read the raw file or clone the GitHub repository.

Pulling data for all the stops of line 48 during one day (a Thursday) filled my table with 500,000 entries.

Now comes the interesting part. I could retrieve all the data in the table and operate on it in CoffeeScript/Python/Ruby, but that would be cheating. RethinkDB aims to be able to run long analytics queries, so I gave it a try.

The documents I have stored in my database have the following format:

{
    "stop_tag": 3512, // stop's id
    "next_bus_sec": 1148, // The next bus is in 1148 seconds
    "next_bus_min": 19, // The next bus is in 19 minutes
    "vehicle": 8430, // The id of the next vehicle coming in 19 minutes
    "time": 1356935060062, // The time when the prediction was made
    "id": "0003ac31-e903-4ec1-9184-3dfee272fd48" // id of the document
}

If I order my rows by time and map the value of next_bus_min, I get something like this:

[7, 6, 5, 4, 3, 2, 1, 0, 10, 9, 8, 8, 7, 6, 5, 5, 4, 3, 3, 2, 1, 0, ...]
 |                                                         |
 |                                                         The next bus is in 2 minutes
 The next bus is in 7 minutes

And here is what I want to retrieve for each prediction: the prediction minus how long the user actually waited.

How long the user waited:

[7, 6, 5, 4, 3, 2, 1, 0, 10, 9, 8, 8, 7, 6, 5, 5, 4, 3, 3, 2, 1, 0, ...]
 |                    |
 |                    Next bus in 0 minutes = the bus is here
 Next bus will be in 7 minutes

To build the query, I used the Data Explorer because it provides a nice interface to test queries. I don't have an extra table with the list of stops, so I first retrieved all the stops using pluck() and distinct():

r.db('muni').table('predictions').pluck('stop_tag').distinct()
Now, for each stop, I retrieve the list of predictions we have. This could be done with an inner join and a groupBy, but I just went for a simple map. One thing about ReQL is that the implicit variable (r.row) cannot be used in nested queries, so I have to use anonymous functions (also known as lambda functions in Python) for the next steps.

r.db('muni').table('predictions').pluck('stop_tag').distinct()
    .map( function(stop) { return {
        stop_tag: stop('stop_tag'),
        predictions: r.db('muni').table('predictions').filter( function(prediction) {
            return prediction('stop_tag').eq(stop('stop_tag'))
        })
    }})
Now that I have all the predictions for each stop, I would like to know how accurate each prediction was. I consider that the time the bus arrives is the next prediction that shows the next bus in 0 minutes. So I have to join the inner query with itself. To avoid computing this stream 1+n times (which can be expensive because I sort every time), I'm going to use r.let() and store it in memory.

The syntax for r.let() now is

r.let( { "key": <object Obj we want to store> }, <query that uses Obj>).run()

Note: The syntax for r.let() will change on the next release (1.4).

The previous query with r.let() looks like that:

.map( function(stop) {
return {
stop_tag: stop('stop_tag'),
predictions: r.db('muni').table('predictions')
.filter( function(prediction) {
return prediction('stop_tag').eq(stop('stop_tag'))

Now let's cross the stream with itself and get, for each prediction, all the times a bus arrived.

.map( function(stop) {
return {
stop_tag: stop('stop_tag'),
predictions: r.db('muni').table('predictions')
.filter( function(prediction) {
return prediction('stop_tag').eq(stop('stop_tag'))
r.letVar('predictions').map( function(prediction) {
return r.letVar('predictions').filter(function(next_prediction) {
return next_prediction('next_bus_min').eq(0)

Now that I have all the data I need, I just have to do some math to get the error. Among all the predictions retrieved in the inner query, we just need the first one (the next time the bus arrives). Using nth(0) could break if a bus never comes (calling .nth(0) on an empty array). One solution to make sure nth(0) does not break is to use r.branch (ReQL's "if"). I was lazy, so I just used a trick: adding an element that yields a zero error. Then we can safely compute the error.

.map( function(stop) {
return {
stop_tag: stop('stop_tag'),
predictions: r.db('muni').table('predictions')
.filter( function(prediction) {
return prediction('stop_tag').eq(stop('stop_tag'))
r.letVar('predictions').map( function(prediction) {
return r.letVar('predictions').filter(function(next_prediction) {
return next_prediction('next_bus_min').eq(0)
.union([{time: prediction('time').add(prediction('next_bus_min').mul(60*1000))}])

So the error now looks like this:

/* more... */

The following query computes the norm-1 error:

.map( function(stop) {
return {
stop_tag: stop('stop_tag'),
prediction_error: r.let(
predictions: r.db('muni').table('predictions')
.filter( function(prediction) {
return prediction('stop_tag').eq(stop('stop_tag'))
r.letVar('predictions').map( function(prediction) {
return r.letVar('predictions').filter(function(next_prediction) {
return next_prediction('next_bus_min').eq(0)
.union([{time: prediction('time').add(prediction('next_bus_min').mul(60*1000))}])
.map(function(error) { return r.branch(, error.mul(-1), error) })
.reduce(0, function(acc, val) { return acc.add(val)})

Since I am looking at the data in terms of user experience, I am also (and mostly) interested in the worst case. So here is how I got the maximum error:

.map( function(stop) {
return {
stop_tag: stop('stop_tag'),
prediction_error: r.let(
predictions: r.db('muni').table('predictions')
.filter( function(prediction) {
return prediction('stop_tag').eq(stop('stop_tag'))
r.letVar('predictions').map( function(prediction) {
return r.letVar('predictions').filter(function(next_prediction) {
return next_prediction('next_bus_min').eq(0)
.union([{time: prediction('time').add(prediction('next_bus_min').mul(60*1000))}])
.reduce(0, function(acc, val) {
return r.branch(, val, acc)

The results look like this:

[
    { "stop_tag": 3248, "prediction_error": 35.01116666666667 },
    { "stop_tag": 3249, "prediction_error": 6.006866666666667 },
    { "stop_tag": 3250, "prediction_error": 35.333483333333334 },
    { "stop_tag": 3251, "prediction_error": 35.33631666666666 },
    { "stop_tag": 3252, "prediction_error": 6.002116666666666 },
    { "stop_tag": 3304, "prediction_error": 5.670216666666667 },
    { "stop_tag": 3305, "prediction_error": 34.669200000000004 },
    { "stop_tag": 3306, "prediction_error": 6.00385 },
    { "stop_tag": 3411, "prediction_error": 87.99973333333334 },
    { "stop_tag": 3424, "prediction_error": 16.666383333333332 },
    { "stop_tag": 3432, "prediction_error": 87.0006 }
]

So Muni's predictions are quite awful. I tried checking only predictions where the bus was coming in 1 or 0 minutes, but that didn't really change the results. I have seen this happen in the evening, and from my personal experience it is because the system that makes the predictions does not know that a bus is going to the garage until it actually goes there.

Back to the technical part: it was pretty cool to use the Data Explorer to build an analytics query without having to write a script myself. The JavaScript driver doesn't always return user-friendly errors, which is not really cool, but that should be fixed in the next release with the new protobuf specs.