Proxy pass with nginx redirects to localhost posted on 08 July 2017

I have multiple nginx instances running on the same server. The master instance routes traffic according to the request, while the slave instances each serve a specific website. The main reason for this weird architecture is that I’m running multiple services, each in its own Docker container, and I want to be able to update a specific service without disturbing the others.

The master nginx configuration file was:

server {
  listen 80;
  listen [::]:80;
  server_name example.com *.example.com;
  location / {
    proxy_pass http://127.0.0.1:4001;
  }
}

While the slave’s configuration was:

server {
  listen 80 default_server;
  listen [::]:80 default_server ipv6only=on;

  root /usr/share/nginx/example;
  index index.html index.htm;

  server_name servername;

  location / {
    try_files $uri $uri/ =404;
  }
}

One issue I had was that https://example.com/foo was redirecting to https://127.0.0.1/foo/ instead of https://example.com/foo/.

The slave is the one issuing that redirect: when a request for /foo maps to a directory, nginx replies with a 301 adding the trailing slash, and it builds the Location header from the Host header it received - which here was the 127.0.0.1 upstream address. The solution is simply to pass the original Host header to the nginx slave with proxy_set_header, so that the redirect resolves to https://example.com/foo/ instead of https://127.0.0.1/foo/. So the proper master configuration is:

server {
  listen 80;
  listen [::]:80;
  server_name example.com *.example.com;
  location / {
    proxy_pass http://127.0.0.1:4001;
    proxy_set_header Host $host;
  }
}
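
A quick way to verify the fix is to look at the Location header of the redirect from the server itself (a hypothetical check, assuming the setup above):

curl -sI -H 'Host: example.com' http://127.0.0.1/foo | grep Location

Before the fix the Location header points at 127.0.0.1; with the Host header forwarded it points at example.com.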


Building Magenta on Archlinux posted on 08 July 2017

I have been a bit curious about Fuchsia/Magenta. From the README:

Magenta is the core platform that powers the Fuchsia OS. Magenta is composed of a microkernel (source in kernel/…) as well as a small set of userspace services, drivers, and libraries (source in system/…) necessary for the system to boot, talk to hardware, load userspace processes and run them, etc. Fuchsia builds a much larger OS on top of this foundation.

I decided to poke around a bit. Here are some quick notes on how to build it on Arch. These instructions assume you have the usual tools like base-devel.

The canonical git repository is at https://fuchsia.googlesource.com/magenta. There is also a mirror (read only) on GitHub https://github.com/fuchsia-mirror/magenta.
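
The scripts below are run from the repository root, so clone it first:

git clone https://fuchsia.googlesource.com/magenta
cd magenta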

Download the toolchain:

./scripts/download-toolchain

You can install libtinfo from the AUR:

yaourt -S libtinfo

Alternatively, Arch Linux already comes with /usr/lib/libtinfo.so.6 (not /usr/lib/libtinfo.so.5), and since I didn’t feel like installing another version of libtinfo, I just created a symlink to the newer version. So far, everything seems to work fine.

sudo ln -s /usr/lib/libncursesw.so.6.0 /usr/lib/libtinfo.so.5

Build it:

make -j4 magenta-pc-x86-64

Install QEMU, unless you plan to test it only on real hardware:

sudo pacman -S qemu

Run it:

./scripts/run-magenta-x86-64

Eventually looking at the new ES6 features posted on 07 July 2017

I have written some JavaScript code in the past and still do, but I still haven’t really used any ES6 features, for a few different reasons:

  • I feel like most people jumped to ES6 because it was the new shiny thing - but without understanding the differences. This is very similar to people using MongoDB because it’s web scale, or React because the DOM is slow. For example, as a maintainer of thinky, I had to deal with multiple reports of import not working while require was working fine.
  • While ES6 features could have been nice when using Node.js, you could not expect enough browsers to support all of them.
  • Most of the new features people were talking about were just sugar to make JavaScript more Java- or CoffeeScript-like, and I personally find JavaScript’s paradigm pleasant to use (though I would probably not use it to build a company).

Anyway, as time passed, I recently looked a bit at all the new features, and here’s a quick post about what I found interesting/sneaky. I’ll skip generators, typed arrays and a few other features that I happened to have already used in the past.

  • Constants are “easier” to create (not sure if anyone was going through the pain of doing so with ES5).
const PI = 3.141593 
PI = 3 // Throws a TypeError.
  • Block scoped variables are nice since they reduce the scope of variables - and make the code easier to read.
for (let i=0; i<10; i++) {
  console.log(i); // 0, 1, ...
}
console.log(i); // ReferenceError: i is not defined.
  • Block-scoped functions are a bit sneaky in my opinion. In strict mode (which is how ES6 specifies them), they are scoped like let, considering that:
{
  function foo () { return 1 }
  console.log(foo()) // Prints 1.
  {
    function foo () { return 2 }
    console.log(foo()) // Prints 2.
  }
  console.log(foo()) // Prints 1.
}
function foo () { return 3 }
console.log(foo()) // Prints 3.
{
  function foo () { return 4 }
  console.log(foo()) // Prints 4.
}
console.log(foo()) // Prints 3 - the foo from the block above does not leak out.

In sloppy (non-strict) mode, legacy web-compatibility semantics also copy a block’s function declaration into the enclosing scope, so the same snippet can print different values. Sneaky.
  • Fat arrows are a nice touch to make JavaScript more similar to other languages, and they probably make it more accessible. That being said, they don’t reduce the total complexity around this - they just add one more case to think about when evaluating this.
let a = [10];
this.b = [];
console.log(this); // { b: [] } - in a Node.js CommonJS module, top-level this is module.exports.

a.forEach((x) => {
  console.log(this); // { b: [] }
})
a.forEach(function(x) {
  console.log(this); // this === global, or undefined if you use strict mode.
})
  • Default parameter values are a nice Python-ish touch in my opinion.
function sum(x, y=4, z=5) {
  return x+y+z;
}
console.log(sum(0, 0, 0)); // 0
console.log(sum(3, 0)); // 8
console.log(sum(0)); // 9
  • Rest parameters: what used to require massaging the arguments object is now easy to grab - though, while nice, I’m not convinced this is a very useful feature.
function sum(x, y, ...rest) {
  return x+y+rest.length;
}
console.log(sum(1, 2)); // 3
console.log(sum(1, 2, 3)); // 4
console.log(sum(1, 2, 3, 3)); // 5
  • You can also directly expand an array into another with the ... (spread) operator, though concat is cleaner in my opinion.
let a = [10, 11, 12];
let b = [20, 21, ...a];
console.log(b); // [20, 21, 10, 11, 12]
  • String interpolation is a nice touch, though like many ES6 features, JavaScript engines are not quite as performant with it as with dumb concatenation using +.
let data = { name: 'Michel' };
console.log(`Hello ${data.name}!`); // Hello Michel!
  • You can also do custom interpolation with tagged templates, which I am not quite convinced about - it doesn’t strike me as a very readable syntax. Note that the tag (print below) is not a built-in; it is a regular function you define yourself:
function print(strings, ...values) {
  // The tag receives the literal chunks and the interpolated values separately.
  console.log(strings[0] + values[0] + strings[1]);
}
let data = { name: 'Michel' };
print`Hello ${data.name}!`; // Hello Michel!
  • Sticky regexes - these are definitely nice and should prevent some manual string slicing.
let str = 'foo1foo2';
let pattern = /foo(\d+)/y; // Note the /y at the end.
let result = pattern.exec(str);
console.log(result[0]); // foo1
console.log(pattern.lastIndex); // 4

result = pattern.exec(str);
console.log(result[0]); // foo2
console.log(pattern.lastIndex); // 8
  • Property shorthand for object declarations seems fairly useful and readable in my opinion.
let x = 1;
let y = 2;
let a = {x, y}
console.log(a); // { x: 1, y: 2}
  • You can directly declare a function as an object property. This seems to just add more confusion to a language that already has a lot of pitfalls.
let a = {
  x() { return 2 }
}
console.log(a.x()); // 2
  • Dynamically computed property names are the only reasonable change I could find in how we declare object properties now.
let a = {
  x: 1,
  ['foo' + 'bar']: 2
};
console.log(a); // { x: 1, foobar: 2}
  • You can directly assign portions of an array to variables (similar to Python):
let list = [1, 2, 3, 4, 5];
let [a, , b, ...c] = list;
console.log(a); // 1
console.log(b); // 3
console.log(c); // [4, 5]
  • Same with objects - you can also do deep/partial matching (sketched just after this example), which is a horrible syntax in my opinion.
let a = {foo: 1, bar: 2}
let {foo, bar, buzz} = a;
console.log(foo); // 1
console.log(bar); // 2
console.log(buzz); // undefined
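For reference, a minimal sketch of the deep matching mentioned above (the object shape is made up for illustration):
let o = { foo: { bar: 1 }, buzz: 2 };
let { foo: { bar } } = o; // Reaches into the nested object; only bar is bound.
console.log(bar); // 1
console.log(typeof foo); // undefined - foo was only a pattern here, not a binding.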
  • A new way to define “classes” and extend them, which makes JavaScript more similar to other OOP languages like Java.
class Foo {
  constructor(id) {
    this.id = id
  }
  print() {
    return `This is ${this.id}`
  }
  static default() {
    return new Foo(0);
  }
}

class Bar extends Foo {
  constructor(id, bar_value) {
    super(id);
    this.bar = bar_value;
  }
}
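
To make the inheritance concrete (my own usage example):

let b = new Bar(1, 'x');
console.log(b.print()); // This is 1 - print() is inherited from Foo.
console.log(Foo.default().id); // 0 - static methods are called on the class itself.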

While talking about classes, you also get getters/setters, which in my opinion make things more ambiguous: a plain-looking property access can now run arbitrary code.
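
A minimal sketch (the Temperature class is made up for illustration):

class Temperature {
  constructor(celsius) {
    this.celsius = celsius;
  }
  // Looks like a plain property from the outside, but runs code.
  get fahrenheit() {
    return this.celsius * 1.8 + 32;
  }
  set fahrenheit(value) {
    this.celsius = (value - 32) / 1.8;
  }
}

let t = new Temperature(100);
console.log(t.fahrenheit); // 212
t.fahrenheit = 32;
console.log(t.celsius); // 0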

  • Map and Set, which allow you to clearly define your data structure instead of using a plain object as a dictionary (a Map sketch follows the Set example below).
let s = new Set();
s.add('foo').add('bar').add('foo');
console.log(s.size); // 2
console.log(s.has('foo')); // true
for (let key of s.values()) {
  console.log(key); // foo, bar
}
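Map works along the same lines; a minimal sketch:
let m = new Map();
m.set('foo', 1).set('bar', 2).set('foo', 3); // set() returns the map, so calls chain; 'foo' is overwritten.
console.log(m.size); // 2
console.log(m.get('foo')); // 3
for (let [key, value] of m) {
  console.log(key, value); // foo 3, bar 2
}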
  • There is one last interesting feature around internationalization: Intl.Collator, which sorts Unicode strings differently according to the locale. While this is quite interesting, I can imagine a plethora of issues coming up from having a different ordering depending on the user’s locale.
var list = [ "ä", "a", "z" ]
var l10nFR = new Intl.Collator("fr")
var l10nSV = new Intl.Collator("sv")
console.log(list.sort(l10nFR.compare)) // [ "a", "ä", "z" ]
console.log(list.sort(l10nSV.compare)) // [ "a", "z", "ä" ]

I think this is it for the new ES6 features that I had never really looked into. After quite some time, I eventually got to poke around ES6 a bit more.

Linear regression in a >1D vector space posted on 03 July 2017

This is a simple example of running a linear regression on a vector space with more than one dimension.

The main takeaway on my end is that the * operator is overloaded with the element-wise product between tensors, not matrix multiplication (that is tf.matmul) - if you think about how training is done with batches, it totally makes sense though. Hopefully, if you studied a lot of linear algebra but not so much about tensors, this will save you from wasting time trying to figure out why your regression converges to such weird values.
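
A quick illustration of the difference (a minimal standalone sketch):

import tensorflow as tf

a = tf.constant([[1., 2.], [3., 4.]])
b = tf.constant([[1., 0.], [0., 1.]])

sess = tf.Session()
print(sess.run(a * b))           # element-wise product: [[1. 0.] [0. 4.]]
print(sess.run(tf.matmul(a, b))) # matrix product: [[1. 2.] [3. 4.]]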

import tensorflow as tf
import random

def generate_test_case(sess, input_size, output_size):
  new_input = []
  for j in range(input_size):
    new_input.append(random.randrange(-50, 50))

  # Define the model we want the regression to discover.
  # This assumes input_size=2 and output_size=3.
  W = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
  b = tf.constant([10, 13, 22], shape=[1, 3])

  x = tf.constant(new_input, shape=[1, 2])
  test = tf.matmul(x, W) + b
  new_output = sess.run(test)[0]

  return new_input, new_output

def main(input_size, output_size):
  # Generate the linear model we want to build.
  x = tf.placeholder(tf.float32, [None, input_size], name='x')
  W = tf.Variable(tf.zeros([input_size, output_size]))
  b = tf.Variable(tf.zeros([output_size]))
  model = tf.matmul(x, W) + b

  # Build the loss function.
  sess = tf.Session()
  y = tf.placeholder(tf.float32, [None, output_size], name='y')
  squared_deltas = tf.square(model - y)
  loss = tf.reduce_mean(squared_deltas)

  # Define how we'll optimize W.
  init = tf.global_variables_initializer()
  optimizer = tf.train.GradientDescentOptimizer(0.001)
  train = optimizer.minimize(loss)
  sess.run(init)

  num_training_cases = 100 # Number of training cases
  train_input = []
  train_output = []
  for i in range(num_training_cases):
    new_input, new_output = generate_test_case(sess, input_size, output_size)
    train_input.append(new_input)
    train_output.append(new_output)

  print('Initial loss %d' % sess.run(loss, feed_dict={x:train_input, y:train_output}))
  print()

  for i in range(10000):
    sess.run(train, {x:train_input, y:train_output})

  print('---------------------')
  print('W\n %s' % sess.run(W))
  print('b\n %s'% sess.run(b))
  print('Loss %d' % sess.run(loss, feed_dict={x:train_input, y:train_output}))
  print()

  print('---------------------')
  print('Evaluating with a single case')
  eval_input, eval_output = generate_test_case(sess, input_size, output_size)
  print(eval_input)
  print(eval_output)
  print(sess.run(model, {x:[eval_input]}))
  print('Loss %d' % sess.run(loss, feed_dict={x:[eval_input], y:[eval_output]}))

if __name__ == "__main__":
  main(2, 3)