Hackathons and fixit weeks are band-aids for deeper problems posted on 04 July 2024

Hackathon and fixit weeks are fun in my opinion – they’re a time when you park your usual work on the side and build/fix things. Your week is lighter on meetings and you get to write more code (that’s what software engineers like to do, right?)

With that being said, the need for such weeks (especially fixit weeks) reflects a deeper problem – engineers don’t have the flexibility/time to hack/fix things during normal weeks. This is problematic because it’s more efficient to regularly address minor tech debt than to wait a long period and do a massive rewrite.

Leadership is also often too far from the code to grasp the right balance between tech debt and productivity in many low-level situations – the right people to make these trade-offs are the engineers on the ground, but if they are not given the opportunity/time to perform these tasks, essentially no tech debt gets addressed on a regular cadence.

Hackathons are slightly different though – while I think people should hack a prototype any time a project is impactful, hackathons also serve as social events. They are an opportunity to work with people you usually don’t interact with. So even if people are entitled to build a prototype at any time, there’s still value in having these social events. It’s a slippery slope though, where engineers might naturally wait for the hackathon rather than building their prototype when they need it. I personally build minor tools regularly to make my life easier and don’t wait for specific weeks – but I’m also given a freedom and responsibility to perform impactful work that many may not have.

What are your thoughts on hackathons/fixit weeks?

LinkedIn post

Learn for the long term posted on 03 July 2024

We live in a society today where every gain is expected to be short term – this is obvious with apps like TikTok, where you get instant gratification, but the same is also true for learning (including software engineering skills). We are drowning in bootcamps to learn X, videos to become an expert in Y, or articles teaching you everything about Z in a day.

The main issue with these intense classes/articles is that while you may learn something in the short term, very little will stick over time. Your brain needs time to process new information and deeply understand it – that’s also why learning by doing is more efficient than just reading articles: you naturally have more time to process information while performing actions.

For what it’s worth, this is pretty apparent in interviews in my experience. While I do not expect every candidate to be an expert in caching, networking or any other topic, it’s easy to separate those who just skimmed an article from those who actually understand these domains. For example, candidates who had to deal with large-scale systems backed by databases can talk clearly about when writes should be acknowledged, whether they need shards or replicas, or what happens when a replica goes down.
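
To make the write-acknowledgment point concrete, here’s a minimal sketch (function and parameter names are mine, not from any real database) of the quorum reasoning a candidate who has actually run such a system can walk through:

```python
# Minimal sketch of quorum-based write acknowledgment.
# All names are illustrative, not from any real database.

def ack_write(total_replicas: int, confirmed: int, write_quorum: int) -> bool:
    """Acknowledge a write once `write_quorum` replicas have confirmed it.

    With W + R > N (write quorum + read quorum > total replicas), any read
    quorum overlaps the latest acknowledged write, so a single replica
    going down doesn't lose acknowledged data.
    """
    assert 1 <= write_quorum <= total_replicas
    return confirmed >= write_quorum

# N = 3 replicas, majority quorum W = 2:
print(ack_write(3, 1, 2))  # False – acknowledging now could lose the write
print(ack_write(3, 2, 2))  # True – safe even if one replica goes down
```

This is exactly the kind of reasoning that only sticks once you’ve had to operate such a system.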

I personally had some issues recently with my remote dev machine (I’m dogfooding a new dev environment at Databricks) and had to recover some data from a borked machine. I haven’t tinkered with a Linux machine in quite a while (~10 years? I’m using a MacBook now, and at Google everything used to just work) but I could still remember how to use fdisk/mount/df/etc. to recover my data. This knowledge is something I learned while playing with Linux a long time ago that stayed around – if I had just read an article about how to mount a disk 10 years ago and never done it, I would not have been able to perform such operations today.

So rather than just reading many short articles on every possible domain, take the time to learn a few things well that will stay with you. For example codingchallenges.fyi (from John Crickett) is a pretty interesting place to learn for the long term – looking at the calculator question, you get to understand (and implement) parsing and interpreting expressions. If you actually implement it once, it will likely stick with you forever – I personally still remember very similar code patterns/implementations from 15 years ago.
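
To illustrate the kind of pattern that sticks, here’s a hedged sketch of the classic approach to that calculator exercise – a tiny recursive-descent parser/evaluator (my own toy version, not the codingchallenges.fyi reference solution):

```python
# Tiny recursive-descent calculator – the kind of exercise that sticks.
# Grammar: expr -> term (('+'|'-') term)*
#          term -> factor (('*'|'/') factor)*
#          factor -> number | '(' expr ')'
import re

def tokenize(src: str):
    # Numbers (with optional decimals), operators, and parentheses.
    return re.findall(r"\d+\.?\d*|[+\-*/()]", src)

def evaluate(src: str) -> float:
    tokens = tokenize(src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def factor() -> float:
        if peek() == "(":
            eat()                # consume '('
            value = expr()
            eat()                # consume ')'
            return value
        return float(eat())      # a plain number

    def term() -> float:
        value = factor()
        while peek() in ("*", "/"):
            # eat() runs first (the condition), then factor() (the operand)
            value = value * factor() if eat() == "*" else value / factor()
        return value

    def expr() -> float:
        value = term()
        while peek() in ("+", "-"):
            value = value + term() if eat() == "+" else value - term()
        return value

    return expr()

print(evaluate("1 + 2 * (3 - 1)"))  # 5.0
```

Implement something like this once, by hand, and operator precedence and grammar-driven parsing will stay with you in a way no article skim can match.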

LinkedIn post

Get your ergo setup posted on 02 July 2024

Every now and then I hear software engineers complain that they cannot expense their ergo setup (or only partially, or that it takes too long). My 2 cents: just get your ergo setup – don’t wait and don’t fret over reimbursement.

The truth is that as a software engineer, you likely make enough money to afford a $100 ergo keyboard without feeling any financial consequences. On the other hand, you have one body – don’t develop chronic wrist pain because of a bad keyboard, back pain because of a bad chair, or other chronic pains for a negligible sum of savings. I’m lucky not to have any chronic pain, but I’m well aware that with age (😮‍💨) I’m getting more sensitive to bad postures – my wrists start to hurt if I work a few hours on a bad keyboard.

The most common advice I get from older peers/friends is to treat your body with care. Even if you can still work for hours on a regular keyboard, you should probably consider an ergonomic one – it may require a bit of learning, but power through the change. It’s good for you in the long term.

There’s no point in growing as an engineer, making more money, or learning a lot of things if you end up in so much pain that you cannot work anymore. Even if it doesn’t go as far as preventing you from working, chronic pain will definitely make your work-life balance tougher.

LinkedIn post

You can scale a monolith posted on 01 July 2024

There is so much content on the internet claiming that monoliths are not scalable. These authors likely never had the opportunity to actually run a monolith server in a large-scale environment and are just repeating inaccurate takes on the topic. Let me set the record straight: you can scale a monolith way beyond what you will ever need.

To give you some color, YouTube used to be a monolith – and at that time it was likely handling more traffic than your service will ever need.

It would take too long to refute every claim against monoliths, but I thought I would pick and address a few:

  • Monoliths are slower because they lack parallelization – this is not accurate, simply because you can use multiple threads. Interestingly enough, if you don’t want to deal with threads, you can just send yourself an RPC (and do basic async IO).
  • Monoliths waste more resources than microservices because you have less granular control over your fleet’s resources. If your monolith has different resource needs, you can just create different pools of servers – e.g. while you have a single binary, you can have one pool taking all the RPCs that require a lot of memory and another pool for low-memory RPCs. Breaking your monolith into pools presents new challenges, but these already exist with microservices.
  • Monoliths have bad failure isolation. I think this is partially true, but only for some classes of issues like machine-wide problems (e.g. bad VM, OOM, etc.). For business logic errors (by far the most common ones), if your function fails, it doesn’t matter whether it fails in an RPC or in an inline call – you still have to handle the error.
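
On the first point – parallelism inside a single binary – here’s a hedged sketch (handler names are mine, purely illustrative) of a monolith fanning independent sub-tasks out to a thread pool, no separate services or network hops required:

```python
# Sketch: a monolith can parallelize work in-process with a thread pool.
# The handlers below stand in for I/O-bound work (DB reads, cache lookups).
from concurrent.futures import ThreadPoolExecutor

def fetch_profile(user_id: int) -> dict:
    return {"user_id": user_id, "name": f"user-{user_id}"}  # stand-in for I/O

def fetch_orders(user_id: int) -> list:
    return [f"order-{user_id}-1"]  # stand-in for I/O

def handle_request(user_id: int) -> dict:
    # Fan out the independent sub-tasks to threads, then join the results.
    with ThreadPoolExecutor(max_workers=2) as pool:
        profile = pool.submit(fetch_profile, user_id)
        orders = pool.submit(fetch_orders, user_id)
        return {"profile": profile.result(), "orders": orders.result()}

print(handle_request(42))
```

The same fan-out/fan-in shape you would get from calling two microservices, minus the serialization and network overhead.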

I’ve barely scratched the surface here, but scaling a system, whether it’s a monolith or microservices, takes skill, thought and trade-offs – thinking that scaling a system is a well-defined set of questions with clear-cut answers is the most obvious mistake you can make.

LinkedIn post