Over the years I have observed that many engineers tend to attribute much of the success or failure of a company to the technical choices it made. I know I’m often guilty of this too. And while it is often justified, I would argue that for the vast majority of startups out there, the choice of programming language, framework, or even database doesn’t matter that much, especially in the early stages.
Through the lens
This perception is understandable: as engineers, we tend to look at the world through a technical lens, and we're often biased by what we know best. Our daily activities may include things like debugging CI pipelines, implementing new features, pairing with colleagues, or migrating the ever-present legacy codebase. The environment that surrounds us makes it easy to believe it all boils down to the things we see and understand. It's an illusion that makes us feel fully in control of what makes or breaks the product.
Don’t get me wrong, it can be a huge advantage for many companies to make their product 3x more efficient than competitors, or to have elegant, easy-to-extend code. But you might be focusing on the wrong problems if nobody cares about the product you’re actually building, and sooner or later your business will hit this wall.
I’m not saying that tech doesn’t matter. A solid foundation for your startup goes a long way. If investing in tech allows you to build better features faster than your competitors, more power to you. But finding the right balance is highly dependent on what you’re trying to solve and the resources you have at hand. There’s no right or wrong way to do it, and as usual, it mainly comes down to tradeoffs.
Boring is fun
When it comes to technical choices, I believe a healthy balance of risk and reward is something to strive for, particularly if it decreases the chances you get stuck on the wrong problems down the road.
This is why I have come to appreciate ideas such as Choose Boring Technology. This is often interpreted as “picking old technologies over newer ones”, but it doesn’t need to be. For me, this comes down to sticking to what I already know and trust, but allowing myself to experiment with newer tools if I might benefit from them.
Maybe you want to gain more experience by using the latest framework or programming language, or you just want to have some fun. You do what makes you happy. But if you’re trying to make a decision to increase the chances that your product or business will succeed, it’s worth stepping back and considering your options.
For me, mostly choosing software that has been around longer isn't about it being boring or old; it's that the ways in which it fails are better known. There are fewer unknowns to deal with, and that maximizes your chances of shipping your project.
Take the other day, for example: I had an issue with my Django app, and a quick search led me to hundreds of answers across various forums and websites. It took me at most 10 minutes to get back on track, and that was the end of it. I experienced the exact opposite a few years ago with a popular, but not so battle-tested, Scala library my team had been using for a while. We were probably among the first to encounter the issues we were facing; nobody had walked down this path before. Maybe that sounds like a fun challenge or a great chance to contribute back to OSS (which I'm happy to do), but once you solve it, do your customers really care? How many days, weeks, or even months are you willing to invest in such issues? In my case, I'd rather use that time to ship new features or improve existing ones.
Proven tech vs new tools
I tend to follow an 80/20 distribution when choosing tools. That means my stack consists of about 80% tools I already know well, while I allow myself the remaining 20% of my capacity to explore exciting new tech. The exact ratio doesn't matter much, so don't get hung up on it; it's just easy to remember and leans towards proven technologies. It's also, in a way, how multi-armed bandits work: you maximize your expected gain by occasionally exploring new options while exploiting the ones that have worked well in the past.
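The bandit analogy maps neatly onto an epsilon-greedy strategy. Here's a toy sketch of that idea; the "arms", reward numbers, and tool names are entirely made up for illustration:

```python
import random

def epsilon_greedy(rewards_history, arms, epsilon=0.2):
    """Pick an arm: explore a random one with probability epsilon,
    otherwise exploit the one with the best average reward so far."""
    if random.random() < epsilon:
        return random.choice(arms)  # explore: try something new
    # exploit: highest mean reward observed so far
    return max(arms, key=lambda a: sum(rewards_history[a]) / len(rewards_history[a]))

# Hypothetical "arms": tools you could build the next feature with.
arms = ["django", "new-shiny-framework"]
history = {"django": [0.9, 0.8, 0.95], "new-shiny-framework": [0.3]}

random.seed(42)
picks = [epsilon_greedy(history, arms) for _ in range(100)]
```

With epsilon at 0.2, roughly 80% of decisions go to the proven option, which is exactly the spirit of the 80/20 rule above.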
For example, Panelbear started as an embarrassingly simple Django app with no charts or client-side code: everything was rendered in a plain HTML table, and all analytics ran against a SQLite database. It took literally a weekend to get up and running, including manually deploying it to a $5/mo VM. Low risk and high reward for my needs at the time. I didn't need to learn any new tech to try this out, and the effort consisted mostly of writing the actual code that stores and queries the analytics data, instead of trying to bend the latest framework to my needs.
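To give a flavor of how simple that kind of setup can be, here's a minimal sketch of the approach: an aggregation query over SQLite rendered straight into an HTML table. The schema and data are made up; this is not Panelbear's actual code.

```python
import sqlite3

# In-memory database standing in for the app's single SQLite file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pageviews (site TEXT, path TEXT, ts TEXT)")
conn.executemany(
    "INSERT INTO pageviews VALUES (?, ?, ?)",
    [("example.com", "/", "2020-05-01"),
     ("example.com", "/pricing", "2020-05-01"),
     ("example.com", "/", "2020-05-02")],
)

# The "analytics": a plain GROUP BY, no charts, no client-side code.
rows = conn.execute(
    "SELECT path, COUNT(*) AS views FROM pageviews "
    "GROUP BY path ORDER BY views DESC"
).fetchall()

# Render the results as a bare HTML table.
html = "<table>" + "".join(
    f"<tr><td>{path}</td><td>{views}</td></tr>" for path, views in rows
) + "</table>"
print(html)
```

Not glamorous, but it answers the only question that matters at that stage: which pages are people actually viewing?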
Fast forward: as I added more features and began handling more page views for various websites, I started to notice the codebase could use some refactoring. It also became increasingly repetitive to do things like deploying to new instances, issuing SSL certs, and keeping DNS records up to date in case the IP address of my instances changed. As a second iteration I switched to a docker-compose setup plus lots of glue code, but soon enough I found myself reinventing pretty much what Kubernetes already does. Yes, there are multiple ways to solve each of these issues, but for me, upgrading to a Kubernetes-based stack made things simpler, not more complex. It also made it trivial to move from DigitalOcean to Linode, and most recently to AWS (each migration took an evening of mostly changing my Terraform files and hitting deploy – yes, I'm being serious). But that's for another post.
Yes, I know Kubernetes might be absolute overkill for a lot of projects, especially if you're new to it. But it allowed me to simplify the operational aspects tremendously. That said, I was already pretty comfortable and productive with this stack, so I wouldn't blindly recommend it to everyone. Do what you know best.
For example, when I wanted to experiment with using Clickhouse for the ingestion and aggregation queries, it took me less than 10 minutes to write a basic deployment manifest and have it up and running. This includes automated SSL certs, in-cluster service discovery, and logging/monitoring out of the box. It was a huge win since it allowed me to try things out faster than before.
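For readers less familiar with Kubernetes, "a basic deployment manifest" means something roughly like the following. This is only a sketch of the shape such a manifest takes; the names, image tag, and storage wiring here are illustrative, not Panelbear's actual config:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clickhouse
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clickhouse
  template:
    metadata:
      labels:
        app: clickhouse
    spec:
      containers:
        - name: clickhouse
          image: clickhouse/clickhouse-server:23.8  # illustrative tag
          ports:
            - containerPort: 8123  # ClickHouse HTTP interface
          volumeMounts:
            - name: data
              mountPath: /var/lib/clickhouse
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: clickhouse-data
```

Once the cluster already handles certs, service discovery, and monitoring, a manifest like this is most of the work involved in standing up a new stateful service.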
Even better, I can deploy and operate it the exact same way as anything else on my cluster. Need more volume storage with zero downtime? It's a simple manifest change: git commit and deploy. The same went for Redis when I needed caching: I was up and running in minutes, without increasing my costs or adding operational complexity. But enough about Kubernetes, I'll leave those details for another blog post.
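That storage change can be as small as bumping the requested size on a PersistentVolumeClaim, provided the StorageClass has volume expansion enabled. A hypothetical example (names and sizes made up):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clickhouse-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50Gi  # bumped from 20Gi; requires allowVolumeExpansion on the StorageClass
```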
Focus on shipping
My point is, I moved to these technologies only when the pain of the previous solution outweighed the cost of adopting the new tech. I was also already very familiar with all of these tools, as I'd been using them every day at my full-time job. But more importantly, they helped me ship features to my customers even faster while reducing my operational overhead.
If I had started with the more advanced setup from day one, I might have lost all motivation before ever shipping the first MVP of Panelbear. I could also have ended up focusing on the wrong problems, since I would only have had a wild guess at what the future pain points might be. The key is to increase complexity as needed for your specific problems, not for imaginary or future ones.
Hope you enjoyed this blog post. I will be writing more about Panelbear’s tech stack, and lessons learned along the way. So stay tuned!