In this edition, we have pulled together a mix of things that actually made us pause, dig deeper, or laugh out loud. You'll find thoughts on where languages like Rust and Go are heading, a behind-the-scenes look at how Figma scaled their databases without crashing and burning, and a seriously useful VS Code extension for comparing LLMs. Oh, and for no good reason at all - except that it's awesome - we've got a terminal cow that talks back.
Let's jump in.
The programming landscape has always been dynamic, with languages evolving to meet the ever-changing demands of technology and industry. From enterprise systems to cloud-native applications and gaming, each programming language plays a unique role in shaping the technological world we live in. This blog takes an in-depth look at some of the most influential languages today - Java, C#, Rust, Go, and C/C++ - and explores their strengths, challenges, and the roles they are likely to play in the next 20 to 30 years.
Figma's user base has exploded 100x since 2020, and their databases started sweating - hard. Tables were hitting billions of rows, and their old scaling trick (just split data by feature into different DBs) was no longer cutting it.
So the team did what most devs dread: they redesigned the entire data layer to support horizontal sharding. And they pulled it off - with minimal downtime and zero fire drills.
They introduced "colos" - bundled tables that scale together in a clean, predictable way. To keep things flexible, they separated logical sharding from physical sharding, which let them move fast without blowing things up. And instead of rewriting half their codebase, they picked smart sharding keys like FileID to keep changes minimal and targeted.
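To make that concrete, here's a toy sketch of the logical-vs-physical split (not Figma's actual code - the hash, names, and shard counts are made up): hash the sharding key to one of a fixed set of logical shards, then map logical shards onto whatever physical databases exist today.

import hashlib

LOGICAL_SHARDS = 64                                    # fixed, generous logical shard count
PHYSICAL_DBS = {0: "pg-files-01", 1: "pg-files-02"}    # grows as you add hardware

def logical_shard(file_id: str) -> int:
    # Stable hash of the sharding key (FileID) -> logical shard number.
    return int(hashlib.md5(file_id.encode()).hexdigest(), 16) % LOGICAL_SHARDS

def physical_db(file_id: str) -> str:
    # Today: logical shards split evenly across two physical databases.
    # Re-sharding later means changing this mapping, not the callers.
    return PHYSICAL_DBS[logical_shard(file_id) % len(PHYSICAL_DBS)]

print(physical_db("file_12345"))

The indirection is the point: application code only ever talks in logical shards, so moving data onto new physical databases doesn't ripple through the codebase.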
It took 9 months, but now Figma is ready to scale for the next 10x - without duct tape.
If you've ever wondered "which model answers this prompt best?", Prompt Octopus is your answer.
It's a VS Code extension that lets you select any prompt from your code, choose from 40+ LLMs (OpenAI, Anthropic, Mistral, Grok, etc.), and view their responses side by side - all from your editor. You can keep your API keys local for privacy, or use their hosted plan if that's easier. Your first 10 comparisons are totally free - no API keys required. If you end up loving it, the full hosted access is just $10 a month.
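For a sense of what that replaces, here's roughly the by-hand version using two vendor SDKs - a rough sketch with placeholder model names, which you'd have to repeat for every model you want to test:

from openai import OpenAI
import anthropic

prompt = "Explain connection pooling in one paragraph."

# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment;
# the model names below are placeholders.
openai_reply = OpenAI().chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

claude_reply = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=512,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

print("--- OpenAI ---\n" + openai_reply)
print("--- Anthropic ---\n" + claude_reply)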
Whether you're doing prompt engineering, model eval, or just trying to get better completions - this saves time and guesswork.
👉 Read the full post

You deploy a web app. Traffic rolls in. But under the hood, how do Apache, Nginx, MySQL, and PostgreSQL actually handle that load?
Apache is the old workhorse. Depending on its config, it can either use thread-per-request (with the worker or event MPMs) or go fully old-school with process-per-request (with the prefork MPM). In the latter, every single request spins up its own process.
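In miniature, thread-per-request looks like the sketch below (plain Python, not Apache itself) - each incoming request is handled on its own thread; swap the thread for a forked process and you've got the prefork idea:

from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each request runs in its own thread, spawned by the server.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"handled on a dedicated thread\n")

ThreadingHTTPServer(("127.0.0.1", 8080), Hello).serve_forever()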
Nginx does things differently. It's event-driven and asynchronous. It doesn't spawn threads or processes per request. Instead, a few worker processes handle thousands of concurrent requests using non-blocking I/O. It's lean, fast, and doesn't break a sweat under high concurrency - exactly why it's become the default for modern web serving.
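The event-driven shape, again as a rough Python sketch rather than anything Nginx-specific: one process, one loop, many connections multiplexed over non-blocking I/O.

import asyncio

async def handle(reader, writer):
    await reader.readline()    # non-blocking: the loop serves other sockets meanwhile
    writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    await writer.drain()
    writer.close()

async def main():
    # A single process juggling thousands of concurrent connections.
    server = await asyncio.start_server(handle, "127.0.0.1", 8081)
    async with server:
        await server.serve_forever()

asyncio.run(main())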
Postgres uses a process-per-connection model. Each incoming DB connection gets its own OS process. It's safe and simple - but expensive at scale. Without a connection pooler like PgBouncer in front, things get heavy fast.
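The usual fix is to open a handful of connections up front and share them. A minimal sketch with psycopg2's built-in pool (PgBouncer does the same job as a standalone proxy; the DSN here is a placeholder):

from psycopg2.pool import SimpleConnectionPool

pool = SimpleConnectionPool(minconn=2, maxconn=10, dsn="dbname=app user=app")

conn = pool.getconn()          # borrow an already-open connection...
try:
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
finally:
    pool.putconn(conn)         # ...and return it instead of closing it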
MySQL is a bit more flexible. It uses a thread-per-connection model. Every client connection gets its own thread - lighter than Postgres's processes, but with slightly higher risk of shared-state issues.
These choices affect scalability, fault isolation, and performance tuning. Throw enough traffic at your app and you'll feel the difference - some stacks crumble, others cruise. Understanding these models helps you scale smarter and debug faster.
And before we wrap, something on the lighter side: cowsay. Completely unnecessary - but weirdly satisfying. This little Linux terminal tool takes any prompt you give it and has a talking ASCII cow repeat it back to you. Want a productivity boost? Not really. Want to make your terminal demo unexpectedly fun? Absolutely 😄
It even supports alternate characters like dragons, tux, and ghostbusters - because why not have a fire-breathing ASCII creature deliver your git blame message?
Try it yourself:
cowsay "My code works. I have no idea why."
and watch the cow talk back:
 ____________________________________
< My code works. I have no idea why. >
 ------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
These were the original LLMs. Invented back in the '90s. Prompt it, and it talks back 😄
That's it for now - from serious infra to silly cows, it's all part of the stack. See you in the terminal.
Liked what you read? Sign up for our newsletter below 👇