12 Comments
Feb 21 · Liked by Jos Visser

> Question from a European, WTF is actually "balancing a checkbook"? I've heard the phrase used hundreds of times, but I got rid of checks in the early 1990s and hadn't seen one until I moved to the US, and even here I write at most one a quarter or so because, in essence, the US banking system is stuck in the eighties, which was a great decade to remember for its music and hairdos, but not for its consumer banking.

I hate "balancing the checkbook". For a while I had to write a pile of checks to contractors. I realized that if I open a second checking account and only write checks out of it, I can move money into that account every time I write a check; then, when the balance on that account is zero, all the outstanding checks have been paid. Works pretty well, but it doesn't solve the "why is there still $X sitting in this account a month later" problem. It does seem to always get to zero, though, because folks eventually cash the check and then I remember.
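The "second checking account" trick is really an invariant: because money enters the account only when a check is written and leaves only when one clears, the balance always equals the total of uncashed checks, so a zero balance means everything has cleared. A minimal sketch of that bookkeeping (class and method names are illustrative, not anything from the comment):

```python
class CheckAccount:
    """Dedicated checking account funded exactly when a check is written.

    Invariant: balance == sum of all outstanding (uncashed) checks.
    """

    def __init__(self):
        self.balance = 0.0
        self.outstanding = {}  # check number -> amount

    def write_check(self, number, amount):
        # Move money into the account at the moment the check is written.
        self.balance += amount
        self.outstanding[number] = amount

    def check_cashed(self, number):
        # The bank clears the check; the matching funds leave the account.
        self.balance -= self.outstanding.pop(number)

    def all_cleared(self):
        # Zero balance (and no tracked checks) means nothing is outstanding.
        return self.balance == 0 and not self.outstanding
```

When `all_cleared()` is true, every check has been paid; a lingering balance points directly at the "why is there still $X here" checks still sitting in `outstanding`.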

Feb 21 · Liked by Jos Visser

> Well, I have seen this tried a dozen times before and I have never seen it work, but maybe this time we'll get it right

The height of disagree and commit! 😂

Feb 21 · Liked by Jos Visser

> Must be this tall to write multi-threaded code.

https://bholley.net/blog/2015/must-be-this-tall-to-write-multi-threaded-code.html


Great article! Thank you, Devil! I mean, “Jos”.

Joking aside, this is really interesting. I agree that tools may have changed, but working in groups of people has not (though, we are better at remote, thanks to the tools). I also agree about math and concurrency.

One thing I wonder about is the level of abstraction. The book “Range” talks about the fact that IQ scores keep going up despite normalization, and cites an increased ability to reason abstractly. This is a very slow effect, but maybe it is making things better over time too. So maybe new people could acquire experience from older devils faster as time progresses? Maybe older devils should help facilitate that?


NUMA was the original preferred multi-processor architecture because it matched the hardware well, but it fell by the wayside for a couple of reasons:

1. It didn't fit the dominant threading model.

2. It made buying memory expensive, because you needed to buy memory modules per core.

The "inefficient large pool of shared memory" effect arose precisely because the dominant paradigm for parallel processing was threading. Well, that's not 100% true; there is also a class of problems, usually assigned to supercomputers, where it is difficult to partition the data (usually huge matrices) per core. But those problems are pretty rare and specialized; map-reduce shows that the vast majority of huge problems can indeed be solved by partitioning the data.
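The partitioning point can be sketched with a toy map-reduce word count: each worker process counts words in its own partition with no shared memory at all, and the per-partition counts are merged afterward. A minimal sketch using Python's standard library (function names here are illustrative):

```python
from collections import Counter
from multiprocessing import Pool


def map_partition(lines):
    """Map step: count words within one partition, touching no shared state."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts


def word_count(lines, workers=2):
    """Partition the input, map each chunk in its own process, then reduce."""
    chunks = [lines[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(map_partition, chunks)
    total = Counter()
    for partial in partials:
        total.update(partial)  # reduce step: merge per-partition counts
    return total


if __name__ == "__main__":
    print(word_count(["a b a", "b c"]))
```

Each worker only ever sees its own chunk, which is exactly why this style scales out to machines that share nothing at all, let alone memory.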

On the small scale, the way CPUs have evolved, with huge multi-layer caches, means we are effectively using NUMA already: per-core cache is the new RAM, RAM (and shared cache) is the new disk, and thrashing cache lines is the new accessing-remote-RAM.

And of course on a larger scale, your whole datacenter is a NUMA architecture.
