15 Oct 2017 • Optimising vs expanding to fill all available resources

Parallelising code does not make it faster.

You actually run slightly slower, because you have to deal with the overhead of dispatch and context switches and expensive futex calls. But we do it anyway because it makes code run in less time. So you trade CPU time for wall clock time. Or throughput for latency.

In games, people use thread pools to go wide when they have lots of the same work that must be done to get a frame out. Things like culling, broad-phase collision detection, skinning, etc.

It's not immediately obvious that high framerate corresponds to low latency rather than high throughput. But if you think of a frame as taking inputs at the start, like the state of the world from last frame and player/network inputs, and then producing the next frame as output, it kind of makes sense. You're reducing the latency between receiving the inputs and spewing the output.

It's also really surprising how little you benefit from using multiple threads. A typical desktop PC has 2 or 4 cores, and the Steam hardware survey says that's 95% of the market (the gamer market, even!). So at best you're looking at less than a 4x speedup.

That's a bad habit I need to break. When something needs optimising, one of the first things that comes to mind is "put it on the thread pool". On one hand it's easy (ADDENDUM: not gonna edit this out but of course threading is not easy), on the other it's junk speedup and other optimisation methods are not a huge amount harder. Parallelising my code should probably be the last optimisation I make!

Anyway, I was thinking about this because of all the Firefox talk about having one thread per tab and GPU text rendering and GPU compositing and so on. Ok, Firefox runs in less wall clock time because it has 4x more resources, but now my whole PC runs like trash. The reason multi-core CPUs were such a huge upgrade when they first came out was that shit apps didn't make your PC unusable anymore! But now that the shit apps are becoming parallelised, we're going back to the bad old times.

The GPU stuff isn't in yet but I'm looking forward to the "we made our code 10x slower but put it on hardware that's 100x faster!!" post, swiftly followed by having to close my web browser whenever I want to play games.