I can now tentatively say that there will be speed-ups - and possibly quite significant ones - in #faircamp 2.0.
For one, if you're using the graphical interface, faircamp will¹ "pre-preprocess" (= analyze audio and generate waveforms for) your entire catalog in the background while you're still exploring the interface and fiddling with texts and settings. I'm also planning to expose this as a `faircamp --warmup` (or so) option in the CLI, so that you can let this pre-preprocessing run while you're still editing the manifests (way before building). In both cases, by the time you finally request a build, a considerable portion of the required processing might already be done and waiting in the cache.
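To make the idea concrete, here's a minimal sketch (not faircamp's actual code, and all names like `analyze` are hypothetical) of a background warmup thread filling a shared cache while the main thread stays free for the user:

```rust
// Hypothetical sketch of background catalog warmup:
// a worker thread analyzes tracks into a shared cache.
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for audio analysis / waveform peak generation.
fn analyze(track: &str) -> Vec<f32> {
    track.bytes().map(|b| b as f32 / 255.0).collect()
}

fn main() {
    // Cache shared between the warmup worker and the (later) build step.
    let cache: Arc<Mutex<HashMap<String, Vec<f32>>>> =
        Arc::new(Mutex::new(HashMap::new()));
    let tracks = vec!["01.flac".to_string(), "02.flac".to_string()];

    let worker_cache = Arc::clone(&cache);
    let worker = thread::spawn(move || {
        for track in &tracks {
            let peaks = analyze(track);
            worker_cache.lock().unwrap().insert(track.clone(), peaks);
        }
    });

    // Meanwhile the UI/CLI keeps serving the user; by build time,
    // results may already be sitting in the cache.
    worker.join().unwrap();
    assert_eq!(cache.lock().unwrap().len(), 2);
}
```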
Then, because of the graphical interface, I needed to make all internal data shareable across threads (that's pretty much done by now), and in turn we can now reap the benefits of that in the form of concurrent processing.
I just naively parallelized decoding, peak generation and encoding, and testing with a single album of 17 tracks I'm seeing ~5.5 times faster completion. In other words, on my machine², a build that used to take one hour could now be done in roughly 12 minutes, if these figures hold in the larger picture.
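For the curious, "naively parallelized" can be pictured roughly like this (a sketch with only the standard library, not faircamp's actual code; `process_track` is a hypothetical stand-in for decode + peaks + encode):

```rust
// Hypothetical sketch: fan per-track work out over threads, one per track.
use std::thread;

// Stand-in for the expensive decode / peak generation / encode pipeline.
fn process_track(track: &str) -> String {
    format!("{} processed", track)
}

fn main() {
    let tracks: Vec<String> = (1..=17).map(|n| format!("track-{n:02}")).collect();

    // thread::scope lets worker threads borrow `tracks` directly.
    let results: Vec<String> = thread::scope(|scope| {
        let handles: Vec<_> = tracks
            .iter()
            .map(|track| scope.spawn(move || process_track(track)))
            .collect();
        // Joining in spawn order keeps results aligned with `tracks`.
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });

    assert_eq!(results.len(), 17);
    println!("{}", results[0]); // prints "track-01 processed"
}
```

In practice you'd cap the worker count at the number of cores (e.g. with a thread pool) rather than spawning one thread per track, but even this naive version shows where the speed-up comes from.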
(¹) implementation pending, but I see zero blockers to that
(²) 16 cores, average i7