As @PluginPenguin pointed out, the cores don’t matter. Just to give you an additional hint:
Threads are an abstraction layer over physical cores AND timeslices. Back on single-core machines we already had threads running "in parallel" (the term is arguably justified, since from each thread's point of view the time of all other threads was frozen). One thread would run for a while and then be suspended, its registers saved and its cache warmth lost, which is one piece of the overhead you pay for parallelising.
The other problem is that the audio stream is a single entity to produce: even if several threads contributed to it, you could only deliver the signal to the driver once the slowest of them had finished. So most of your threads would spend their time waiting for each other.
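A small sketch of that effect (hypothetical workloads, simulated with sleeps): the block is only "delivered" after joining all workers, so the total time is bounded below by the slowest one.

```cpp
#include <chrono>
#include <thread>
#include <vector>

// Simulate rendering one audio block with several worker threads.
// Each entry in workMs is how long one worker needs. The function
// returns the elapsed wall-clock time until ALL workers are done,
// because only then could the block go to the driver.
std::chrono::milliseconds renderBlock (const std::vector<int>& workMs)
{
    const auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> workers;

    for (int ms : workMs)
        workers.emplace_back ([ms]
        {
            std::this_thread::sleep_for (std::chrono::milliseconds (ms));
        });

    for (auto& t : workers)
        t.join();   // deliver only after the slowest worker has finished

    return std::chrono::duration_cast<std::chrono::milliseconds> (
        std::chrono::steady_clock::now() - start);
}
```

Even though two of the three workers below finish quickly, the caller waits at least as long as the slowest one.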
Last but not least, most of the time your software is not the only thing running. Especially in a DAW, you have many plugins across several tracks plus software instruments. The host is already parallelising these, and it can do so much better by using one thread per track and mixing in one "master" thread that is triggered by the audio hardware.
About threads in JUCE:
Every application has at least one thread: in console applications it is just the execution of int main(int, char**); in GUI applications it is the message thread, which is run by the OS. There you receive messages that are handled one after another. A Timer simply tells the OS "call me in n msecs". If the OS is busy, the callback may arrive a little late, but it is still the message thread that executes it.
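Here is a hypothetical analogue of such a message loop in plain C++ (this is not how JUCE implements it, just the idea): handlers run strictly one after another on a single thread, so a timer callback that becomes due while another handler is busy simply runs later, on that same thread.

```cpp
#include <chrono>
#include <functional>
#include <queue>
#include <thread>

// A toy single-threaded message loop: posted messages are handled
// in order, one at a time, on whichever thread calls run().
struct MessageLoop
{
    std::queue<std::function<void()>> messages;

    void post (std::function<void()> fn)
    {
        messages.push (std::move (fn));
    }

    void run()
    {
        while (! messages.empty())
        {
            auto fn = std::move (messages.front());
            messages.pop();
            fn();   // one handler at a time; nothing interrupts it
        }
    }
};
```

If a slow handler is queued before a "timer" callback, the timer fires late, but still on the same thread as every other handler.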
Every audio application automatically has a second thread, driven by the audio hardware. It is started either by adding an AudioIODeviceCallback to an AudioIODevice, or, when a host loads us as a plugin, by the host calling our processBlock() method on its audio thread.
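The callback mechanism can be sketched in standard C++ like this (a hypothetical FakeAudioDevice, not the JUCE API): the "device" owns a thread that repeatedly invokes whatever callback was registered, just as the hardware driver invokes an AudioIODeviceCallback, or a host invokes processBlock().

```cpp
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Toy stand-in for an audio device: it spins up its own thread and
// calls the registered callback once per "block", the way a real
// driver would. The callback therefore runs on the device's thread,
// not on the application's message thread.
class FakeAudioDevice
{
public:
    using Callback = std::function<void (std::vector<float>&)>;

    void start (Callback cb, int numBlocks)
    {
        driver = std::thread ([cb, numBlocks]
        {
            std::vector<float> block (64, 0.0f);   // one small audio block
            for (int i = 0; i < numBlocks; ++i)
                cb (block);                        // "processBlock" on the audio thread
        });
    }

    void stop()
    {
        if (driver.joinable())
            driver.join();
    }

private:
    std::thread driver;
};
```

The application only registers a callback; it never calls it itself — the device's thread does.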
The challenge is not to use as many patterns and techniques as possible, but to achieve your goal with minimal effort (for you and for the CPU).
(Sorry, got a little longer)