How does the Intel CPU design flaw (to be patched at the OS kernel level) affect audio apps?

I don’t know how severe this issue is for us and would like to hear from people who have more low-level knowledge.

As far as I understand from the couple of posts I’ve read so far, operations involving calls into the OS kernel are the ones that will be slowed down substantially? Filesystem operations, for example: does that mean DFD Streaming will suddenly take a big performance hit after an OS update?

Thanks for any and all info you are willing to share.

Pretty sure it’s been said that program-local code shouldn’t be affected, though of course this may vary by OS. As a result, code running in the audio thread shouldn’t see any real performance hit, since we’re all good programmers who don’t use locks or syscalls in our audio callbacks. :slight_smile:

I doubt disk streaming speed will be affected in any noticeable way, because the bottleneck there will undoubtedly be the disk itself, as it was before the flaw was found.

In the context of audio software, off the top of my head the only things I can think of that would be affected by this bug are program initialization (talking to audio drivers, allocating lots of memory) and maybe some of the memory-related work happening on the message thread.

Generally audio doesn’t use tons of syscalls, so hopefully we’re good…
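To make “no syscalls in the callback” concrete: a well-behaved audio callback touches only memory it already owns and does pure arithmetic, so it never crosses the user/kernel boundary and the patch doesn’t affect it. A minimal sketch (the function name and signature are made up for illustration, not any particular framework’s API):

```cpp
#include <cstddef>

// A deliberately boring audio callback body: it reads and writes only
// the buffer it was handed and does pure arithmetic. No locks, no
// allocation, no syscalls -- nothing here enters the kernel, so the
// TLB-flushing workaround never gets triggered.
void apply_gain(float* buffer, std::size_t num_samples, float gain)
{
    for (std::size_t i = 0; i < num_samples; ++i)
        buffer[i] *= gain;
}
```

The moment the callback calls `malloc`, takes a contended lock, or touches a file, that guarantee is gone, which is why those things are banned from audio threads in the first place.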


Thanks for the reply.

What does “program-local code” mean?

Here is the quote that got me alarmed (specifically the statement “call into the kernel took twice as long”):

The impact of this will vary depending on the workload. Every time a program makes a call into the kernel—to read from disk, to send data to the network, to open a file, and so on—that call will be a little more expensive, since it will force the TLB to be flushed and the real kernel page table to be loaded. Programs that don’t use the kernel much might see a hit of perhaps 2-3 percent—there’s still some overhead because the kernel always has to run occasionally, to handle things like multitasking.

But workloads that call into the kernel a ton will see much greater performance drop off. In a benchmark, a program that does virtually nothing other than call into the kernel saw its performance drop by about 50 percent; in other words, each call into the kernel took twice as long with the patch than it did without.

This is from this article.

When I say “program-local” I mean code that doesn’t have to leave its own virtual address space, i.e. doesn’t call kernel code to do networking or filesystem work, talk to drivers, or allocate/free memory from the OS. Really, anything that has to talk to the kernel, or pass through the kernel into protected memory space, is what the bug exploits.
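A rough illustration of that split (the function names are just examples): the first function never leaves our address space, while the second ends up in `open`/`read`/`close` syscalls, each of which pays the extra entry/exit overhead on a patched kernel.

```cpp
#include <cstdio>
#include <numeric>
#include <vector>

// Program-local: touches only our own memory, no kernel involvement,
// so the patch adds no cost here.
long sum_locally(const std::vector<long>& v)
{
    return std::accumulate(v.begin(), v.end(), 0L);
}

// Kernel-crossing: fopen/fread/fclose each bottom out in syscalls,
// so every call crosses the user/kernel boundary. Returns -1 if the
// file can't be opened.
long count_bytes(const char* path)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f)
        return -1;
    long total = 0;
    char chunk[4096];
    std::size_t got;
    while ((got = std::fread(chunk, 1, sizeof chunk, f)) > 0)
        total += static_cast<long>(got);
    std::fclose(f);
    return total;
}
```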

The performance losses are occurring because the hot-fix workarounds being rolled out immediately (kernel page-table isolation) unmap the kernel from each process’s address space, so every entry into the kernel forces a TLB flush and a page-table switch. From what I gather, this is what stops malicious user code from using speculative execution to read kernel memory. (The related branch-predictor mitigations are a separate part of the same family of bugs.)

Since this is a software workaround, it is essentially crippling the hardware in certain circumstances (i.e. entering a syscall) to compensate for the issue since hardware obviously can’t be updated with bug fixes like software can.
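You can see the per-syscall cost directly with a crude microbenchmark. This is a POSIX-only sketch of my own (`getpid()` is roughly the cheapest syscall available, though some libc versions have cached it); on a patched kernel the gap between the two numbers widens:

```cpp
#include <chrono>
#include <unistd.h>  // getpid() -- POSIX only

// Time n trivial operations and return the average cost in
// nanoseconds per call. With use_syscall == true each iteration does
// a kernel round-trip via getpid(); otherwise it does a trivial
// local operation for comparison.
double ns_per_call(int n, bool use_syscall)
{
    volatile long sink = 0;  // volatile so the loop isn't optimized away
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i)
        sink = sink + (use_syscall ? static_cast<long>(getpid())
                                   : static_cast<long>(i));
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / n;
}
```

Comparing `ns_per_call(1000000, true)` against `ns_per_call(1000000, false)` before and after applying the patch would show the added per-syscall overhead; the absolute numbers depend entirely on your machine.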

Those of us who do realtime programming on non-realtime systems (read: audio on typical PCs/mobile) don’t need to worry about the performance implications of this, since we never do syscalls in the audio thread (or at least we’re really not supposed to) due to the nondeterministic amount of time they can take - a notion reinforced by this surprise performance degradation.


I think there is also a problem with branch prediction: mitigations affecting computed/indirect jumps, such as virtual function calls, could cause a big performance hit.

On Windows, most audio drivers run in kernel mode to communicate with the hardware. So as far as I understand, we may potentially see a slight performance hit at the start of each audio frame (when the kernel-mode audio driver calls back into the user-mode ASIO client to produce the audio callback).
