AudioTransportSource - new Thread parameter?

There’s a change that I haven’t seen announced - AudioTransportSource now seems to need to be given a thread. I guess it used to make its own thread, and now I should be making one?

What’s the thinking behind that? Should I use the same thread in a few areas? How is the new design intended to improve things?


Yeah, it was really bad practice for me to have been using a secret static thread behind the scenes to operate that class, so I finally got rid of it.

By making it explicit there are no hidden inter-dependencies, and you get full control over the thread that is being used for buffering, so you can control when it gets started, stopped, deleted, etc. You could also share your audio buffering thread with other tasks if you want to, or use separate threads for different players.
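For reference, a minimal sketch of that explicit-thread pattern (the `Player` struct and names here are illustrative, `readerSource` is assumed to be a `PositionableAudioSource` you already own, e.g. an `AudioFormatReaderSource`, and the exact `setSource` parameters may differ between JUCE versions):

```cpp
#include <juce_audio_devices/juce_audio_devices.h>

// Illustrative only: one buffering thread owned explicitly by the app,
// instead of a hidden static one inside AudioTransportSource.
struct Player
{
    juce::TimeSliceThread bufferingThread { "audio buffering" };
    juce::AudioTransportSource transport;

    void open (juce::PositionableAudioSource* readerSource, double sourceSampleRate)
    {
        bufferingThread.startThread();       // start once, then leave it running
        transport.setSource (readerSource,
                             32768,          // read-ahead buffer size, in samples
                             &bufferingThread,
                             sourceSampleRate);
    }
};
```

The point is that the lifetime of `bufferingThread` is now yours to manage, rather than belonging to a hidden singleton.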

But mostly it’s just a very bad design to have hidden shared singletons being used internally by classes, especially in public code like this.

I understand, but I feel like I’m doing something wrong then. For each thing I’m buffering (generally just two, with only one doing a lot of work most of the time) I made a TimeSliceThread. I start it with (now) priority 9, and pass it to the AudioSource when I request buffering. Everything else is the same.

I get a very choppy result when playing back. In other words, the change has broken code that was working well.

Any idea? Did the new buffering scheme work in real-world tests?


Oh, no - don’t create a thread per item! Just create a single thread, and use it to process all your sources. That’s the whole point of a TimeSliceThread - it’s a single thread that is shared between a bunch of clients.
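In code, that shared-thread arrangement looks something like this (a sketch; `sourceA`/`sourceB` stand in for whatever `PositionableAudioSource`s you’re actually playing):

```cpp
#include <juce_audio_devices/juce_audio_devices.h>

// One TimeSliceThread shared by all the transport sources in the app.
// TimeSliceThread round-robins between its registered clients on a
// single OS thread, so adding more sources doesn't add more threads.
void setUpPlayers (juce::AudioTransportSource& playerA,
                   juce::AudioTransportSource& playerB,
                   juce::PositionableAudioSource* sourceA,
                   juce::PositionableAudioSource* sourceB,
                   juce::TimeSliceThread& sharedThread)
{
    sharedThread.startThread();
    playerA.setSource (sourceA, 32768, &sharedThread);
    playerB.setSource (sourceB, 32768, &sharedThread);
}
```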


Yeah, but I only have one thread working and one idle, and it’s still not working right. Listening to it again - maybe it’s related to resampling too? The choppy audio also sounds slowed down.


Silly question, but did you remember to actually start the thread you created?

Yup. I also tried starting/restarting it just before I call setSource. If I don’t set the thread, it does sound different. Could the combination of resampling and buffering be causing a problem?

Maybe restarting it is the problem? If you just create a single thread and leave it running, it should behave exactly like the old code, since that’s all it used to do behind the scenes.

That’s what I have. The restart was an experiment - I tried doing it just before I changed the speed (i.e. the resampling ratio).

- All code as before, except with the added buffer thread: choppy, sounds like roughly half-speed playback.
- Increasing the output’s buffer size: smooths things out a bit, but still choppy.
- Not starting the thread at all: can’t really tell how it sounds - it freezes up my playback.
- Not using a buffer at all: sounds godawful.

So - stepping back a level. The reason I was updating my app was to catch up with JUCE and FFmpeg. Let me go and see if they broke audio stream timestamps somehow. That might also explain the problem, I think.

It just so happens that the JUCE change was highly related to audio buffering - it might have been a red herring.

OK, I moved back to my previous FFmpeg version, which I know works fine - still bad audio.

It seems likely to be a JUCE problem at this point… but maybe more with the resampling than the buffering?

Yeah, I think it’s unlikely to be the thread that’s causing the problem, maybe look elsewhere first.

Okay, well I went back and forth enabling and disabling things, and moving back and forth on my git tree and the JUCE git tree. The modules branch complicates that, of course.

Still having a lot of trouble pinning it down. The output thread isn’t getting starved - I always have the right data for it - so possibly the buffer thread is innocent. I did suspect the resampling source, but the pitch is OK, and while A/V sync is off, it seems consistent.

So - one thing I found is that when I crank the output buffer size way, way up - the maximum of the audio setup dialog, 42 ms - it starts to sound better. Still very choppy, but better. It was fine at 11 ms before.

So - it’s a Mac, OS X 10.6, tip-of-tip JUCE. Did something change in the outputs, or in the Mac audio output specifically?

It’s connected to the granularity of the sample buffer size. When the read-ahead buffer is the same size as (or smaller than, I presume) the callback buffer - which seems to be 2048 on the Mac - playback is very poor. It used to be fine at 2048; now I need to go up to 8000+ to get smooth playback.

This repeats on the JuceDemo - set the buffer size to 2048 and you’ll hear the problem.

Should I be doing something to deal with the extra latency in this case? A/V sync appears to be off now (it’s audio for video playback).


Sorry, I’m confused about which sample buffer you mean… If you mean the size passed to AudioTransportSource::setSource, then you’d certainly need that to be at least twice as big as the audio callback’s buffer, otherwise the callback will always have to block and wait for it to fill up. But that’s not something that could have been different in the old version, AFAICT(?)
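A hedged sketch of that rule of thumb in plain C++ (the 2× factor comes from the explanation above; the 8192-sample floor is just an illustrative value, chosen to match what seemed to work in practice here):

```cpp
#include <algorithm>

// Pick a read-ahead buffer size comfortably larger than the device
// callback size, otherwise the callback can drain the whole buffer in
// one go and then has to block waiting for the buffering thread.
int chooseReadAhead (int deviceBufferSizeSamples)
{
    return std::max (2 * deviceBufferSizeSamples, 8192);
}
```

With a 2048-sample callback this would choose 8192, which lines up with the “8000+” figure found empirically above.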

Apparently it was. I cranked it up, and it seems to work.