Block Size on iOS 8

Hi,
this is the first time I have tried to build an iOS app. I started from the provided AudioAppExample.

I am trying to set the block size using

    juce::AudioDeviceManager::AudioDeviceSetup result;
    deviceManager.getAudioDeviceSetup(result);
    result.bufferSize = 256; //this line does not have any effect
    deviceManager.setAudioDeviceSetup(result, true);

but calling it in the MainComponent() constructor or in prepareToPlay() has no effect, and the block size defaults to 1156. In fact, at those points in the code, result.bufferSize is 0 after the call to getAudioDeviceSetup().

Where is the appropriate point at which to call setAudioDeviceSetup(result, true) so that it actually affects the block size?

I'm using the latest master from JUCE, and targeting an iPad mini Retina running iOS 8.0.2.

Thanks,
Giulio

Full code of my MainComponent.cpp is below

/*
  ==============================================================================

    This file was auto-generated!

  ==============================================================================
*/

#ifndef MAINCOMPONENT_H_INCLUDED
#define MAINCOMPONENT_H_INCLUDED

#include "../JuceLibraryCode/JuceHeader.h"


//==============================================================================
class MainContentComponent   : public AudioAppComponent
{
public:
    //==============================================================================
    MainContentComponent()
        : lineLength (0.05*44100),
          currentLine (0),
          sampleRate (0.0),
          expectedSamplesPerBlock (0)
    {
        setSize (800, 600);

        // specify the number of input and output channels that we want to open
        setAudioChannels (0, 2);
        juce::AudioDeviceManager::AudioDeviceSetup result;
        deviceManager.getAudioDeviceSetup(result);
        result.bufferSize = 256; //this line does not have any effect
        deviceManager.setAudioDeviceSetup(result, true);
    }

    ~MainContentComponent()
    {
        shutdownAudio();
    }

    //==============================================================================
    void prepareToPlay (int samplesPerBlockExpected, double newSampleRate) override
    {
        sampleRate = newSampleRate;
        expectedSamplesPerBlock = samplesPerBlockExpected;
    }

    /*  This method generates the actual audio samples.
        In this example the buffer is filled with a sine wave whose frequency and
        amplitude are controlled by the mouse position.
     */
    void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill) override
    {
        bufferToFill.clearActiveBufferRegion();
        for (int chan = 0; chan < bufferToFill.buffer->getNumChannels(); ++chan)
        {
            float* const channelData = bufferToFill.buffer->getWritePointer (chan, bufferToFill.startSample);

            for (int i = 0; i < bufferToFill.numSamples ; ++i)
            {
                if(currentLine>0){
                    channelData[i] = (currentLine--)/(float)lineLength;
                }
            }
        }
    }

    void releaseResources() override
    {
        // This gets automatically called when audio device parameters change
        // or device is restarted.
    }


    //==============================================================================
    void paint (Graphics& g) override
    {
        // (Our component is opaque, so we must completely fill the background with a solid colour)
        g.fillAll (Colours::black);
        juce::AudioDeviceManager::AudioDeviceSetup result;
        deviceManager.getAudioDeviceSetup(result);
        g.setColour (Colours::lightgreen);
        String blockSize;
        blockSize << "Current block size: " << result.bufferSize << "\n";
        g.drawText(blockSize, 200, 100, 300, 200, 1);
        
        g.setColour (Colours::grey);
    }

    // Mouse handling..
    void mouseDown (const MouseEvent& e) override
    {
        if(currentLine == 0){
            currentLine = lineLength;
        }
    }

    void mouseDrag (const MouseEvent& e) override
    {
    }

    void mouseUp (const MouseEvent&) override
    {
        repaint();
    }

    void resized() override
    {
        // This is called when the MainContentComponent is resized.
        // If you add any child components, this is where you should
        // update their positions.
    }


private:
    //==============================================================================
    int lineLength;
    int currentLine;
    double sampleRate;
    int expectedSamplesPerBlock;

    JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (MainContentComponent)
};


Component* createMainContentComponent() { return new MainContentComponent(); };

#endif  // MAINCOMPONENT_H_INCLUDED

That’s a preferred block size. There’s no guarantee that the device will agree to use the value that you pass in there. But 1156 is a very strange value, so it does sound like maybe you’re doing something wrong.

Obviously attempting to set up the device in prepareToPlay is at best going to fail, and at worst could do all kinds of horrible things, because it’s already running at that point, and it’s called from the audio thread.

Talking about doing it in your MainComponent constructor could mean anything, as it depends when your code creates that object.

Thanks Jules,
so where is the best place to call it, in the hope that it will work?
Where it is just now in my code above, it is called after setAudioChannels(), which in turn calls deviceManager.initialise().
I tried this on an iPhone 6 running iOS 9.2 and I get exactly the same behaviour: the displayed value is 1156, but the actual latency I feel when tapping the screen is noticeably lower.

Doesn’t really matter when you call it, as long as it’s safely done from the message thread when your app is running.
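
Something along these lines should do it (a minimal sketch, assuming this is a member function of your AudioAppComponent so that deviceManager is in scope):

    // Minimal sketch: ask for a preferred block size once the device is open.
    // Call this from the message thread (e.g. from a button click or the
    // component's constructor, after setAudioChannels()).
    void requestPreferredBufferSize (int preferredSize)
    {
        juce::AudioDeviceManager::AudioDeviceSetup setup;
        deviceManager.getAudioDeviceSetup (setup);

        setup.bufferSize = preferredSize;   // only a preference, not a guarantee
        auto error = deviceManager.setAudioDeviceSetup (setup, true);

        if (error.isNotEmpty())
            DBG ("setAudioDeviceSetup failed: " + error);
    }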

I'm seeing this also, and the culprit seems to be some AudioUnit-related code which was last modified in the 4.2 mega-commit:

    if (AudioUnitGetProperty (audioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &framesPerSlice, &dataSize) == noErr
         && dataSize == sizeof (framesPerSlice) && framesPerSlice != actualBufferSize)
    {
        actualBufferSize = framesPerSlice;
        prepareFloatBuffers (actualBufferSize);
    }

The actualBufferSize assignment here overwrites the buffer size reported by the AVAudioSession. So the callbacks will still be using whatever the AVAudioSession reports, but the AudioIODevice reports something else.

kAudioUnitProperty_MaximumFramesPerSlice is the more reliable way to get the maximum buffer size (which indeed is 1156 on many iOS devices). The buffer size reported by AVAudioSession is the one that iOS will usually use, but iOS will still use larger buffer sizes on occasion - never more than kAudioUnitProperty_MaximumFramesPerSlice, though. We observed this with ROLI's in-house NOISE app.
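
To see both numbers side by side, something like this can be used (an Objective-C++ sketch, not JUCE code, assuming you already have a valid, initialised AudioUnit called audioUnit):

    // Sketch: compare the worst-case slice size with what iOS normally delivers.
    UInt32 maxFramesPerSlice = 0;
    UInt32 dataSize = sizeof (maxFramesPerSlice);
    AudioUnitGetProperty (audioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                          kAudioUnitScope_Global, 0, &maxFramesPerSlice, &dataSize);

    AVAudioSession* session = [AVAudioSession sharedInstance];
    double usualFrames = session.IOBufferDuration * session.sampleRate;

    // maxFramesPerSlice is the occasional worst case (e.g. 1156);
    // usualFrames is what the callbacks normally deliver (e.g. 256).
    NSLog (@"usual: %.0f frames, maximum: %u frames",
           usualFrames, (unsigned) maxFramesPerSlice);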

Does “on occasion” here mean that the buffer size might be constantly at the higher rate in some situations, or that every now and then there will be a single larger buffer?

I’m asking because we need to have a very good estimate of latency in our app, and this really makes a big difference to us. I’d really like to see some better API for this: reporting a larger buffer size when most of the time it’s running at the lower size just doesn’t work out at all from the latency estimation perspective…

Ok, so I found out that this can be “fixed” by simply doing

    UInt32 framesPerSlice = actualBufferSize;
    UInt32 dataSize = sizeof (framesPerSlice);
    AudioUnitSetProperty (audioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &framesPerSlice, dataSize);

in createAudioUnit().

Do you have any reason not to include this fix? I need to make a build today, so I’d rather wait for this to be fixed in git than fix in our fork…

EDIT: Forget that. The value gets overridden in AudioUnitInitialize (audioUnit), and asking for it after that returns 4096 on my test device. I don't think we should be asking for the value before AudioUnitInitialize in any case…
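
In other words, the ordering would have to be roughly this (just a sketch, assuming a fully configured audioUnit):

    // Sketch: only query kAudioUnitProperty_MaximumFramesPerSlice once the unit
    // has been initialised - the value read before that isn't meaningful.
    UInt32 framesPerSlice = 0;
    UInt32 dataSize = sizeof (framesPerSlice);

    if (AudioUnitInitialize (audioUnit) == noErr)
        AudioUnitGetProperty (audioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                              kAudioUnitScope_Global, 0, &framesPerSlice, &dataSize);

    // On my test device this reports 4096 after initialisation.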

Now I think the best solution would be to add a maxBufferSize to the driver (and initialize it properly). The documentation for audioDeviceIOCallback already says

The number of samples will depend on the
audio device’s buffer size and will usually remain constant,
although this isn’t guaranteed, so make sure your code can
cope with reasonable changes in the buffer size from one
callback to the next.

so I don't think we should be modifying the actual buffer size the driver reports…
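
To make that concrete, the kind of split I have in mind would look roughly like this (purely hypothetical - these names are not an existing JUCE API):

    // Hypothetical sketch only, NOT existing JUCE API: report the common block
    // size as the buffer size, and expose the occasional worst case separately
    // instead of folding it into the reported value.
    struct ProposedIOSBufferInfo
    {
        int currentBufferSizeSamples;  // what AVAudioSession normally delivers (e.g. 256)
        int maximumBufferSizeSamples;  // kAudioUnitProperty_MaximumFramesPerSlice (e.g. 1156)
    };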

Yeah, you can't set kAudioUnitProperty_MaximumFramesPerSlice - I tried that as well. I think the current JUCE code is correct. With the current state of the code, you can get the maximum buffer size your callback will be called with via AudioIODevice::getCurrentBufferSizeSamples, and the latency with AudioIODevice::getOutputLatencyInSamples and AudioIODevice::getInputLatencyInSamples. They will not be the same: the latency in samples is typically much lower than getCurrentBufferSizeSamples on iOS devices. The max buffer size will only be used occasionally, when, for example, switching apps or locking/unlocking the device.
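
For example (a minimal sketch, assuming deviceManager already has an open iOS device):

    // Sketch: query the numbers mentioned above from the current device.
    if (auto* device = deviceManager.getCurrentAudioDevice())
    {
        const int maxBlockSize  = device->getCurrentBufferSizeSamples();  // worst case on iOS
        const int outputLatency = device->getOutputLatencyInSamples();
        const int inputLatency  = device->getInputLatencyInSamples();

        DBG ("max block: " << maxBlockSize
              << ", output latency: " << outputLatency
              << ", input latency: "  << inputLatency);
    }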

fabian wrote:

"Yeah, you can't set kAudioUnitProperty_MaximumFramesPerSlice - I tried that as well. I think the current JUCE code is correct."

I’d say it’s not correct:

  1. The value that kAudioUnitProperty_MaximumFramesPerSlice returns changes right after calling AudioUnitInitialize. The current code is asking for an uninitialized value!
  2. As I mentioned, the audioDeviceIOCallback documentation already says that you should “make sure your code can cope with reasonable changes in the buffer size from one callback to the next”. The value returned by currentBufferSizeSamples should be the common case - in this case what AVAudioSession says - NOT the worst case.

When talking about total latency, the latency returned by the latency getters in AVAudioSession needs to be taken into account in addition to the latency that is caused by buffering. The buffering always adds latency on top of that - the bigger the buffer size, the longer the latency.
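
As a rough sketch of that arithmetic (Objective-C++, assuming the AVAudioSession getters behave as described here):

    // Rough estimate: hardware/driver latency from the session getters, plus the
    // latency added by the buffering itself (one block each way).
    AVAudioSession* session = [AVAudioSession sharedInstance];

    double bufferSeconds      = session.IOBufferDuration;    // seconds per block
    double totalOutputSeconds = session.outputLatency + bufferSeconds;
    double totalInputSeconds  = session.inputLatency  + bufferSeconds;

    NSLog (@"estimated round trip: %.1f ms",
           (totalInputSeconds + totalOutputSeconds) * 1000.0);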

For me it seems to change only if I've tried setting it before. But I agree it would be safer to query it after the initialize.

Yes, I've been meaning to bug Jules about this documentation. It's the same for the prepareToPlay method, which we recently updated to be more specific, i.e. that it's almost always the maximum buffer size - not the common case.

I assumed the latency returned by AVAudioSession would take at least the HAL buffer into account. I need to recheck this… however, there is also [AVAudioSession sharedInstance].IOBufferDuration. I always assumed that this would return the "common" buffer size, as this is always smaller than kAudioUnitProperty_MaximumFramesPerSlice. I guess we need a way to report the IOBufferDuration value to the user.

Any news on this? I still strongly believe that the reported block size should be the default case, and that the maximum size of the intermittent larger chunks could be reported by some other function - but doesn't have to be, as per the documentation of audioDeviceIOCallback.

I've looked into it and have come to the conclusion that this is a JUCE bug. If you do this on iOS:

    AudioDeviceManager::AudioDeviceSetup desired;
    audioDeviceManager.getAudioDeviceSetup (desired); // start from the current setup
    desired.bufferSize = 256;                         // or any number you want
    audioDeviceManager.setAudioDeviceSetup (desired, true);

    AudioDeviceManager::AudioDeviceSetup actual;
    audioDeviceManager.getAudioDeviceSetup (actual);

Then actual.bufferSize is not what you've just set. It always ends up being whatever the value of kAudioUnitProperty_MaximumFramesPerSlice is (1156 for me). This is not because of iOS: setPreferredIOBufferDuration accepts 256. This is what I've observed:

  1. setAudioDeviceSetup calls iOSAudioIODevice::open, which calls updateCurrentBufferSize, which sets the buffer size on the AVAudioSession successfully. So far so good.
  2. iOSAudioIODevice::open calls handleRouteChange, which calls createAudioUnit.
    • It creates a new audio unit, but doesn’t set the kAudioUnitProperty_MaximumFramesPerSlice property. The Apple Documentation tells you that you should configure this property:

      Your application should always configure and honor this property.

    • actualBufferSize is set to the value of the kAudioUnitProperty_MaximumFramesPerSlice property. Here, whatever the user has set before gets overwritten.
  3. setAudioDeviceSetup sets currentSetup.bufferSize to the value that has just been overwritten.

The part that sets the latency using setPreferredIOBufferDuration is correct. It works in JUCE and in other iOS apps. However, storing MaximumFramesPerSlice in actualBufferSize is wrong if you haven't set that property to actualBufferSize beforehand. So I think this is a JUCE bug.

So the value isn't 1156 because setPreferredIOBufferDuration decided to use something other than 256. It's that value because it gets assigned by JUCE.

OK. This should be fixed on the develop branch now. Thanks for looking into this.

Wow, that was fast. Thank you :)

Just a heads-up: I've changed the iOS block size code once again. I now probe the block size after AudioUnitInitialize, as suggested above. This seems to work much better.

Hello Fabian,

I've encountered the same issue with the latency, but the fix didn't completely work for me.
I'm using version 4.2.1 of JUCE. I initialise the audio device using an XML backup of the audio device state. When I store that state, the audio buffer size is set to 1156 instead of the buffer size I chose (consistent with the issue in this thread).
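
For context, the save/restore is done roughly along these lines (a simplified sketch rather than my exact code; the channel counts are just placeholders):

    // Simplified sketch: persist and restore the audio device state via XML.
    // Saving:
    std::unique_ptr<juce::XmlElement> state (deviceManager.createStateXml());
    // ... write `state` out to a file or settings store ...

    // Restoring on the next launch:
    deviceManager.initialise (0, 2, state.get(), true);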

I first switched to the develop branch to get your fix. Once I had built and started my app, I had no sound at all. I had to change the buffer size manually in my software; after that the app worked fine. When I stopped and started the app again, the buffer size displayed was the one stored in the XML file (and also the one I chose), but still there was no sound. I had to force another buffer size.

I switched back to the standard 4.2.1 version and inserted the modification to juce_ios_Audio.cpp in the createAudioUnit method. I got the same behaviour.

Any idea ?

Thanks

From 4.2.0 up to the current develop branch I get no audio output on iOS with my old code (which worked with 4.1.0). It only works if another app is using the audio device too. What's wrong?

Edit: But setting the block size and sample rate is working now :)

The missing audio output could be a performance problem - I'm not sure yet. I only have old test devices here and my synth eats a lot of CPU.

But there is something else wrong.
With a sample rate of 44 kHz and a block size of 1024 samples I get a callback for 1024 samples after about 21 ms. If I change the sample rate to 22 kHz I still get the callback for 1024 samples after the same time, but it should be about 42 ms. So it seems the sample rate isn't really being changed - the block size seems OK.
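
For what it's worth, this is roughly how I check it (a sketch to drop into a test component's getNextAudioBlock, using the sampleRate saved in prepareToPlay - not production code, since it logs from the audio thread):

    // Sketch: measure the gap between audio callbacks and compare it with the
    // time one block should take at the reported sample rate.
    void getNextAudioBlock (const juce::AudioSourceChannelInfo& info) override
    {
        const double now = juce::Time::getMillisecondCounterHiRes();

        if (lastCallbackTime > 0.0 && sampleRate > 0.0)
            DBG ("gap: " << (now - lastCallbackTime) << " ms, expected: "
                  << (1000.0 * info.numSamples / sampleRate) << " ms");

        lastCallbackTime = now;
        info.clearActiveBufferRegion();
    }

    double lastCallbackTime = 0.0;  // extra member used only for this measurement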

Please take a look at this - I'm "here" for testing if you need anything.

I see the same sample rate problems with the JUCE demo app on iOS 9 - reducing the sample rate just pitches the test sound in the audio device settings, but does not change the sample rate.