Audiobus

Hi Jules & All

I worked through the AVAudioSession stuff and got Audiobus working with the Juce demo. It’s a bit cobbled together (I’d never written a line of Objective-C before last week) but it’s a start and it works!

In order to do this, iOSAudioIODevice creates an instance of an Objective-C class which has some methods for working with the audio session, adding Audiobus ports, etc. This Obj-C class can also register to receive route change notifications from AVAudioSession (the new way of doing this in iOS 7+).
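(Concretely, that just means registering with NSNotificationCenter for AVAudioSessionRouteChangeNotification, roughly:)

    [[NSNotificationCenter defaultCenter] addObserver: self
                                             selector: @selector (routeChange:)
                                                 name: AVAudioSessionRouteChangeNotification
                                               object: nil];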

The problem I’m having is how to send route change notifications from the Obj-C class back to its C++ parent. I can’t make it a friend class, so is there any way to do this without having to make iOSAudioIODevice::routingChanged() a public method?

Or am I otherwise going about this all wrong?

Nick

There's no harm in making that a public method - the whole class is internal to the library, it's not public as far as users are concerned!

Ok, I'll do that then.

What is the purpose of the routingChangedStatic() method? I don't really understand why that was needed for "the old way," but I'm assuming I can totally bypass that now and just call routingChanged directly from my Obj C class. 

It was a callback. Callbacks can't be non-static member functions.
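(The old C AudioSession API wants a plain function pointer plus a void* context, so the usual pattern - and what routingChangedStatic() did - is a static trampoline that casts the context back to the object and forwards the call:)

    static void routingChangedStatic (void* client, AudioSessionPropertyID, UInt32 /*inDataSize*/, const void* propertyValue)
    {
        // hand off to the non-static member that does the real work
        static_cast<iOSAudioIODevice*> (client)->routingChanged (propertyValue);
    }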

Ok. Makes sense.

Do you happen to remember why you are creating a new AudioUnit whenever the routing changes on iOS? It doesn't seem like this is necessary (and in fact it's causing me problems with Audiobus). Right now I can totally bypass all of the route change code you have and everything still seems to work fine when changing devices (for example plugging in headphones). So I'm not sure what I'm missing here ...

 

I really can't remember - it's been like that for a long time. And TBH I've mostly relied on other people to find the edge-case problems with the iOS audio, so it was probably done like that in response to someone else reporting a problem.

Has any more progress been made on this?  I would like to support Audiobus as well.  Also, is a rewrite of the iOS audio device actually necessary?

I actually have Audiobus running right now on iOS 7, but it's a bit of a hack. I will try to clean up my code so I can share it here soon. Right now my main priority is to get it running on iOS 8 first, which is blocked by some other issues.

As far as I can tell, the simplest way to support Audiobus is to rewrite the iOS audio device code. The main work is actually updating the JUCE audio device to use AVAudioSession (as was suggested earlier in the thread), because there is a bunch of deprecated code there. Once that's sorted, the Audiobus part is relatively easy: you just need access to the AudioUnit that Juce creates. So I've done it by doing all of the Audiobus initialization etc. from within the iOS audio device code. It's very messy because there's a bunch of Objective-C++ and an Objective-C wrapper involved, etc. It's pretty ugly :)
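(In a nutshell, the Audiobus part boils down to handing the wrapper object the RemoteIO AudioUnit once Juce has created it - simplified from the full code below:)

    // at the end of iOSAudioIODevice::createAudioUnit(), after AudioUnitInitialize():
    [(id) wrapper activateAudiobus: audioUnit];

    // ...and activateAudiobus: in the Obj-C wrapper creates the ABAudiobusController
    // and an ABSenderPort wrapping that AudioUnit (see the .mm below).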

Ok, hoping this will help out any of you trying to get Audiobus running. This is a rewrite of juce_ios_Audio.cpp which uses AVAudioSession and adds Audiobus support. This gives you Inter-App Audio for free.

Because of the Obj-C++ nonsense, I had to split the file up into a .h and .mm -- couldn't figure out a nicer way to do this. It's ugly but it works. This would be a bitch to diff with the original code, so I left a lot of the old AudioSession code here (commented out) so that it's easy to see where it was replaced with AVAudioSession calls.

NOTE:  If you want to get Audiobus running you are going to also need to update your project's .plist and register for an API key, etc. Please read Audiobus documentation for details. There's far too much for me to explain here.

Good luck (you may need it!) :)

-nick

 

.h

//

//  juce_ios_Audio.h

//

//

//


#ifndef JuceDemo_juce_ios_Audio_h

#define JuceDemo_juce_ios_Audio_h


class iOSAudioIODevice  : public AudioIODevice

{

    

public:

    

    iOSAudioIODevice (const String& deviceName);

    ~iOSAudioIODevice();

    

    StringArray getOutputChannelNames() override;

    StringArray getInputChannelNames() override;

    

    Array<double> getAvailableSampleRates() override;

    

    Array<int> getAvailableBufferSizes() override;

    

    int getDefaultBufferSize() override;

    

    String open (const BigInteger& inputChannelsWanted,

                 const BigInteger& outputChannelsWanted,

                 double targetSampleRate, int bufferSize) override;

    

    void close() override;

    

    bool isOpen() override;

    

    int getCurrentBufferSizeSamples() override;

    double getCurrentSampleRate() override;

    int getCurrentBitDepth() override;

    

    BigInteger getActiveOutputChannels() const override;

    BigInteger getActiveInputChannels() const override;

    

    int getOutputLatencyInSamples() override;

    int getInputLatencyInSamples() override;

    

    //int getLatency (AudioSessionPropertyID propID);

    

    void start (AudioIODeviceCallback* newCallback) override;

    

    void stop() override;

    

    bool isPlaying() override;

    String getLastError() override;

    

    bool setAudioPreprocessingEnabled (bool enable) override;

    

    void routingChanged (const NSNotification* notification);

    

private:

    //==================================================================================================

    

    void* wrapper; //Objective C class for receiving notifications from AVAudioSession

    

    AVAudioSession* avAudioSession; //the shared AVAudioSession

    NSError* err;

    

    CriticalSection callbackLock;

    Float64 sampleRate;

    int numInputChannels, numOutputChannels;

    int preferredBufferSize, actualBufferSize;

    bool isRunning;

    String lastError;

    

    AudioStreamBasicDescription format;

    AudioUnit audioUnit;

    UInt32 audioInputIsAvailable;

    AudioIODeviceCallback* callback;

    BigInteger activeOutputChans, activeInputChans;

    

    AudioSampleBuffer floatData;

    float* inputChannels[3];

    float* outputChannels[3];

    bool monoInputChannelNumber, monoOutputChannelNumber;

    

    void prepareFloatBuffers (int bufferSize);

    

    //==================================================================================================

    OSStatus process (AudioUnitRenderActionFlags* flags, const AudioTimeStamp* time,

                      const UInt32 numFrames, AudioBufferList* data);

    

    void updateDeviceInfo();

    

    void updateCurrentBufferSize();

    

    //==================================================================================================

    

    struct AudioSessionHolder

    {

        AudioSessionHolder()

        {

            //AudioSessionInitialize (0, 0, interruptionListenerCallback, this);

            //don't think this has to be done for AVAudioSession

        }

        

        static void interruptionListenerCallback (void* client, UInt32 interruptionType)

        {

            const Array <iOSAudioIODevice*>& activeDevices = static_cast <AudioSessionHolder*> (client)->activeDevices;

            

            for (int i = activeDevices.size(); --i >= 0;)

                activeDevices.getUnchecked(i)->interruptionListener (interruptionType);

        }

        

        Array <iOSAudioIODevice*> activeDevices;

    };


    static AudioSessionHolder& getSessionHolder()

    {

        static AudioSessionHolder audioSessionHolder;

        return audioSessionHolder;

    }

    

    void interruptionListener (const UInt32 interruptionType);

    

    //==================================================================================================

    static OSStatus processStatic (void* client, AudioUnitRenderActionFlags* flags, const AudioTimeStamp* time,

                                   UInt32 /*busNumber*/, UInt32 numFrames, AudioBufferList* data)

    {

        return static_cast<iOSAudioIODevice*> (client)->process (flags, time, numFrames, data);

    }

    

    // Routing changes have to be handled differently with AVAudioSession. However, I can't seem to find any reason that we actually need to respond to routing changes here, so haven't taken a stab at it.

    

    //    static void routingChangedStatic (void* client, AudioSessionPropertyID, UInt32 /*inDataSize*/, const void* propertyValue)

    //    {

    //        static_cast<iOSAudioIODevice*> (client)->routingChanged (propertyValue);

    //    }

    

    //==================================================================================================

    

    void resetFormat (const int numChannels) noexcept;

    

    bool createAudioUnit();

    

    // all of this is unnecessary with AVAudioSession as far as I can see:

    

    // If the routing is set to go through the receiver (i.e. the speaker, but quiet), this re-routes it

    // to make it loud. Needed because by default when using an input + output, the output is kept quiet.

    //static void fixAudioRouteIfSetToReceiver();

    

    //void fixAudioRouteIfSetToReceiver();

    

//    template <typename Type>

//    static OSStatus getSessionProperty (AudioSessionPropertyID propID, Type& result) noexcept

//    {

//        UInt32 valueSize = sizeof (result);

//        return AudioSessionGetProperty (propID, &valueSize, &result);

//    }

    

//    static bool setSessionUInt32Property  (AudioSessionPropertyID propID, UInt32  v) noexcept  { AudioSessionSetProperty (propID, sizeof (v), &v) == kAudioSessionNoError; }

//    static bool setSessionFloat32Property (AudioSessionPropertyID propID, Float32 v) noexcept  { AudioSessionSetProperty (propID, sizeof (v), &v) == kAudioSessionNoError; }

//    static bool setSessionFloat64Property (AudioSessionPropertyID propID, Float64 v) noexcept  { AudioSessionSetProperty (propID, sizeof (v), &v) == kAudioSessionNoError; }

    

    JUCE_DECLARE_NON_COPYABLE (iOSAudioIODevice)

};


#endif


.mm

(also, after making these changes, make sure to update juce_audio_devices.cpp to include "native/juce_ios_Audio.mm" instead of "native/juce_ios_Audio.cpp")

#define AUDIOBUS


#include "juce_ios_Audio.h"

#import <AVFoundation/AVAudioSession.h>

#import <AudioToolbox/AudioToolbox.h>


} // juce namespace


#ifdef AUDIOBUS

#import "Audiobus.h"

#include "AudiobusStatus.h"

#endif


// Objective C class with some methods for working with AVAudioSession and Audiobus.

// This holds a pointer to its parent so we can send notifications to it.


@interface Wrapper : NSObject

{

    juce::iOSAudioIODevice* owner;

    

    #ifdef AUDIOBUS

    ABSenderPort *audiobusOutput;

    #endif

}


#ifdef AUDIOBUS

@property (readonly) ABSenderPort* audiobusOutput;

@property (strong, nonatomic) ABAudiobusController* audiobusController;

#endif


- (void)registerForRouteChangeNotification;


-(void)observeValueForKeyPath:(NSString *)keyPath

                     ofObject:(id)object

                       change:(NSDictionary *)change

                      context:(void *)context;


- (void)routeChange:(NSNotification*)notification;

- (void)activateAudiobus:(AudioUnit)outputUnit;


@end


//-------------


static void * kAudiobusRunningOrConnectedChanged = &kAudiobusRunningOrConnectedChanged;


@implementation Wrapper


#ifdef AUDIOBUS

@synthesize audiobusOutput;

@synthesize audiobusController;

#endif


- (id) initWithOwner: (juce::iOSAudioIODevice*) owner_

{

    if ((self = [super init]) != nil)

    {

        owner = owner_;

    };

    

    return self;

}


- (void)dealloc {

    

    #ifdef AUDIOBUS

    [audiobusController removeObserver:self forKeyPath:@"connected"];

    [audiobusController removeObserver:self forKeyPath:@"audiobusAppRunning"];

    #endif

    [super dealloc];

}


- (void) registerForRouteChangeNotification {

    [[NSNotificationCenter defaultCenter] addObserver:self

                                             selector:@selector(routeChange:)

                                                 name:AVAudioSessionRouteChangeNotification

                                               object:nil];

}


- (void)routeChange:(NSNotification*)notification {

    

    // It doesn't appear Juce needs to do anything with routing changes, so I haven't bothered with this yet:

    //owner->routingChanged (notification);

}



-(void)observeValueForKeyPath:(NSString *)keyPath

                     ofObject:(id)object

                       change:(NSDictionary *)change

                      context:(void *)context {

    

    #ifdef AUDIOBUS

    

    if ( context == kAudiobusRunningOrConnectedChanged ) {

        

        //I created this AudiobusStatus singleton so that I can easily check audiobus' status from elsewhere in the program. I need to check the Audiobus connection status when going into the background for example. Could probably find a more elegant way to do this.
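        // (AudiobusStatus isn't part of this file -- it's just a little singleton I use to stash
        //  the two flags so other parts of the app can read them. Roughly, something like:
        //      struct AudiobusStatus {
        //          static AudiobusStatus* getInstance();
        //          void setConnected (bool);  void setRunning (bool);
        //      };
        //  Implement it however you like, or just strip these calls out.)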

        AudiobusStatus* statusObject = AudiobusStatus::getInstance();

        

        statusObject->setConnected (audiobusController.audiobusConnected);

        statusObject->setRunning (audiobusController.audiobusAppRunning);

        

        //just testing

        if (!audiobusController.audiobusAppRunning) {

            // Audiobus has quit. Time to sleep.

            NSLog(@"Audiobus app has closed");

        }

        

        if(!audiobusController.audiobusConnected) {

            NSLog(@"App disconnected from Audiobus");

        }

    }

    

    else

    {

        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];

    }

    #endif

}


- (void)activateAudiobus:(AudioUnit)outputUnit

{

    #ifdef AUDIOBUS

    

    self.audiobusController = [[ABAudiobusController alloc] initWithApiKey:@"xxxxx"]; //use your API key here

    

    self.audiobusController.connectionPanelPosition = ABConnectionPanelPositionRight; //choose where you want the Audiobus navigation widget to show up

    

    // port information needs to match the information entered into the .plist (see audiobus integration guide)

    ABSenderPort *sender = [[ABSenderPort alloc] initWithName:@"Main Output"

                                                        title:NSLocalizedString(@"Main Output", @"")

                                    audioComponentDescription:(AudioComponentDescription) {

                                        .componentType = kAudioUnitType_RemoteGenerator,

                                        .componentSubType = 'aout', // Note single quotes

                                        .componentManufacturer = 'juce' } //

                                                    audioUnit:outputUnit];

    [audiobusController addSenderPort:sender];

    

    //would create filter or input ports here if I needed them

    

    // Watch the audiobusAppRunning and connected properties

    [audiobusController addObserver:self

                         forKeyPath:@"connected"

                            options:0

                            context:kAudiobusRunningOrConnectedChanged];

    

    [audiobusController addObserver:self

                         forKeyPath:@"audiobusAppRunning"

                            options:0

                            context:kAudiobusRunningOrConnectedChanged];

    

    #endif

}


@end


namespace juce {



iOSAudioIODevice::iOSAudioIODevice (const String& deviceName)

: AudioIODevice (deviceName, "Audio"),

actualBufferSize (0),

isRunning (false),

audioUnit (0),

callback (nullptr),

floatData (1, 2)

{

    Wrapper* newWrapper = [[Wrapper alloc] initWithOwner: this];

    [newWrapper retain];

    wrapper = newWrapper;

    

    avAudioSession = [AVAudioSession sharedInstance];

    

    getSessionHolder().activeDevices.add (this);

    

    numInputChannels = 2;

    numOutputChannels = 2;

    preferredBufferSize = 0;

    

    updateDeviceInfo();

}


iOSAudioIODevice::~iOSAudioIODevice()

{

    getSessionHolder().activeDevices.removeFirstMatchingValue (this);

    close();

}


StringArray iOSAudioIODevice::getOutputChannelNames()

{

StringArray s;

s.add ("Left");

s.add ("Right");

return s;

}


StringArray iOSAudioIODevice::getInputChannelNames()

{

StringArray s;

if (audioInputIsAvailable)

{

    s.add ("Left");

    s.add ("Right");

}

return s;

}


Array<double> iOSAudioIODevice::getAvailableSampleRates()

{

// can't find a good way to actually ask the device for which of these it supports..

static const double rates[] = { 8000.0, 16000.0, 22050.0, 32000.0, 44100.0, 48000.0 };

return Array<double> (rates, numElementsInArray (rates));

}


Array<int> iOSAudioIODevice::getAvailableBufferSizes()

{

Array<int> r;


for (int i = 6; i < 12; ++i)

r.add (1 << i);


return r;

}


int iOSAudioIODevice::getDefaultBufferSize()         { return 1024; }


String iOSAudioIODevice::open (const BigInteger& inputChannelsWanted,

             const BigInteger& outputChannelsWanted,

             double targetSampleRate, int bufferSize)

{

    close();


    lastError.clear();

    preferredBufferSize = (bufferSize <= 0) ? getDefaultBufferSize() : bufferSize;


    //  xxx set up channel mapping


    activeOutputChans = outputChannelsWanted;

    activeOutputChans.setRange (2, activeOutputChans.getHighestBit(), false);

    numOutputChannels = activeOutputChans.countNumberOfSetBits();

    monoOutputChannelNumber = activeOutputChans.findNextSetBit (0);


    activeInputChans = inputChannelsWanted;

    activeInputChans.setRange (2, activeInputChans.getHighestBit(), false);

    numInputChannels = activeInputChans.countNumberOfSetBits();

    monoInputChannelNumber = activeInputChans.findNextSetBit (0);

    

    //set the audio session category before making the session active

    

    if (numInputChannels > 0 && audioInputIsAvailable)

    {

//        setSessionUInt32Property (kAudioSessionProperty_AudioCategory, kAudioSessionCategory_PlayAndRecord);

//        setSessionUInt32Property (kAudioSessionProperty_OverrideCategoryEnableBluetoothInput, 1);

        

        [avAudioSession setCategory: AVAudioSessionCategoryPlayAndRecord

                    withOptions: AVAudioSessionCategoryOptionMixWithOthers | AVAudioSessionCategoryOptionDefaultToSpeaker |AVAudioSessionCategoryOptionAllowBluetooth

                    error:  &err];

    }

    else

    {

        //setSessionUInt32Property (kAudioSessionProperty_AudioCategory, kAudioSessionCategory_MediaPlayback);

        [avAudioSession setCategory: AVAudioSessionCategoryPlayback

                 withOptions: AVAudioSessionCategoryOptionMixWithOthers

                 error:  &err];

    }

    

    [avAudioSession setActive: YES error:  &err];

    

    //route changes must now be handled by registering for a route change notification. however so far I haven't found any reason we need to actually do this here, so I'm not doing anything with these notifications


    //AudioSessionAddPropertyListener (kAudioSessionProperty_AudioRouteChange, routingChangedStatic, this);

    [(id)wrapper registerForRouteChangeNotification];


    // this should no longer be necessary since we already set AVAudioSessionCategoryOptionDefaultToSpeaker above

    //fixAudioRouteIfSetToReceiver();


    //setSessionFloat64Property (kAudioSessionProperty_PreferredHardwareSampleRate, targetSampleRate);

    [avAudioSession setPreferredSampleRate:targetSampleRate error:&err];


    updateDeviceInfo();


    //setSessionFloat32Property (kAudioSessionProperty_PreferredHardwareIOBufferDuration, preferredBufferSize / sampleRate);

    [avAudioSession setPreferredIOBufferDuration: preferredBufferSize/sampleRate error: &err];

    

    updateCurrentBufferSize();


    prepareFloatBuffers (actualBufferSize);


    isRunning = true;

    routingChanged (nullptr);  // creates and starts the AU


    lastError = audioUnit != 0 ? "" : "Couldn't open the device";

    return lastError;

}


void iOSAudioIODevice::close()

{

if (isRunning)

{

    isRunning = false;

    

    //setSessionUInt32Property (kAudioSessionProperty_AudioCategory, kAudioSessionCategory_MediaPlayback);

    

    [avAudioSession setCategory: AVAudioSessionCategoryPlayback

             withOptions: AVAudioSessionCategoryOptionMixWithOthers

                   error:  &err];

    

    //AudioSessionRemovePropertyListenerWithUserData (kAudioSessionProperty_AudioRouteChange, routingChangedStatic, this);

    

    [(id)wrapper release];

    

    if (audioUnit != 0)

    {

        AudioComponentInstanceDispose (audioUnit);

        audioUnit = 0;

    }

    

    //AudioSessionSetActive (false);

    [avAudioSession setActive: NO error:  &err];


}

}


bool iOSAudioIODevice::isOpen()                       { return isRunning; }


int iOSAudioIODevice::getCurrentBufferSizeSamples()   { return actualBufferSize; }

double iOSAudioIODevice::getCurrentSampleRate()       { return sampleRate; }

int iOSAudioIODevice::getCurrentBitDepth()            { return 16; }


BigInteger iOSAudioIODevice::getActiveOutputChannels() const    { return activeOutputChans; }

BigInteger iOSAudioIODevice::getActiveInputChannels() const     { return activeInputChans; }


int iOSAudioIODevice::getOutputLatencyInSamples()

{

    //return getLatency (kAudioSessionProperty_CurrentHardwareOutputLatency);

    double latency = avAudioSession.outputLatency;

    return roundToInt (latency * getCurrentSampleRate());

}

    

int iOSAudioIODevice::getInputLatencyInSamples()

{

    //return getLatency (kAudioSessionProperty_CurrentHardwareInputLatency);

    double latency = avAudioSession.inputLatency;

    return roundToInt (latency * getCurrentSampleRate());

}


//int iOSAudioIODevice::getLatency (AudioSessionPropertyID propID)

//{

//    Float32 latency = 0;

//    getSessionProperty (propID, latency);

//    return roundToInt (latency * getCurrentSampleRate());

//}


void iOSAudioIODevice::start (AudioIODeviceCallback* newCallback)

{

if (isRunning && callback != newCallback)

{

    if (newCallback != nullptr)

        newCallback->audioDeviceAboutToStart (this);

        

        const ScopedLock sl (callbackLock);

        callback = newCallback;

        }

}


void iOSAudioIODevice::stop()

{

if (isRunning)

{

    AudioIODeviceCallback* lastCallback;

    

    {

        const ScopedLock sl (callbackLock);

        lastCallback = callback;

        callback = nullptr;

    }

    

    if (lastCallback != nullptr)

        lastCallback->audioDeviceStopped();

        }

}


bool iOSAudioIODevice::isPlaying()            { return isRunning && callback != nullptr; }

String iOSAudioIODevice::getLastError()       { return lastError; }


bool iOSAudioIODevice::setAudioPreprocessingEnabled (bool enable)

{

//    return setSessionUInt32Property (kAudioSessionProperty_Mode, enable ? kAudioSessionMode_Default

//                                 : kAudioSessionMode_Measurement);

    

    return [avAudioSession setMode: enable ? AVAudioSessionModeDefault : AVAudioSessionModeMeasurement

                           error:  &err];

}

    

void iOSAudioIODevice::routingChanged (const NSNotification* notification)

{

    if (! isRunning)

        return;

        

    if (notification != nullptr)

    {

//        CFDictionaryRef routeChangeDictionary = (CFDictionaryRef) propertyValue;

//        CFNumberRef routeChangeReasonRef = (CFNumberRef) CFDictionaryGetValue (routeChangeDictionary,

//                                                                                CFSTR (kAudioSession_AudioRouteChangeKey_Reason));

//        

//        SInt32 routeChangeReason;

//        CFNumberGetValue (routeChangeReasonRef, kCFNumberSInt32Type, &routeChangeReason);

//            

//        if (routeChangeReason == kAudioSessionRouteChangeReason_OldDeviceUnavailable)

//        {

//            const ScopedLock sl (callbackLock);

//                

//            if (callback != nullptr)

//                callback->audioDeviceError ("Old device unavailable");

//        }

        

        //again, not doing anything here, but if you wanted to:

        

        NSDictionary *routeChangeDict = notification.userInfo;

        

        NSInteger routeChangeReason = [[routeChangeDict valueForKey:AVAudioSessionRouteChangeReasonKey] integerValue];

        

        switch (routeChangeReason) {

            case AVAudioSessionRouteChangeReasonUnknown:

                NSLog(@"routeChangeReason : AVAudioSessionRouteChangeReasonUnknown");

                break;

                

            case AVAudioSessionRouteChangeReasonNewDeviceAvailable:

                // a headset was added or removed

                NSLog(@"routeChangeReason : AVAudioSessionRouteChangeReasonNewDeviceAvailable");

                break;

                

            case AVAudioSessionRouteChangeReasonOldDeviceUnavailable:

                // a headset was added or removed

                NSLog(@"routeChangeReason : AVAudioSessionRouteChangeReasonOldDeviceUnavailable");

                break;

                

            case AVAudioSessionRouteChangeReasonCategoryChange:

                // called at start - also when other audio wants to play

                NSLog(@"routeChangeReason : AVAudioSessionRouteChangeReasonCategoryChange");//AVAudioSessionRouteChangeReasonCategoryChange

                break;

                

            case AVAudioSessionRouteChangeReasonOverride:

                NSLog(@"routeChangeReason : AVAudioSessionRouteChangeReasonOverride");

                break;

                

            case AVAudioSessionRouteChangeReasonWakeFromSleep:

                NSLog(@"routeChangeReason : AVAudioSessionRouteChangeReasonWakeFromSleep");

                break;

                

            case AVAudioSessionRouteChangeReasonNoSuitableRouteForCategory:

                NSLog(@"routeChangeReason : AVAudioSessionRouteChangeReasonNoSuitableRouteForCategory");

                break;

                

            default:

                break;

        }

        

        if (routeChangeReason == AVAudioSessionRouteChangeReasonOldDeviceUnavailable)

        {

            const ScopedLock sl (callbackLock);

            

            if (callback != nullptr)

                callback->audioDeviceError ("Old device unavailable");

        }

    }

        

    updateDeviceInfo();

    createAudioUnit();

        

    //AudioSessionSetActive (true);

    [avAudioSession setActive: YES error:&err];

    

    if (audioUnit != 0)

    {

        UInt32 formatSize = sizeof (format);

        AudioUnitGetProperty (audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &format, &formatSize);

            

        updateCurrentBufferSize();

        AudioOutputUnitStart (audioUnit);

    }

}


//private functions

//---------------------


void iOSAudioIODevice::prepareFloatBuffers (int bufferSize)

{

    if (numInputChannels + numOutputChannels > 0)

    {

        floatData.setSize (numInputChannels + numOutputChannels, bufferSize);

        zeromem (inputChannels, sizeof (inputChannels));

        zeromem (outputChannels, sizeof (outputChannels));

        

        for (int i = 0; i < numInputChannels; ++i)

            inputChannels[i] = floatData.getWritePointer (i);

        

        for (int i = 0; i < numOutputChannels; ++i)

            outputChannels[i] = floatData.getWritePointer (i + numInputChannels);

    }

}


//==================================================================================================

OSStatus iOSAudioIODevice::process (AudioUnitRenderActionFlags* flags, const AudioTimeStamp* time,

                  const UInt32 numFrames, AudioBufferList* data)

{

    OSStatus err = noErr;

    

    if (audioInputIsAvailable && numInputChannels > 0)

        err = AudioUnitRender (audioUnit, flags, time, 1, numFrames, data);

    

    const ScopedLock sl (callbackLock);

    

    if (callback != nullptr)

    {

        if ((int) numFrames > floatData.getNumSamples())

            prepareFloatBuffers ((int) numFrames);

        

        if (audioInputIsAvailable && numInputChannels > 0)

        {

            short* shortData = (short*) data->mBuffers[0].mData;

            

            if (numInputChannels >= 2)

            {

                for (UInt32 i = 0; i < numFrames; ++i)

                {

                    inputChannels[0][i] = *shortData++ * (1.0f / 32768.0f);

                    inputChannels[1][i] = *shortData++ * (1.0f / 32768.0f);

                }

            }

            else

            {

                if (monoInputChannelNumber > 0)

                    ++shortData;

                

                for (UInt32 i = 0; i < numFrames; ++i)

                {

                    inputChannels[0][i] = *shortData++ * (1.0f / 32768.0f);

                    ++shortData;

                }

            }

        }

        else

        {

            for (int i = numInputChannels; --i >= 0;)

                zeromem (inputChannels[i], sizeof (float) * numFrames);

        }

        

        callback->audioDeviceIOCallback ((const float**) inputChannels, numInputChannels,

                                         outputChannels, numOutputChannels, (int) numFrames);

        

        short* shortData = (short*) data->mBuffers[0].mData;

        int n = 0;

        

        if (numOutputChannels >= 2)

        {

            for (UInt32 i = 0; i < numFrames; ++i)

            {

                shortData [n++] = (short) (outputChannels[0][i] * 32767.0f);

                shortData [n++] = (short) (outputChannels[1][i] * 32767.0f);

            }

        }

        else if (numOutputChannels == 1)

        {

            for (UInt32 i = 0; i < numFrames; ++i)

            {

                const short s = (short) (outputChannels[monoOutputChannelNumber][i] * 32767.0f);

                shortData [n++] = s;

                shortData [n++] = s;

            }

        }

        else

        {

            zeromem (data->mBuffers[0].mData, 2 * sizeof (short) * numFrames);

        }

    }

    else

    {

        zeromem (data->mBuffers[0].mData, 2 * sizeof (short) * numFrames);

    }

    

    return err;

}


void iOSAudioIODevice::updateDeviceInfo()

{

//    getSessionProperty (kAudioSessionProperty_CurrentHardwareSampleRate, sampleRate);

//    getSessionProperty (kAudioSessionProperty_AudioInputAvailable, audioInputIsAvailable);

    

    sampleRate = avAudioSession.sampleRate;

    audioInputIsAvailable = avAudioSession.inputAvailable;

}


void iOSAudioIODevice::updateCurrentBufferSize()

{

    Float32 bufferDuration = sampleRate > 0 ? (Float32) (preferredBufferSize / sampleRate) : 0.0f;

    //getSessionProperty (kAudioSessionProperty_CurrentHardwareIOBufferDuration, bufferDuration);

    bufferDuration = avAudioSession.IOBufferDuration;

    actualBufferSize = (int) (sampleRate * bufferDuration + 0.5);

}



//==================================================================================================


void  iOSAudioIODevice::interruptionListener (const UInt32 interruptionType)

{

    if (interruptionType == kAudioSessionBeginInterruption)

    {

        isRunning = false;

        AudioOutputUnitStop (audioUnit);

        //AudioSessionSetActive (false);

        [avAudioSession setActive: NO error:&err];

        

        const ScopedLock sl (callbackLock);

        

        if (callback != nullptr)

            callback->audioDeviceError ("iOS audio session interruption");

    }

    

    if (interruptionType == kAudioSessionEndInterruption)

    {

        isRunning = true;

        //AudioSessionSetActive (true);

        [avAudioSession setActive: YES error:&err];

        

        AudioOutputUnitStart (audioUnit);

        

        const ScopedLock sl (callbackLock);

        

        if (callback != nullptr)

            callback->audioDeviceError ("iOS audio session resumed");

    }

}


//==================================================================================================


void iOSAudioIODevice::resetFormat (const int numChannels) noexcept

{

    zerostruct (format);

    format.mFormatID = kAudioFormatLinearPCM;

    format.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;

    format.mBitsPerChannel = 8 * sizeof (short);

    format.mChannelsPerFrame = (UInt32) numChannels;

    format.mFramesPerPacket = 1;

    format.mBytesPerFrame = format.mBytesPerPacket = (UInt32) numChannels * sizeof (short);

}


bool iOSAudioIODevice::createAudioUnit()

{

    if (audioUnit != 0)

    {

        AudioComponentInstanceDispose (audioUnit);

        audioUnit = 0;

    }

    

    resetFormat (2);

    

    AudioComponentDescription desc;

    desc.componentType = kAudioUnitType_Output;

    desc.componentSubType = kAudioUnitSubType_RemoteIO;

    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    desc.componentFlags = 0;

    desc.componentFlagsMask = 0;

    

    AudioComponent comp = AudioComponentFindNext (0, &desc);

    AudioComponentInstanceNew (comp, &audioUnit);

    

    if (audioUnit == 0)

        return false;

    

    if (numInputChannels > 0)

    {

        const UInt32 one = 1;

        AudioUnitSetProperty (audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof (one));

    }

    

    {

        AudioChannelLayout layout;

        layout.mChannelBitmap = 0;

        layout.mNumberChannelDescriptions = 0;

        layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

        AudioUnitSetProperty (audioUnit, kAudioUnitProperty_AudioChannelLayout, kAudioUnitScope_Input,  0, &layout, sizeof (layout));

        AudioUnitSetProperty (audioUnit, kAudioUnitProperty_AudioChannelLayout, kAudioUnitScope_Output, 0, &layout, sizeof (layout));

    }

    

    {

        AURenderCallbackStruct inputProc;

        inputProc.inputProc = processStatic;

        inputProc.inputProcRefCon = this;

        AudioUnitSetProperty (audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &inputProc, sizeof (inputProc));

    }

    

    AudioUnitSetProperty (audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input,  0, &format, sizeof (format));

    AudioUnitSetProperty (audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &format, sizeof (format));

    

    AudioUnitInitialize (audioUnit);

    

    [(id)wrapper activateAudiobus: audioUnit];

    

    return true;

}

    

//void iOSAudioIODevice::fixAudioRouteIfSetToReceiver()

//{

//    CFStringRef audioRoute = 0;

//    if (getSessionProperty (kAudioSessionProperty_AudioRoute, audioRoute) == noErr)

//    {

//        NSString* route = (NSString*) audioRoute;

//            

//        //DBG ("audio route: " + nsStringToJuce (route));

//            

//        if ([route hasPrefix: @"Receiver"])

//            setSessionUInt32Property (kAudioSessionProperty_OverrideAudioRoute, kAudioSessionOverrideAudioRoute_Speaker);

//            

//        CFRelease (audioRoute);

//    }

//}

    



//==============================================================================

class iOSAudioIODeviceType  : public AudioIODeviceType

{

public:

    iOSAudioIODeviceType()  : AudioIODeviceType ("iOS Audio") {}


    void scanForDevices() {}

    StringArray getDeviceNames (bool /*wantInputNames*/) const       { return StringArray ("iOS Audio"); }

    int getDefaultDeviceIndex (bool /*forInput*/) const              { return 0; }

    int getIndexOfDevice (AudioIODevice* d, bool /*asInput*/) const  { return d != nullptr ? 0 : -1; }

    bool hasSeparateInputsAndOutputs() const                         { return false; }


    AudioIODevice* createDevice (const String& outputDeviceName, const String& inputDeviceName)

    {

        if (outputDeviceName.isNotEmpty() || inputDeviceName.isNotEmpty())

            return new iOSAudioIODevice (outputDeviceName.isNotEmpty() ? outputDeviceName

                                                                       : inputDeviceName);


        return nullptr;

    }


private:

    JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (iOSAudioIODeviceType)

};


//==============================================================================

AudioIODeviceType* AudioIODeviceType::createAudioIODeviceType_iOSAudio()

{

    return new iOSAudioIODeviceType();

}

 

Hi ndika,

can you post the code of AudiobusStatus too? I'd like to check your stuff. :-)

Cheers

Great Job ndika! Got it working too.

A quick overview of what I have done to get it working.

------------------------------------------------------------------

0. download the Audiobus SDK (registration required)

1. add extra frameworks in Introjucer: AVFoundation, Security

2. add the AudioBus headers in Introjucer

3. link: libAudiobus.a

4. add ndika's implementation native/juce_ios_Audio.mm and native/juce_ios_Audio.h (http://www.juce.com/forum/topic/audiobus?page=2#comment-308106)
5. replace in juce_audio_devices.mm

#include "native/juce_ios_Audio.cpp"

with

#include "native/juce_ios_Audio.mm"


6. comment out in ndika's native/juce_ios_Audio.mm

// #include "AudiobusStatus.h"
...
// AudiobusStatus* statusObject = AudiobusStatus::getInstance();
// statusObject->setConnected (audiobusController.audiobusConnected);
// statusObject->setRunning (audiobusController.audiobusAppRunning);

7. enable IAA in your Xcode project (if not already done) (audio background mode will be enabled via point 8)

8. edit the plist in Introjucer (or Xcode)

NOTE: 4YOU and 4TYP have to be 4 characters long!
NOTE: use type aurg (recommended) (see: https://developer.audiob.us/doc/_integration-_guide.html#Create-Sender-Port)
NOTE: YOU or YOUR is your company
NOTE: replace APP with your app name

<plist version="1.0">
<dict>
    <key>CFBundleDisplayName</key>
    <string>APP</string>
    <key>AudioComponents</key>
    <array>
        <dict>
            <key>manufacturer</key>
            <string>4YOU</string>
            <key>name</key>
            <string>YOUR: App</string>
            <key>subtype</key>
            <string>4TYP</string>
            <key>type</key>
            <string>aurg</string>
            <key>version</key>
            <integer>1</integer>
        </dict>
    </array>
    <key>CFBundleURLTypes</key>
    <array>
        <dict>
            <key>CFBundleTypeRole</key>
            <string>Editor</string>
            <key>CFBundleURLName</key>
            <string>com.YOUR.APP</string>
            <key>CFBundleURLSchemes</key>
            <array>
                <string>App.audiobus</string>
            </array>
        </dict>
    </array>
    <key>UIBackgroundModes</key>
    <array>
        <string>audio</string>
    </array>
</dict>
</plist>

9. compile your app (it will not work yet - you will get errors (code 1))

10. Register your APP @ AudioBus: https://developer.audiob.us/doc/_integration-_guide.html#Register-App (use the plist from your compiled app!)

11. wait for the Audiobus response (a few hours on a Saturday)

12. edit ndika's native/juce_ios_Audio.mm

self.audiobusController = [[ABAudiobusController alloc] initWithApiKey:@"THIS KEY YOU HAVE RECEIVED FROM AUDIOBUS"]; //use your API key here

ABSenderPort *sender = [[ABSenderPort alloc] initWithName:@"YOU: App"
                                                    title:NSLocalizedString(@"YOU: AudioBus", @"")
                                audioComponentDescription:(AudioComponentDescription) {
                                    .componentType = kAudioUnitType_RemoteGenerator,
                                    .componentSubType = '4TYP',
                                    .componentManufacturer = '4YOU' }
                                                audioUnit:outputUnit];

13. compile again.

Your app should work with AudioBus now.

Please correct me if something is wrong or I've forgotten something.

 

Thanks ndika!!!

 

EDIT: Add CFBundleDisplayName to plist.

Thanks to both of you! You saved me a lot of time and hair loss. Works great.

I was experiencing a crash issue: when using the AudioDeviceSelectorComponent, if you change a setting twice (for example, change the buffer size twice), you'll crash.

I'm a bit of an Obj-C newb, but it seems to be a memory issue with the wrapper. So I moved the [(id)wrapper release] call from the close() method to the end of the destructor. No more crash.
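In other words, roughly:

    iOSAudioIODevice::~iOSAudioIODevice()
    {
        getSessionHolder().activeDevices.removeFirstMatchingValue (this);
        close();                   // close() no longer releases the wrapper
        [(id) wrapper release];    // released once, here, instead
    }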

Hi all,

I spent a long time trying to make IAA audio work properly with JUCE, though I got Audiobus working easily from the start. Audiobus actually integrates IAA, so there is no reason it should not work. Here is what I did.

At the start, I was passing the AudioUnit out of JUCE to reach my application code through an abstraction, in order to integrate Audiobus without coding it into JUCE. Audiobus just worked fine that way. But I still had a big problem with IAA: no host was able to launch my application. I tried maybe 5 times to redo my code, step by step following the Audiobus documentation, and nothing made a difference.

Then I integrated @ndika's code. It worked with Audiobus, and IAA host apps were able to launch my application. Then I mixed his code and mine and figured out that the problem actually comes from the fact that JUCE uses deprecated session methods such as "setSessionUInt32Property". Replacing these with the new "[AVAudioSession sharedInstance]" methods fixed some IAA issues.
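For example, one of the replacements (the same pattern appears throughout the code below):

    //OLD setSessionUInt32Property (kAudioSessionProperty_AudioCategory, kAudioSessionCategory_MediaPlayback);
    [[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayback
                                     withOptions: AVAudioSessionCategoryOptionMixWithOthers
                                           error: &err];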

So at the moment I still have my Audiobus code outside JUCE, but with the "AVAudioSession" methods in the JUCE code. The behaviour is the same, except that having Audiobus outside JUCE makes it much easier to use triggers (play, rewind, rec). That was the reason I could not use @ndika's full code.

At the moment I still experience an issue with IAA though: when I am connected to an IAA host and I close the connection from the host (e.g. an IAA host app or GarageBand), there is no more audio in my app (no more callback, I assume). I am investigating this now - do you guys have the same issue? I guess we have to set the session active somewhere to make it work. I can unstick it by changing my buffer size, or anything else that re-initialises the audio in JUCE's "Audio Preferences" panel. I'll share whatever I find.
 

@ndika, thank you for sharing your code and bringing more light to this

@monotomy, nice summary, it really helps

@Jules, would it be possible to have this deprecated code upgraded in a future JUCE update?


Thank you,
Bastien


-----------------------------------------

Here is my modified juce code:

-----------------------------------------

    -- juce_AudioIODevice.h --
/*  You have access to this from your application; it is there to get the AudioUnit. It is virtual in juce_AudioIODevice.h but implemented in juce_ios_Audio.mm */

    /** Returns the default buffer-size to use.
        @returns a number of samples
        @see getAvailableBufferSizes
    */
    virtual int getDefaultBufferSize() = 0;

    virtual void *getNativeAudioEngine() const //<<-----
    {
      return NULL;
    }
 
    //==============================================================================
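Hypothetical usage from the application side (a sketch, in an Obj-C++ .mm file; "deviceManager" is your app's AudioDeviceManager, and "myAudiobusWrapper" is whatever object of yours owns the ABAudiobusController):

    // app code, once the audio device is open:
    if (juce::AudioIODevice* device = deviceManager.getCurrentAudioDevice())
        if (void* unitPtr = device->getNativeAudioEngine())
            [myAudiobusWrapper activateAudiobus: *static_cast<AudioUnit*> (unitPtr)];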

 

-----------------------------------------------------------------------

-- juce_ios_Audio.mm --
//OLD comments mark replaced, deprecated JUCE code

//AB comments mark Audiobus code; uncomment these if you want to integrate it into JUCE

//ADD comments mark code I added to pass the AudioUnit to your application

 


/*

This file is part of the JUCE library.
Copyright © 2013 - Raw Material Software Ltd.

Permission is granted to use this software under the terms of either:
a) the GPL v2 (or any later version)
b) the Affero GPL v3

Details of these licenses can be found at: www.gnu.org/licenses

JUCE is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
A PARTICULAR PURPOSE. See the GNU General Public License for more details.


To release a closed-source product which uses JUCE, commercial licenses are
available: visit www.juce.com for more information.

==============================================================================
*/

#import <AVFoundation/AVAudioSession.h>
#import <AudioToolbox/AudioToolbox.h>

}//out of namespace juce; Audiobus must be implemented at global scope

//AB #include "juce_ios_AudioBus.mm"

namespace juce
{

class iOSAudioIODevice : public AudioIODevice
{
public:
iOSAudioIODevice (const String& deviceName)
: AudioIODevice (deviceName, "Audio"),
actualBufferSize (0),
isRunning (false),
audioUnit (0),
callback (nullptr),
floatData (1, 2)
{
//AB AudioBusWrapper* newAudioBusWrapper = [[AudioBusWrapper alloc] init];
//AB [newAudioBusWrapper retain];
//AB audioBusWrapper = newAudioBusWrapper;

    getSessionHolder().activeDevices.add (this);

    numInputChannels = 2;
    numOutputChannels = 2;
    preferredBufferSize = 0;

    updateDeviceInfo();
}

~iOSAudioIODevice()
{
    getSessionHolder().activeDevices.removeFirstMatchingValue (this);
    close();
}

StringArray getOutputChannelNames() override
{
    StringArray s;
    s.add ("Left");
    s.add ("Right");
    return s;
}

StringArray getInputChannelNames() override
{
    StringArray s;
    if (audioInputIsAvailable)
    {
        s.add ("Left");
        s.add ("Right");
    }
    return s;
}

Array<double> getAvailableSampleRates() override
{
    // can't find a good way to actually ask the device for which of these it supports..
    static const double rates[] = { 8000.0, 16000.0, 22050.0, 32000.0, 44100.0, 48000.0 };
    return Array<double> (rates, numElementsInArray (rates));
}

Array<int> getAvailableBufferSizes() override
{
    Array<int> r;

    for (int i = 6; i < 12; ++i)
        r.add (1 << i);

    return r;
}

int getDefaultBufferSize() override         { return 1024; }

String open (const BigInteger& inputChannelsWanted,
             const BigInteger& outputChannelsWanted,
             double targetSampleRate, int bufferSize) override
{
    close();

    lastError.clear();
    preferredBufferSize = (bufferSize <= 0) ? getDefaultBufferSize() : bufferSize;

    //  xxx set up channel mapping

    activeOutputChans = outputChannelsWanted;
    activeOutputChans.setRange (2, activeOutputChans.getHighestBit(), false);
    numOutputChannels = activeOutputChans.countNumberOfSetBits();
    monoOutputChannelNumber = activeOutputChans.findNextSetBit (0);

    activeInputChans = inputChannelsWanted;
    activeInputChans.setRange (2, activeInputChans.getHighestBit(), false);
    numInputChannels = activeInputChans.countNumberOfSetBits();
    monoInputChannelNumber = activeInputChans.findNextSetBit (0);

//AB [(id)audioBusWrapper registerForRouteChangeNotification];

    if (numInputChannels > 0 && audioInputIsAvailable)
    {
      [[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayAndRecord
                      withOptions:  AVAudioSessionCategoryOptionMixWithOthers |
                                    AVAudioSessionCategoryOptionDefaultToSpeaker |
                                    AVAudioSessionCategoryOptionAllowBluetooth
                            error:  &err];

// setSessionUInt32Property (kAudioSessionProperty_AudioCategory, kAudioSessionCategory_PlayAndRecord);
// setSessionUInt32Property (kAudioSessionProperty_OverrideCategoryEnableBluetoothInput, 1);
    }
    else
    {
// setSessionUInt32Property (kAudioSessionProperty_AudioCategory, kAudioSessionCategory_MediaPlayback);

      [[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayback
                      withOptions: AVAudioSessionCategoryOptionMixWithOthers
                            error:  &err];
    }

    [[AVAudioSession sharedInstance] setActive: YES error:  &err];

   // AudioSessionAddPropertyListener (kAudioSessionProperty_AudioRouteChange, routingChangedStatic, this);

   // fixAudioRouteIfSetToReceiver();

   // setSessionFloat64Property (kAudioSessionProperty_PreferredHardwareSampleRate, targetSampleRate);
    [[AVAudioSession sharedInstance]  setPreferredSampleRate:targetSampleRate error:&err];

    updateDeviceInfo();

  //  setSessionFloat32Property (kAudioSessionProperty_PreferredHardwareIOBufferDuration, preferredBufferSize / sampleRate);

    [[AVAudioSession sharedInstance] setPreferredIOBufferDuration: preferredBufferSize/sampleRate error: &err];
  
  
  
  
  updateCurrentBufferSize();

    prepareFloatBuffers (actualBufferSize);

    isRunning = true;
    routingChanged (nullptr);  // creates and starts the AU

    lastError = audioUnit != 0 ? "" : "Couldn't open the device";
    return lastError;
}

//ADD Returns an AudioUnit ptr to be able to integrate Audiobus in the app code (this is called from the application code)
virtual void* getNativeAudioEngine() const
{
    return (void*) &audioUnit;
}

void close() override
{
    if (isRunning)
    {
        isRunning = false;

//OLD setSessionUInt32Property (kAudioSessionProperty_AudioCategory, kAudioSessionCategory_MediaPlayback);
//OLD
//OLD AudioSessionRemovePropertyListenerWithUserData (kAudioSessionProperty_AudioRouteChange, routingChangedStatic, this);

        [[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayback
                                         withOptions: AVAudioSessionCategoryOptionMixWithOthers
                                               error: &err];

        [[AVAudioSession sharedInstance] setActive: NO error: &err];

        if (audioUnit != 0)
        {
            AudioComponentInstanceDispose (audioUnit);
            audioUnit = 0;
        }
    }

//AB [(id)audioBusWrapper release];
}

bool isOpen() override                       { return isRunning; }

int getCurrentBufferSizeSamples() override   { return actualBufferSize; }
double getCurrentSampleRate() override       { return sampleRate; }
int getCurrentBitDepth() override            { return 16; }

BigInteger getActiveOutputChannels() const override    { return activeOutputChans; }
BigInteger getActiveInputChannels() const override     { return activeInputChans; }

int getOutputLatencyInSamples() override
{
  double latency = [AVAudioSession sharedInstance].outputLatency;
  return roundToInt (latency * getCurrentSampleRate());
  //OLDreturn getLatency (kAudioSessionProperty_CurrentHardwareOutputLatency);
}
int getInputLatencyInSamples() override
{
  double latency = [AVAudioSession sharedInstance].inputLatency;
  return roundToInt (latency * getCurrentSampleRate());

//OLD return getLatency (kAudioSessionProperty_CurrentHardwareInputLatency);
}

//OLD int getLatency (AudioSessionPropertyID propID)
// {
// Float32 latency = 0;
// getSessionProperty (propID, latency);
// return roundToInt (latency * getCurrentSampleRate());
// }

void start (AudioIODeviceCallback* newCallback) override
{
    if (isRunning && callback != newCallback)
    {
        if (newCallback != nullptr)
            newCallback->audioDeviceAboutToStart (this);

        const ScopedLock sl (callbackLock);
        callback = newCallback;
    }
}

void stop() override
{
    if (isRunning)
    {
        AudioIODeviceCallback* lastCallback;

        {
            const ScopedLock sl (callbackLock);
            lastCallback = callback;
            callback = nullptr;
        }

        if (lastCallback != nullptr)
            lastCallback->audioDeviceStopped();
    }
}

bool isPlaying() override            { return isRunning && callback != nullptr; }
String getLastError() override       { return lastError; }

bool setAudioPreprocessingEnabled (bool enable) override
{
  return [[AVAudioSession sharedInstance] setMode: enable ? AVAudioSessionModeDefault : AVAudioSessionModeMeasurement
                           error:  &err];
  //OLD return setSessionUInt32Property (kAudioSessionProperty_Mode, enable ? kAudioSessionMode_Default
  //                                                                      : kAudioSessionMode_Measurement);
}

private:
//==================================================================================================
CriticalSection callbackLock;
Float64 sampleRate;
int numInputChannels, numOutputChannels;
int preferredBufferSize, actualBufferSize;
bool isRunning;
String lastError;

AudioStreamBasicDescription format;
AudioUnit audioUnit;

//AB void* audioBusWrapper;
NSError* err;
UInt32 audioInputIsAvailable;
AudioIODeviceCallback* callback;
BigInteger activeOutputChans, activeInputChans;

AudioSampleBuffer floatData;
float* inputChannels[3];
float* outputChannels[3];
bool monoInputChannelNumber, monoOutputChannelNumber;

void prepareFloatBuffers (int bufferSize)
{
    if (numInputChannels + numOutputChannels > 0)
    {
        floatData.setSize (numInputChannels + numOutputChannels, bufferSize);
        zeromem (inputChannels, sizeof (inputChannels));
        zeromem (outputChannels, sizeof (outputChannels));

        for (int i = 0; i < numInputChannels; ++i)
            inputChannels[i] = floatData.getWritePointer (i);

        for (int i = 0; i < numOutputChannels; ++i)
            outputChannels[i] = floatData.getWritePointer (i + numInputChannels);
    }
}

//==================================================================================================
OSStatus process (AudioUnitRenderActionFlags* flags, const AudioTimeStamp* nativeTimestamp,
                  const UInt32 numFrames, AudioBufferList* data)
{
    OSStatus err = noErr;

    if (audioInputIsAvailable && numInputChannels > 0)
        err = AudioUnitRender (audioUnit, flags, nativeTimestamp, 1, numFrames, data);

    const ScopedLock sl (callbackLock);

    if (callback != nullptr)
    {
        if ((int) numFrames > floatData.getNumSamples())
            prepareFloatBuffers ((int) numFrames);

        if (audioInputIsAvailable && numInputChannels > 0)
        {
            short* shortData = (short*) data->mBuffers[0].mData;

            if (numInputChannels >= 2)
            {
                for (UInt32 i = 0; i < numFrames; ++i)
                {
                    inputChannels[0][i] = *shortData++ * (1.0f / 32768.0f);
                    inputChannels[1][i] = *shortData++ * (1.0f / 32768.0f);
                }
            }
            else
            {
                if (monoInputChannelNumber > 0)
                    ++shortData;

                for (UInt32 i = 0; i < numFrames; ++i)
                {
                    inputChannels[0][i] = *shortData++ * (1.0f / 32768.0f);
                    ++shortData;
                }
            }
        }
        else
        {
            for (int i = numInputChannels; --i >= 0;)
                zeromem (inputChannels[i], sizeof (float) * numFrames);
        }

        callback->audioDeviceIOCallback ((const float**) inputChannels, numInputChannels,
                                         outputChannels, numOutputChannels, (int) numFrames, (void*)nativeTimestamp);

        short* shortData = (short*) data->mBuffers[0].mData;
        int n = 0;

        if (numOutputChannels >= 2)
        {
            for (UInt32 i = 0; i < numFrames; ++i)
            {
                shortData [n++] = (short) (outputChannels[0][i] * 32767.0f);
                shortData [n++] = (short) (outputChannels[1][i] * 32767.0f);
            }
        }
        else if (numOutputChannels == 1)
        {
            for (UInt32 i = 0; i < numFrames; ++i)
            {
                const short s = (short) (outputChannels[monoOutputChannelNumber][i] * 32767.0f);
                shortData [n++] = s;
                shortData [n++] = s;
            }
        }
        else
        {
            zeromem (data->mBuffers[0].mData, 2 * sizeof (short) * numFrames);
        }
    }
    else
    {
        zeromem (data->mBuffers[0].mData, 2 * sizeof (short) * numFrames);
    }

    return err;
}

void updateDeviceInfo()
{
 //OLD   getSessionProperty (kAudioSessionProperty_CurrentHardwareSampleRate, sampleRate);
 //OLD   getSessionProperty (kAudioSessionProperty_AudioInputAvailable, audioInputIsAvailable);
  sampleRate = [AVAudioSession sharedInstance].sampleRate;
  audioInputIsAvailable = [AVAudioSession sharedInstance].inputAvailable;
}

void updateCurrentBufferSize()
{
  Float32 bufferDuration = sampleRate > 0 ? (Float32) (preferredBufferSize / sampleRate) : 0.0f;
  //OLD getSessionProperty (kAudioSessionProperty_CurrentHardwareIOBufferDuration, bufferDuration);
  bufferDuration = [AVAudioSession sharedInstance].IOBufferDuration;
  actualBufferSize = (int) (sampleRate * bufferDuration + 0.5);
}

void routingChanged (const void* propertyValue)
{
    if (! isRunning)
        return;

    if (propertyValue != nullptr)
    {
        CFDictionaryRef routeChangeDictionary = (CFDictionaryRef) propertyValue;
        CFNumberRef routeChangeReasonRef = (CFNumberRef) CFDictionaryGetValue (routeChangeDictionary,
                                                                               CFSTR (kAudioSession_AudioRouteChangeKey_Reason));

        SInt32 routeChangeReason;
        CFNumberGetValue (routeChangeReasonRef, kCFNumberSInt32Type, &routeChangeReason);

        if (routeChangeReason == kAudioSessionRouteChangeReason_OldDeviceUnavailable)
        {
            const ScopedLock sl (callbackLock);

            if (callback != nullptr)
                callback->audioDeviceError ("Old device unavailable");
        }
    }

    updateDeviceInfo();
    createAudioUnit();

    [[AVAudioSession sharedInstance] setActive: YES error:  &err];

    if (audioUnit != 0)
    {
        UInt32 formatSize = sizeof (format);
        AudioUnitGetProperty (audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &format, &formatSize);

        updateCurrentBufferSize();
        AudioOutputUnitStart (audioUnit);
    }
}

//==================================================================================================
struct AudioSessionHolder
{
    AudioSessionHolder()
    {
        AudioSessionInitialize (0, 0, interruptionListenerCallback, this);
    }

    static void interruptionListenerCallback (void* client, UInt32 interruptionType)
    {
        const Array <iOSAudioIODevice*>& activeDevices = static_cast <AudioSessionHolder*> (client)->activeDevices;

        for (int i = activeDevices.size(); --i >= 0;)
            activeDevices.getUnchecked(i)->interruptionListener (interruptionType);
    }

    Array <iOSAudioIODevice*> activeDevices;
};

static AudioSessionHolder& getSessionHolder()
{
    static AudioSessionHolder audioSessionHolder;
    return audioSessionHolder;
}

void interruptionListener (const UInt32 interruptionType)
{
    if (interruptionType == kAudioSessionBeginInterruption)
    {
        isRunning = false;
        AudioOutputUnitStop (audioUnit);
        [[AVAudioSession sharedInstance] setActive: NO error:  &err];

        const ScopedLock sl (callbackLock);

        if (callback != nullptr)
            callback->audioDeviceError ("iOS audio session interruption");
    }

    if (interruptionType == kAudioSessionEndInterruption)
    {
        isRunning = true;
        [[AVAudioSession sharedInstance] setActive: YES error:  &err];
        AudioOutputUnitStart (audioUnit);

        const ScopedLock sl (callbackLock);

        if (callback != nullptr)
            callback->audioDeviceError ("iOS audio session resumed");
    }
}

//==================================================================================================
static OSStatus processStatic (void* client, AudioUnitRenderActionFlags* flags, const AudioTimeStamp* time,
                               UInt32 /*busNumber*/, UInt32 numFrames, AudioBufferList* data)
{
    return static_cast<iOSAudioIODevice*> (client)->process (flags, time, numFrames, data);
}

static void routingChangedStatic (void* client, AudioSessionPropertyID, UInt32 /*inDataSize*/, const void* propertyValue)
{
    static_cast<iOSAudioIODevice*> (client)->routingChanged (propertyValue);
}

//==================================================================================================
void resetFormat (const int numChannels) noexcept
{
    zerostruct (format);
    format.mFormatID = kAudioFormatLinearPCM;
    format.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
    format.mBitsPerChannel = 8 * sizeof (short);
    format.mChannelsPerFrame = (UInt32) numChannels;
    format.mFramesPerPacket = 1;
    format.mBytesPerFrame = format.mBytesPerPacket = (UInt32) numChannels * sizeof (short);
}

bool createAudioUnit()
{
    if (audioUnit != 0)
    {
        AudioComponentInstanceDispose (audioUnit);
        audioUnit = 0;
    }

    resetFormat (2);

    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;

    AudioComponent comp = AudioComponentFindNext (0, &desc);
    AudioComponentInstanceNew (comp, &audioUnit);

    if (audioUnit == 0)
        return false;

    if (numInputChannels > 0)
    {
        const UInt32 one = 1;
        AudioUnitSetProperty (audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof (one));
    }

    {
        AudioChannelLayout layout;
        layout.mChannelBitmap = 0;
        layout.mNumberChannelDescriptions = 0;
        layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
        AudioUnitSetProperty (audioUnit, kAudioUnitProperty_AudioChannelLayout, kAudioUnitScope_Input,  0, &layout, sizeof (layout));
        AudioUnitSetProperty (audioUnit, kAudioUnitProperty_AudioChannelLayout, kAudioUnitScope_Output, 0, &layout, sizeof (layout));
    }

    {
        AURenderCallbackStruct inputProc;
        inputProc.inputProc = processStatic;
        inputProc.inputProcRefCon = this;
        AudioUnitSetProperty (audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &inputProc, sizeof (inputProc));
    }

    AudioUnitSetProperty (audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input,  0, &format, sizeof (format));
    AudioUnitSetProperty (audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &format, sizeof (format));

    AudioUnitInitialize (audioUnit);

//AB [(id)audioBusWrapper activateAudiobus: audioUnit];

    return true;
}

// If the routing is set to go through the receiver (i.e. the speaker, but quiet), this re-routes it
// to make it loud. Needed because by default when using an input + output, the output is kept quiet.

//OLD static void fixAudioRouteIfSetToReceiver()
// {
// CFStringRef audioRoute = 0;
// if (getSessionProperty (kAudioSessionProperty_AudioRoute, audioRoute) == noErr)
// {
// NSString* route = (NSString*) audioRoute;
//
// //DBG ("audio route: " + nsStringToJuce (route));
//
// if ([route hasPrefix: @"Receiver"])
// setSessionUInt32Property (kAudioSessionProperty_OverrideAudioRoute, kAudioSessionOverrideAudioRoute_Speaker);
//
// CFRelease (audioRoute);
// }
// }

// template <typename Type>
// static OSStatus getSessionProperty (AudioSessionPropertyID propID, Type& result) noexcept
// {
// UInt32 valueSize = sizeof (result);
// return AudioSessionGetProperty (propID, &valueSize, &result);
// }
//
// static bool setSessionUInt32Property (AudioSessionPropertyID propID, UInt32 v) noexcept { return AudioSessionSetProperty (propID, sizeof (v), &v) == kAudioSessionNoError; }
// static bool setSessionFloat32Property (AudioSessionPropertyID propID, Float32 v) noexcept { return AudioSessionSetProperty (propID, sizeof (v), &v) == kAudioSessionNoError; }
// static bool setSessionFloat64Property (AudioSessionPropertyID propID, Float64 v) noexcept { return AudioSessionSetProperty (propID, sizeof (v), &v) == kAudioSessionNoError; }

JUCE_DECLARE_NON_COPYABLE (iOSAudioIODevice)

};

//==============================================================================
class iOSAudioIODeviceType : public AudioIODeviceType
{
public:
iOSAudioIODeviceType() : AudioIODeviceType ("iOS Audio") {}

void scanForDevices() {}
StringArray getDeviceNames (bool /*wantInputNames*/) const       { return StringArray ("iOS Audio"); }
int getDefaultDeviceIndex (bool /*forInput*/) const              { return 0; }
int getIndexOfDevice (AudioIODevice* d, bool /*asInput*/) const  { return d != nullptr ? 0 : -1; }
bool hasSeparateInputsAndOutputs() const                         { return false; }

AudioIODevice* createDevice (const String& outputDeviceName, const String& inputDeviceName)
{
    if (outputDeviceName.isNotEmpty() || inputDeviceName.isNotEmpty())
        return new iOSAudioIODevice (outputDeviceName.isNotEmpty() ? outputDeviceName
                                                                   : inputDeviceName);

    return nullptr;
}

private:
JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (iOSAudioIODeviceType)
};

//==============================================================================
AudioIODeviceType* AudioIODeviceType::createAudioIODeviceType_iOSAudio()
{
return new iOSAudioIODeviceType();
}

 

-----------------------------------------------------------------------

 

-- juce_ios_AudioBus.mm --

Create this file alongside juce_ios_Audio.mm if you're integrating it into JUCE; this is the Audiobus code.

//AUDIOBUS

#import "Audiobus.h"

@interface AudioBusWrapper : NSObject
{
  ABSenderPort *audiobusOutput;
}
@property (readonly) ABSenderPort* audiobusOutput;
@property (strong, nonatomic) ABAudiobusController* audiobusController;

- (void)registerForRouteChangeNotification;

-(void)observeValueForKeyPath:(NSString *)keyPath
                     ofObject:(id)object
                       change:(NSDictionary *)change
                      context:(void *)context;

- (void)routeChange:(NSNotification*)notification;

- (void)activateAudiobus:(AudioUnit)outputUnit;

@end

//-------------

static void * kAudiobusRunningOrConnectedChanged = &kAudiobusRunningOrConnectedChanged;

@implementation AudioBusWrapper

@synthesize audiobusOutput;

@synthesize audiobusController;

- (id) init
{
  if ((self = [super init]) != nil)
  {
    
  };
  return self;
}

- (void)dealloc
{
  [audiobusController removeObserver:self forKeyPath:@"connected"];
  [audiobusController removeObserver:self forKeyPath:@"audiobusAppRunning"];
  [super dealloc];
}

- (void) registerForRouteChangeNotification
{
  [[NSNotificationCenter defaultCenter] addObserver:self
                                           selector:@selector(routeChange:)
                                               name:AVAudioSessionRouteChangeNotification
                                             object:nil];
}

- (void)routeChange:(NSNotification*)notification {
  // It doesn't appear Juce needs to do anything with routing changes, so I haven't bothered with this yet:
  //owner->routingChanged (notification);
}


-(void)observeValueForKeyPath:(NSString *)keyPath
                     ofObject:(id)object
                       change:(NSDictionary *)change
                      context:(void *)context {

  if ( context == kAudiobusRunningOrConnectedChanged ) {

    // I created this AudiobusStatus singleton so that I can easily check audiobus' status from
    // elsewhere in the program. I need to check the Audiobus connection status when going into
    // the background, for example. Could probably find a more elegant way to do this.
    // AudiobusStatus* statusObject = AudiobusStatus::getInstance();
    // statusObject->setConnected (audiobusController.audiobusConnected);
    // statusObject->setRunning (audiobusController.audiobusAppRunning);

    // just testing
    if (!audiobusController.audiobusAppRunning) {
      // Audiobus has quit. Time to sleep.
      NSLog(@"Audiobus app has closed");
    }

    if (!audiobusController.audiobusConnected) {
      NSLog(@"App disconnected from Audiobus");
    }
  }
  else
  {
    [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
  }
}

- (void)activateAudiobus:(AudioUnit)outputUnit;
{
  self.audiobusController = [[ABAudiobusController alloc] initWithApiKey:@"API KEY HERE"]; // use your API key here

  self.audiobusController.connectionPanelPosition = ABConnectionPanelPositionRight; // choose where you want the Audiobus navigation widget to show up

  // port information needs to match the information entered into the .plist (see audiobus integration guide)
  ABSenderPort *sender = [[ABSenderPort alloc] initWithName:@"AppName"
                                                      title:NSLocalizedString(@"AppName: Output", @"")
                                  audioComponentDescription:(AudioComponentDescription) {
                                      .componentType = kAudioUnitType_RemoteGenerator,
                                      .componentSubType = 'aout', // Note single quotes
                                      .componentManufacturer = 'juce' }
                                                  audioUnit:outputUnit];

  [audiobusController addSenderPort:sender];

  // would create filter or input ports here if I needed them

  // Watch the audiobusAppRunning and connected properties
  [audiobusController addObserver:self
                       forKeyPath:@"connected"
                          options:0
                          context:kAudiobusRunningOrConnectedChanged];

  [audiobusController addObserver:self
                       forKeyPath:@"audiobusAppRunning"
                          options:0
                          context:kAudiobusRunningOrConnectedChanged];
}

@end

 

-----------------------------------------------------------------------


-- juce_ios_Audio.cpp --
If you don't want to change the #include somewhere else, just make juce_ios_Audio.cpp include the .mm:


#include "juce_ios_Audio.mm"

-----------------------------------------------------------------------

 

 

At the moment I still experience an issue with IAA though: when I'm connected to an IAA host (e.g. GarageBand) and I close the connection from the host side, there is no more audio in my app (no more callback, I assume). I'm investigating this now; do you guys have the same issue? I guess we have to set the session active somewhere to make it work. I can unstick it by changing my buffer size or doing anything else that re-initialises the audio from JUCE's "Audio Preferences" panel. I'll share whatever I find.

I found a way to fix this. IAA automatically stops audio when you disconnect from the host, because your app is in the background at that point and IAA shuts down automatically, which is logical behaviour.
To fix it you just have to restart the audio engine when the app comes back to the foreground. Have a look at my code (in my case this isn't inside JUCE; it's in the Audiobus section of my app):

//this is my init method

 [[NSNotificationCenter defaultCenter] addObserver: self
                                           selector: @selector(appHasGoneInBackground)
                                               name: UIApplicationDidEnterBackgroundNotification
                                             object: nil];
 
  [[NSNotificationCenter defaultCenter] addObserver: self
                                           selector: @selector(appHasGoneForeground)
                                               name: UIApplicationWillEnterForegroundNotification
                                             object: nil];

  return self;
}


-(void) appHasGoneInBackground
{
  if ( !_audiobusController.connected && !_audiobusController.memberOfActiveAudiobusSession )
  {
    AudioOutputUnitStop (self.audioUnit);
//    [[AVAudioSession sharedInstance] setActive:NO error:NULL];
  }
}

-(void) appHasGoneForeground
{
  AudioOutputUnitStart (self.audioUnit);
 // [[AVAudioSession sharedInstance] setActive:YES error:NULL];
}


I don't call "setActive" here, but I'm not sure whether I should also deactivate/reactivate the session when stopping/starting or not...
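If you do decide to toggle the session as well, here's a minimal (untested) sketch, assuming the same _audiobusController and audioUnit properties as the snippet above; it uses the "notify others on deactivation" option so other audio apps can resume when you give up the session:

-(void) appHasGoneInBackground
{
  if ( !_audiobusController.connected && !_audiobusController.memberOfActiveAudiobusSession )
  {
    AudioOutputUnitStop (self.audioUnit);

    // Let other audio apps resume when we give up the session.
    NSError* sessionError = nil;
    [[AVAudioSession sharedInstance] setActive: NO
                                  withOptions: AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                                        error: &sessionError];
  }
}

-(void) appHasGoneForeground
{
  // Reactivate the session before starting the unit again.
  NSError* sessionError = nil;
  [[AVAudioSession sharedInstance] setActive: YES error: &sessionError];

  AudioOutputUnitStart (self.audioUnit);
}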

 

I've got this working in my synth app, and made some modifications I thought you all might find useful:

1. Currently, any time you change the buffer size or sample rate, or restart the audio device, JUCE deletes the AudioIODevice and re-creates it. This was causing connectivity issues with Audiobus/IAA, including loss of audio, duplicate Audiobus interfaces on the side of the screen, crashes, etc. So I made the AudioUnit and the Audiobus Wrapper static, and instead of deleting and recreating them, the device simply stops and starts them.

2. Added background mode management, so if your app is in the background, it's not connected to Audiobus or IAA, and there aren't any available MIDI sources, the AudioUnit and AVAudioSession shut down. Good for the user not to have apps running unnecessarily, and I've heard Apple looks for this during App Store review.

3. Added methods for retrieving IAA host info like getting the host icon, play time, playhead position, triggering record, play, etc.

juce_ios_Audio.mm

/*
  ==============================================================================
   This file is part of the JUCE library.
   Copyright (c) 2013 - Raw Material Software Ltd.
   Permission is granted to use this software under the terms of either:
   a) the GPL v2 (or any later version)
   b) the Affero GPL v3
   Details of these licenses can be found at: www.gnu.org/licenses
   JUCE is distributed in the hope that it will be useful, but WITHOUT ANY
   WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
   A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
   ------------------------------------------------------------------------------
   To release a closed-source product which uses JUCE, commercial licenses are
   available: visit www.juce.com for more information.
  ==============================================================================
*/

#include "juce_ios_Audio.h"

#import <AVFoundation/AVAudioSession.h>
#import <AudioToolbox/AudioToolbox.h>
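
// Note: this .mm appears to be #included from inside the juce namespace, which is why the
// namespace is closed here before the Objective-C classes and re-opened further down.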

} // juce namespace

// AB
#import "Audiobus.h"

@interface Wrapper : NSObject
{
    juce::iOSAudioIODevice* owner;
    
// AB
    ABSenderPort *audiobusOutput;
}

// AB
@property (readonly) ABSenderPort* audiobusOutput;
@property (strong, nonatomic) ABAudiobusController* audiobusController;
@property bool isRegistered;
@property bool isActivated;

- (void) stop;
- (void) start;
- (void)registerForRouteChangeNotification;
-(void)applicationDidEnterBackground:(NSNotification *)notification;
-(void)applicationWillEnterForeground:(NSNotification *)notification;
-(void)observeValueForKeyPath:(NSString *)keyPath
ofObject:(id)object
change:(NSDictionary *)change
context:(void *)context;
- (void)routeChange:(NSNotification*)notification;
- (void)activateAudiobus:(AudioUnit)outputUnit;

@end

static void * kMemberOfActiveAudiobusSessionChanged = &kMemberOfActiveAudiobusSessionChanged;
static void * kAudiobusConnectedChanged = &kAudiobusConnectedChanged;
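// NEW: one shared wrapper instance, kept static for the same reason as the AudioUnit below:
// it shouldn't be torn down every time JUCE deletes and re-creates the audio device.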
static Wrapper *wrapper = nil;

@implementation Wrapper

// AB
@synthesize audiobusOutput;
@synthesize audiobusController;
@synthesize isRegistered;
@synthesize isActivated;

+ (id)sharedInstance
{
    if (wrapper == nil)
        wrapper = [[Wrapper alloc] init];
    
    return wrapper;
}

- (void)assignOwner: (juce::iOSAudioIODevice*) owner_
{
    owner = owner_;
}

- (void)dealloc {
    
// AB
    [audiobusController removeSenderPort:audiobusOutput];
    [[NSNotificationCenter defaultCenter] removeObserver:self];
    [audiobusController removeObserver:self forKeyPath:@"connected"];
    [audiobusController removeObserver:self forKeyPath:@"memberOfActiveAudiobusSession"];
    [super dealloc];
}

- (void) stop
{
    owner->stopAudioUnit();
}

- (void) start
{
    owner->startAudioUnit();
}

- (void) registerForRouteChangeNotification {
    
    if (!isRegistered)
    {
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(routeChange:)
                                                     name:AVAudioSessionRouteChangeNotification
                                                   object:nil];
        
        isRegistered = true;
    }
}

- (void)routeChange:(NSNotification*)notification {
    
    if (owner != 0)
        owner->routingChanged (notification);
}

-(void)applicationDidEnterBackground:(NSNotification *)notification
{
    [self updateAudioEngineState];
}

-(void)applicationWillEnterForeground:(NSNotification *)notification
{
    UInt32 running = 0;   // kAudioOutputUnitProperty_IsRunning is a UInt32
    UInt32 size = sizeof(running);
    AudioUnitGetProperty(owner->getAudioUnit(), kAudioOutputUnitProperty_IsRunning, kAudioUnitScope_Global, 0, &running, &size);
    
    if (!running || [ABAudioUnitFader transitionsRunning] )
    {
        // fade in is causing problems when launching the app from an IAA host. So we'll just abruptly start it
        //[ABAudioUnitFader fadeInAudioUnit:owner->getAudioUnit() beginBlock:^{ [self start]; } completionBlock:nil];
        [self start];
    }
}

-(void)observeValueForKeyPath:(NSString *)keyPath
ofObject:(id)object
change:(NSDictionary *)change
context:(void *)context {
    
    if ( context == kAudiobusConnectedChanged || context == kMemberOfActiveAudiobusSessionChanged )
    {
        /*
        bool memberChanged = context == kAudiobusConnectedChanged;
        bool connectChanged = context == kMemberOfActiveAudiobusSessionChanged;
        bool appInBG = [UIApplication sharedApplication].applicationState == UIApplicationStateBackground;
        bool kConnectChanged;
        
        if (kAudiobusConnectedChanged)
            kConnectChanged = true;
        else
            kConnectChanged = false;
         */
        
        if ( [UIApplication sharedApplication].applicationState == UIApplicationStateBackground
            && !audiobusController.connected
            && !audiobusController.memberOfActiveAudiobusSession )
        {
            // Audiobus session is finished. Time to sleep.
            [self stop];
        }
        
        if (context == kAudiobusConnectedChanged && audiobusController.connected)
        {
            // Make sure we're running, if we're connected
            [self start];
        }
    }
    else
    {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}

- (void)activateAudiobus:(AudioUnit)outputUnit;
{
// AB
    if (!isActivated)
    {
        audiobusController = [[ABAudiobusController alloc] initWithApiKey:@"YOUR_API_KEY"]; //use your API key here
        
        audiobusController.connectionPanelPosition = ABConnectionPanelPositionRight; //choose where you want the Audiobus navigation widget to show up
        
        
        
        // port information needs to match the information entered into the .plist (see audiobus integration guide)
        audiobusOutput = [[ABSenderPort alloc] initWithName:@"Company: App"
                                                            title:NSLocalizedString(@"Company: App", @"")
                                        audioComponentDescription:(AudioComponentDescription)
                                {
                                    .componentType = kAudioUnitType_RemoteInstrument,
                                    .componentSubType = 'SUBT', // Note single quotes; placeholder: use your own four-char code (must match your .plist)
                                    .componentManufacturer = 'MTFR'
                                } //
                                                        audioUnit:outputUnit];
        
        [audiobusController addSenderPort:audiobusOutput];
        
        //would create filter or input ports here if I needed them
        
        // Watch the audiobusAppRunning and connected properties
        [audiobusController addObserver:self
                             forKeyPath:@"connected"
                                options:0
                                context:kAudiobusConnectedChanged];
        
        [audiobusController addObserver:self
                             forKeyPath:@"memberOfActiveAudiobusSession"
                                options:0
                                context:kMemberOfActiveAudiobusSessionChanged];
        
        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(applicationDidEnterBackground:) name:UIApplicationDidEnterBackgroundNotification object:nil];
        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(applicationWillEnterForeground:) name:UIApplicationWillEnterForegroundNotification object:nil];
        
        isActivated = true;
    }
}

-(bool) areMIDISourcesOpen
{
    if (MIDIGetNumberOfSources() > 0)
        return true;
    else
        return false;
}

-(void) updateAudioEngineState
{
    if (![self areMIDISourcesOpen]
        && !audiobusController.connected
        && !audiobusController.memberOfActiveAudiobusSession
        && [UIApplication sharedApplication].applicationState == UIApplicationStateBackground)
    {
        [ABAudioUnitFader fadeOutAudioUnit:owner->getAudioUnit() completionBlock:^{ [self stop]; }];
    }
    else
    {
        [self start];
    }
}

-(bool) isHostConnectedViaAudiobus
{
    return audiobusController.audiobusConnected;
}

@end


namespace juce 
{
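    // NEW: the AudioUnit is a static class member now (see note 1 above), so it survives
    // JUCE deleting and re-creating the iOSAudioIODevice when settings change.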
    AudioUnit iOSAudioIODevice::audioUnit = 0;
        
    iOSAudioIODevice::iOSAudioIODevice (const String& deviceName)
    : AudioIODevice (deviceName, "Audio"),
    actualBufferSize (0),
    isRunning (false),
    callback (nullptr),
    floatData (1, 2)
    {
        // NEW
        if (audioUnit != 0)
            AudioOutputUnitStop(audioUnit);

        [[Wrapper sharedInstance] assignOwner:this];
        
        getSessionHolder().activeDevices.add (this);
        numInputChannels = 2;
        numOutputChannels = 2;
        preferredBufferSize = 0;
        updateDeviceInfo();
    }

    iOSAudioIODevice::~iOSAudioIODevice()
    {
        setAudioUnitCallback(false);
        getSessionHolder().activeDevices.removeFirstMatchingValue (this);
        close();
    }

    StringArray iOSAudioIODevice::getOutputChannelNames()
    {
        StringArray s;
        s.add ("Left");
        s.add ("Right");
        return s;
    }

    StringArray iOSAudioIODevice::getInputChannelNames()
    {
        StringArray s;
        if (audioInputIsAvailable)
        {
            s.add ("Left");
            s.add ("Right");
        }
        return s;
    }

    Array<double> iOSAudioIODevice::getAvailableSampleRates()
    {
        // can't find a good way to actually ask the device for which of these it supports..
        static const double rates[] = { 8000.0, 16000.0, 22050.0, 32000.0, 44100.0, 48000.0 };
        return Array<double> (rates, numElementsInArray (rates));
    }

    Array<int> iOSAudioIODevice::getAvailableBufferSizes()
    {
        Array<int> r;
        for (int i = 6; i < 12; ++i)
            r.add (1 << i);
        return r;
    }

    int iOSAudioIODevice::getDefaultBufferSize()
    {
        return 1024;
    }

    String iOSAudioIODevice::open (const BigInteger& inputChannelsWanted,
                 const BigInteger& outputChannelsWanted,
                 double targetSampleRate, int bufferSize)
    {
        close();
        
        lastError.clear();
        preferredBufferSize = (bufferSize <= 0) ? getDefaultBufferSize() : bufferSize;
        //  xxx set up channel mapping
        activeOutputChans = outputChannelsWanted;
        activeOutputChans.setRange (2, activeOutputChans.getHighestBit(), false);
        numOutputChannels = activeOutputChans.countNumberOfSetBits();
        monoOutputChannelNumber = activeOutputChans.findNextSetBit (0);
        
        activeInputChans = inputChannelsWanted;
        activeInputChans.setRange (2, activeInputChans.getHighestBit(), false);
        numInputChannels = activeInputChans.countNumberOfSetBits();
        monoInputChannelNumber = activeInputChans.findNextSetBit (0);

        // OLD
        // AudioSessionSetActive (true);
        
        if (numInputChannels > 0 && audioInputIsAvailable)
        {
            // NEW
            [[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayAndRecord
                            withOptions: AVAudioSessionCategoryOptionMixWithOthers | AVAudioSessionCategoryOptionDefaultToSpeaker |AVAudioSessionCategoryOptionAllowBluetooth
                                  error:  &err];
            // OLD
            //setSessionUInt32Property (kAudioSessionProperty_AudioCategory, kAudioSessionCategory_PlayAndRecord);
            //setSessionUInt32Property (kAudioSessionProperty_OverrideCategoryEnableBluetoothInput, 1);
        }
        else
        {
            // NEW
            [[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayback
                            withOptions: AVAudioSessionCategoryOptionMixWithOthers
                                  error:  &err];
            // OLD
            //setSessionUInt32Property (kAudioSessionProperty_AudioCategory, kAudioSessionCategory_MediaPlayback);
        }

        // NEW
        [[AVAudioSession sharedInstance] setActive: YES error:  &err];
        
        if (audioUnit != 0)
            AudioOutputUnitStart(audioUnit);
        
        [[Wrapper sharedInstance] registerForRouteChangeNotification];
        [[AVAudioSession sharedInstance] setPreferredSampleRate:targetSampleRate error:&err];
        
        // OLD
        //AudioSessionAddPropertyListener (kAudioSessionProperty_AudioRouteChange, routingChangedStatic, this);
        //fixAudioRouteIfSetToReceiver();
        //setSessionFloat64Property (kAudioSessionProperty_PreferredHardwareSampleRate, targetSampleRate);
        
        updateDeviceInfo();
        // NEW
        [[AVAudioSession sharedInstance] setPreferredIOBufferDuration: preferredBufferSize/sampleRate error: &err];
        
        // OLD
        //setSessionFloat32Property (kAudioSessionProperty_PreferredHardwareIOBufferDuration, preferredBufferSize / sampleRate);
        updateCurrentBufferSize();
        prepareFloatBuffers (actualBufferSize);
        isRunning = true;
        routingChanged (nullptr);  // creates and starts the AU
        lastError = audioUnit != 0 ? "" : "Couldn't open the device";
        return lastError;
    }

    void iOSAudioIODevice::close()
    {
        if (isRunning)
        {
            isRunning = false;
            // NEW
            [[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayback
                            withOptions: AVAudioSessionCategoryOptionMixWithOthers
                                  error:  &err];
            // OLD
            /*
            setSessionUInt32Property (kAudioSessionProperty_AudioCategory, kAudioSessionCategory_MediaPlayback);
            AudioSessionRemovePropertyListenerWithUserData (kAudioSessionProperty_AudioRouteChange, routingChangedStatic, this);
            AudioSessionSetActive (false);
            if (audioUnit != 0)
            {
                AudioComponentInstanceDispose (audioUnit);
                audioUnit = 0;
            }
             */
        }
        
        // NEW
        if (audioUnit != 0)
            AudioOutputUnitStop(audioUnit);
        
        [[AVAudioSession sharedInstance] setActive: NO error:  &err];
    }

    bool iOSAudioIODevice::isOpen()                       { return isRunning; }
    int iOSAudioIODevice::getCurrentBufferSizeSamples()   { return actualBufferSize; }
    double iOSAudioIODevice::getCurrentSampleRate()       { return sampleRate; }
    int iOSAudioIODevice::getCurrentBitDepth()            { return 16; }
    BigInteger iOSAudioIODevice::getActiveOutputChannels() const    { return activeOutputChans; }
    BigInteger iOSAudioIODevice::getActiveInputChannels() const     { return activeInputChans; }
    
    int iOSAudioIODevice::getOutputLatencyInSamples()
    {
        // NEW
        double latency = [AVAudioSession sharedInstance].outputLatency;
        return roundToInt (latency * getCurrentSampleRate());

        // OLD
        //return getLatency (kAudioSessionProperty_CurrentHardwareOutputLatency);
    }
        
    int iOSAudioIODevice::getInputLatencyInSamples()
    {
        // NEW
        double latency = [AVAudioSession sharedInstance].inputLatency;
        return roundToInt (latency * getCurrentSampleRate());

        // OLD
        //return getLatency (kAudioSessionProperty_CurrentHardwareInputLatency);
    }
        
    // OLD
    /*
    int iOSAudioIODevice::getLatency (AudioSessionPropertyID propID)
    {
        Float32 latency = 0;
        getSessionProperty (propID, latency);
        return roundToInt (latency * getCurrentSampleRate());
    }
    */
        
    void iOSAudioIODevice::start (AudioIODeviceCallback* newCallback)
    {
        if (isRunning && callback != newCallback)
        {
            if (newCallback != nullptr)
                newCallback->audioDeviceAboutToStart (this);

            const ScopedLock sl (callbackLock);
            callback = newCallback;
        }
    }

    void iOSAudioIODevice::stop()
    {
        if (isRunning)
        {
            AudioIODeviceCallback* lastCallback;
            {
                const ScopedLock sl (callbackLock);
                lastCallback = callback;
                callback = nullptr;
            }

            if (lastCallback != nullptr)
                lastCallback->audioDeviceStopped();
        }
    }

    bool iOSAudioIODevice::isPlaying()            { return isRunning && callback != nullptr; }
    String iOSAudioIODevice::getLastError()       { return lastError; }

    bool iOSAudioIODevice::setAudioPreprocessingEnabled (bool enable)
    {
        // NEW
        return [[AVAudioSession sharedInstance] setMode: enable ? AVAudioSessionModeDefault : AVAudioSessionModeMeasurement
                                 error:  &err];
        
        // OLD
        //return setSessionUInt32Property (kAudioSessionProperty_Mode, enable ? kAudioSessionMode_Default : kAudioSessionMode_Measurement);
    }
        
    // NEW
    void iOSAudioIODevice::routingChanged (const NSNotification* notification)
    {
        if (! isRunning)
            return;
        
        if (notification != nullptr)
        {
            //        CFDictionaryRef routeChangeDictionary = (CFDictionaryRef) propertyValue;
            //        CFNumberRef routeChangeReasonRef = (CFNumberRef) CFDictionaryGetValue (routeChangeDictionary,
            //                                                                                CFSTR (kAudioSession_AudioRouteChangeKey_Reason));
            //
            //        SInt32 routeChangeReason;
            //        CFNumberGetValue (routeChangeReasonRef, kCFNumberSInt32Type, &routeChangeReason);
            //
            //        if (routeChangeReason == kAudioSessionRouteChangeReason_OldDeviceUnavailable)
            //        {
            //            const ScopedLock sl (callbackLock);
            //
            //            if (callback != nullptr)
            //                callback->audioDeviceError ("Old device unavailable");
            //        }
            
            //again, not doing anything here, but if you wanted to:
            
            NSDictionary *routeChangeDict = notification.userInfo;
            
            NSInteger routeChangeReason = [[routeChangeDict valueForKey:AVAudioSessionRouteChangeReasonKey] integerValue];
            
            switch (routeChangeReason) {
                case AVAudioSessionRouteChangeReasonUnknown:
                    NSLog(@"routeChangeReason : AVAudioSessionRouteChangeReasonUnknown");
                    break;
                    
                case AVAudioSessionRouteChangeReasonNewDeviceAvailable:
                    // a headset was added or removed
                    NSLog(@"routeChangeReason : AVAudioSessionRouteChangeReasonNewDeviceAvailable");
                    break;
                    
                case AVAudioSessionRouteChangeReasonOldDeviceUnavailable:
                    // a headset was added or removed
                    NSLog(@"routeChangeReason : AVAudioSessionRouteChangeReasonOldDeviceUnavailable");
                    break;
                    
                case AVAudioSessionRouteChangeReasonCategoryChange:
                    // called at start - also when other audio wants to play
                    NSLog(@"routeChangeReason : AVAudioSessionRouteChangeReasonCategoryChange");//AVAudioSessionRouteChangeReasonCategoryChange
                    break;
                    
                case AVAudioSessionRouteChangeReasonOverride:
                    NSLog(@"routeChangeReason : AVAudioSessionRouteChangeReasonOverride");
                    break;
                    
                case AVAudioSessionRouteChangeReasonWakeFromSleep:
                    NSLog(@"routeChangeReason : AVAudioSessionRouteChangeReasonWakeFromSleep");
                    break;
                    
                case AVAudioSessionRouteChangeReasonNoSuitableRouteForCategory:
                    NSLog(@"routeChangeReason : AVAudioSessionRouteChangeReasonNoSuitableRouteForCategory");
                    break;
                    
                default:
                    break;
            }
            
            if (routeChangeReason == AVAudioSessionRouteChangeReasonOldDeviceUnavailable)
            {
                const ScopedLock sl (callbackLock);
                
                if (callback != nullptr)
                    callback->audioDeviceError ("Old device unavailable");
            }
        }
        
        updateDeviceInfo();
        
        if (audioUnit == 0)
            createAudioUnit();
        else
            setAudioUnitCallback(true);
        
        // OLD
        //AudioSessionSetActive (true);
        
        // NEW
        [[AVAudioSession sharedInstance] setActive: YES error:&err];
        
        if (audioUnit != 0)
        {
            AudioOutputUnitStart(audioUnit);
            
            UInt32 formatSize = sizeof (format);
            AudioUnitGetProperty (audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &format, &formatSize);
            
            updateCurrentBufferSize();
            AudioOutputUnitStart (audioUnit);
        }
    }
        
    void iOSAudioIODevice::closeAudioUnit()
    {
        stopAudioUnit();
     
        if (audioUnit != 0)
        {
            AudioComponentInstanceDispose (audioUnit);
            audioUnit = 0;
        }
        
        [[Wrapper sharedInstance] release];
    }

    void iOSAudioIODevice::stopAudioUnit()
    {
        if (audioUnit != 0)
            AudioOutputUnitStop(audioUnit);
        
        [[AVAudioSession sharedInstance] setActive: NO error:  &err];
    }
        
    void iOSAudioIODevice::startAudioUnit()
    {
        if (audioUnit != 0)
            AudioOutputUnitStart(audioUnit);
        
        [[AVAudioSession sharedInstance] setActive: YES error:  &err];
    }
        
    void iOSAudioIODevice::toggleHostPlayback()
    {
        if (audioUnit != 0)
        {
            UInt32 controlEvent = kAudioUnitRemoteControlEvent_TogglePlayPause;
            UInt32 dataSize = sizeof(controlEvent);
        
            AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_RemoteControlToHost, kAudioUnitScope_Global, 0, &controlEvent, dataSize);
        }
    }
        
    void iOSAudioIODevice::toggleHostRecord()
    {
        if (audioUnit != 0)
        {
            UInt32 controlEvent = kAudioUnitRemoteControlEvent_ToggleRecord;
            UInt32 dataSize = sizeof(controlEvent);
            
            AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_RemoteControlToHost, kAudioUnitScope_Global, 0, &controlEvent, dataSize);
        }
    }
        
    void iOSAudioIODevice::toggleHostRewind()
    {
        if (audioUnit != 0)
        {
            UInt32 controlEvent = kAudioUnitRemoteControlEvent_Rewind;
            UInt32 dataSize = sizeof(controlEvent);
            
            AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_RemoteControlToHost, kAudioUnitScope_Global, 0, &controlEvent, dataSize);
        }
    }

    void* iOSAudioIODevice::getHostIcon()
    {
        if (audioUnit != 0)
            return AudioOutputUnitGetHostIcon(audioUnit, 114);
        
        return nullptr;
    }
        
    void iOSAudioIODevice::goToHost()
    {
        if (audioUnit != 0)
        {
            CFURLRef instrumentUrl;
            UInt32 dataSize = sizeof(instrumentUrl);
            OSStatus result = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_PeerURL, kAudioUnitScope_Global, 0, &instrumentUrl, &dataSize);
            
            if (result == noErr)
                [[UIApplication sharedApplication] openURL:(NSURL*)instrumentUrl];
        }
    }
        
    String iOSAudioIODevice::getHostPlayTime()
    {
        float hostPlayTime = 0.0;
        
        if (isHostConnectedViaIAA() && [UIApplication sharedApplication].applicationState !=  UIApplicationStateBackground)
        {
            HostCallbackInfo hostCallbackInfo;
            UInt32 dataSize = sizeof(HostCallbackInfo);
            OSStatus result = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_HostCallbacks, kAudioUnitScope_Global, 0, &hostCallbackInfo, &dataSize);
            
            if (result == noErr)
            {
                Boolean isPlaying  = false;
                Boolean isRecording = false;
                Float64 outCurrentSampleInTimeLine = 0;
                void* hostUserData = hostCallbackInfo.hostUserData;
                
                OSStatus result =  hostCallbackInfo.transportStateProc2(hostUserData,
                                                                        &isPlaying,
                                                                        &isRecording, NULL,
                                                                        &outCurrentSampleInTimeLine,
                                                                        NULL, NULL, NULL);
                
                
                
                if (result == noErr)
                    hostPlayTime = outCurrentSampleInTimeLine;
                else
                    NSLog(@"Error occured fetching callBackInfo->transportStateProc2 : %d", (int)result);
            }
            
        }
        
        if (hostPlayTime < 0.0)
            hostPlayTime = 0.0;
        
        int totalMilliseconds = hostPlayTime / [[AVAudioSession sharedInstance] sampleRate] * 1000.0;
        int minutes = totalMilliseconds / 60000;
        int secondsLeft = totalMilliseconds % 60000;
        int seconds = secondsLeft / 1000;
        int milliseconds = secondsLeft % 1000;
        
        String minutesString = String(minutes);
        String secondsString = String(seconds);
        String millisecondsString = String(milliseconds);
        
        if (minutes < 10)
            minutesString = "0" + minutesString;
        
        if (seconds < 10)
            secondsString = "0" + secondsString;
        
        if (milliseconds < 10)
            millisecondsString = "00" + millisecondsString;
        else if (milliseconds < 100)
            millisecondsString = "0" + millisecondsString;
        
        return minutesString + ":" + secondsString + ":" + millisecondsString;
    }
    
    float iOSAudioIODevice::getHostTempo()
    {
        float tempo = 120.0;
        
        if (isHostConnectedViaIAA())
        {
            HostCallbackInfo hostCallbackInfo;
            UInt32 dataSize = sizeof(HostCallbackInfo);
            OSStatus result = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_HostCallbacks, kAudioUnitScope_Global, 0, &hostCallbackInfo, &dataSize);
            
            if (result == noErr)
            {
                Float64 outCurrentBeat = 0;
                Float64 outCurrentTempo = 0;
                void* hostUserData = hostCallbackInfo.hostUserData;
                
                OSStatus result = hostCallbackInfo.beatAndTempoProc(hostUserData, &outCurrentBeat, &outCurrentTempo);
                
                if (result == noErr)
                    tempo = outCurrentTempo;
                else
                    NSLog(@"Error occured fetching callBackInfo->beatAndTempoProc : %d", (int)result);
            }
        }
        
        return tempo;
    }
    
    void iOSAudioIODevice::getHostPlayHeadPositionInfo(double* ppqPosition, double* ppqPositionOfLastBarStart)
    {
        if (isHostConnectedViaIAA())
        {
            HostCallbackInfo hostCallbackInfo;
            UInt32 dataSize = sizeof(HostCallbackInfo);
            OSStatus result = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_HostCallbacks, kAudioUnitScope_Global, 0, &hostCallbackInfo, &dataSize);
            
            if (result == noErr)
            {
                UInt32 outDeltaSampleOffsetToNextBeat = 0;
                Float32 outTimeSig_Numerator = 4.0;
                UInt32 outTimeSig_Denominator = 4;
                Float64 outCurrentMeasureDownBeat = 0.0;
                void* hostUserData = hostCallbackInfo.hostUserData;
                
                OSStatus result =  hostCallbackInfo.musicalTimeLocationProc(hostUserData,
                                                                            &outDeltaSampleOffsetToNextBeat,
                                                                            &outTimeSig_Numerator,
                                                                            &outTimeSig_Denominator,
                                                                            &outCurrentMeasureDownBeat);
                
                if (result == noErr)
                {
                    *ppqPositionOfLastBarStart = outCurrentMeasureDownBeat;
                    
                    Float64 outCurrentBeat = 0;
                    Float64 outCurrentTempo = 0;
                    void* hostUserData = hostCallbackInfo.hostUserData;
                    
                    OSStatus result = hostCallbackInfo.beatAndTempoProc(hostUserData, &outCurrentBeat, &outCurrentTempo);
                    
                    if (result == noErr)
                    {
                        *ppqPosition = outCurrentBeat;
                    }
                    else
                        NSLog(@"Error occured fetching callBackInfo->beatAndTempoProc : %d", (int)result);
                }
                else
                    NSLog(@"Error occured fetching callBackInfo->musicalTimeLocationProc : %d", (int)result);
            }
            
        }
    }
    
    bool iOSAudioIODevice::isHostConnectedViaIAA()
    {
        if (audioUnit != 0)
        {
            if ([[Wrapper sharedInstance] isHostConnectedViaAudiobus])
                return false;
            
            UInt32 connect;
            UInt32 dataSize = sizeof(UInt32);
            AudioUnitGetProperty(audioUnit, kAudioUnitProperty_IsInterAppConnected, kAudioUnitScope_Global, 0, &connect, &dataSize);
            
            return connect;
        }
        
        return false;
    }
    
    void iOSAudioIODevice::updateAudioEngineState()
    {
        [[Wrapper sharedInstance] updateAudioEngineState];
    }
        
    void iOSAudioIODevice::prepareFloatBuffers (int bufferSize)
    {
        if (numInputChannels + numOutputChannels > 0)
        {
            floatData.setSize (numInputChannels + numOutputChannels, bufferSize);
            zeromem (inputChannels, sizeof (inputChannels));
            zeromem (outputChannels, sizeof (outputChannels));
            for (int i = 0; i < numInputChannels; ++i)
                inputChannels[i] = floatData.getWritePointer (i);
            for (int i = 0; i < numOutputChannels; ++i)
                outputChannels[i] = floatData.getWritePointer (i + numInputChannels);
        }
    }

    //==================================================================================================
    OSStatus iOSAudioIODevice::process (AudioUnitRenderActionFlags* flags, const AudioTimeStamp* time,
                      const UInt32 numFrames, AudioBufferList* data)
    {
        OSStatus err = noErr;

        if (audioInputIsAvailable && numInputChannels > 0)
            err = AudioUnitRender (audioUnit, flags, time, 1, numFrames, data);

        const ScopedLock sl (callbackLock);

        if (callback != nullptr)
        {
            if ((int) numFrames > floatData.getNumSamples())
                prepareFloatBuffers ((int) numFrames);

            if (audioInputIsAvailable && numInputChannels > 0)
            {
                short* shortData = (short*) data->mBuffers[0].mData;

                if (numInputChannels >= 2)
                {
                    for (UInt32 i = 0; i < numFrames; ++i)
                    {
                        inputChannels[0][i] = *shortData++ * (1.0f / 32768.0f);
                        inputChannels[1][i] = *shortData++ * (1.0f / 32768.0f);
                    }
                }
                else
                {
                    if (monoInputChannelNumber > 0)
                        ++shortData;
                    for (UInt32 i = 0; i < numFrames; ++i)
                    {
                        inputChannels[0][i] = *shortData++ * (1.0f / 32768.0f);
                        ++shortData;
                    }
                }
            }
            else
            {
                for (int i = numInputChannels; --i >= 0;)
                    zeromem (inputChannels[i], sizeof (float) * numFrames);
            }

            callback->audioDeviceIOCallback ((const float**) inputChannels, numInputChannels,
                                             outputChannels, numOutputChannels, (int) numFrames);

            short* shortData = (short*) data->mBuffers[0].mData;
            int n = 0;

            if (numOutputChannels >= 2)
            {
                for (UInt32 i = 0; i < numFrames; ++i)
                {
                    shortData [n++] = (short) (outputChannels[0][i] * 32767.0f);
                    shortData [n++] = (short) (outputChannels[1][i] * 32767.0f);
                }
            }
            else if (numOutputChannels == 1)
            {
                for (UInt32 i = 0; i < numFrames; ++i)
                {
                    const short s = (short) (outputChannels[monoOutputChannelNumber][i] * 32767.0f);
                    shortData [n++] = s;
                    shortData [n++] = s;
                }
            }
            else
            {
                zeromem (data->mBuffers[0].mData, 2 * sizeof (short) * numFrames);
            }
        }
        else
        {
            zeromem (data->mBuffers[0].mData, 2 * sizeof (short) * numFrames);
        }
            return err;
    }

    void iOSAudioIODevice::updateDeviceInfo()
    {
        // NEW
        sampleRate = [AVAudioSession sharedInstance].sampleRate;
        audioInputIsAvailable = [AVAudioSession sharedInstance].inputAvailable;

        // OLD
        //getSessionProperty (kAudioSessionProperty_CurrentHardwareSampleRate, sampleRate);
        //getSessionProperty (kAudioSessionProperty_AudioInputAvailable, audioInputIsAvailable);
    }
    void iOSAudioIODevice::updateCurrentBufferSize()
    {
        Float32 bufferDuration = sampleRate > 0 ? (Float32) (preferredBufferSize / sampleRate) : 0.0f;
        
        // NEW
        bufferDuration = [AVAudioSession sharedInstance].IOBufferDuration;

        // OLD
        //getSessionProperty (kAudioSessionProperty_CurrentHardwareIOBufferDuration, bufferDuration);
        actualBufferSize = (int) (sampleRate * bufferDuration + 0.5);
    }

    // OLD
    /*
    void iOSAudioIODevice::routingChanged (const void* propertyValue)
    {
        if (! isRunning)
            return;

        if (propertyValue != nullptr)
        {
            CFDictionaryRef routeChangeDictionary = (CFDictionaryRef) propertyValue;
            CFNumberRef routeChangeReasonRef = (CFNumberRef) CFDictionaryGetValue (routeChangeDictionary,
                                                                                   CFSTR (kAudioSession_AudioRouteChangeKey_Reason));
            SInt32 routeChangeReason;
            CFNumberGetValue (routeChangeReasonRef, kCFNumberSInt32Type, &routeChangeReason);

            if (routeChangeReason == kAudioSessionRouteChangeReason_OldDeviceUnavailable)
            {
                const ScopedLock sl (callbackLock);
                if (callback != nullptr)
                    callback->audioDeviceError ("Old device unavailable");
            }
        }

        updateDeviceInfo();
        createAudioUnit();
        AudioSessionSetActive (true);

        if (audioUnit != 0)
        {
            UInt32 formatSize = sizeof (format);
            AudioUnitGetProperty (audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &format, &formatSize);
            updateCurrentBufferSize();
            AudioOutputUnitStart (audioUnit);
        }
    }
    */
        
    void iOSAudioIODevice::setAudioUnitCallback(bool isEnabled)
    {
        AURenderCallbackStruct inputProc;
        
        if (isEnabled)
        {
            inputProc.inputProc = processStatic;
            inputProc.inputProcRefCon = this;
        }
        else
        {
            inputProc.inputProc = nullptr;
            inputProc.inputProcRefCon = nullptr;
        }
        
        AudioUnitSetProperty (audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &inputProc, sizeof (inputProc));
    }

    void iOSAudioIODevice::interruptionListener (const UInt32 interruptionType)
    {
        if (interruptionType == kAudioSessionBeginInterruption)
        {
            isRunning = false;
            AudioOutputUnitStop (audioUnit);
            
            // NEW
            [[AVAudioSession sharedInstance] setActive: NO error:&err];
            
            // OLD
            //AudioSessionSetActive (false);
            
            const ScopedLock sl (callbackLock);

            if (callback != nullptr)
                callback->audioDeviceError ("iOS audio session interruption");
        }

        if (interruptionType == kAudioSessionEndInterruption)
        {
            isRunning = true;
            
            // NEW
            [[AVAudioSession sharedInstance] setActive: YES error:&err];

            // OLD
            //AudioSessionSetActive (true);
            
            AudioOutputUnitStart (audioUnit);
            const ScopedLock sl (callbackLock);

            if (callback != nullptr)
                callback->audioDeviceError ("iOS audio session resumed");
        }
    }

    //==================================================================================================
    void iOSAudioIODevice::resetFormat (const int numChannels) noexcept
    {
        zerostruct (format);
        format.mFormatID = kAudioFormatLinearPCM;
        format.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
        format.mBitsPerChannel = 8 * sizeof (short);
        format.mChannelsPerFrame = (UInt32) numChannels;
        format.mFramesPerPacket = 1;
        format.mBytesPerFrame = format.mBytesPerPacket = (UInt32) numChannels * sizeof (short);
    }

    bool iOSAudioIODevice::createAudioUnit()
    {
        //OLD
        /*
        if (audioUnit != 0)
        {
            AudioComponentInstanceDispose (audioUnit);
            audioUnit = 0;
        }
         */
        resetFormat (2);
        
        AudioComponentDescription desc;
        desc.componentType = kAudioUnitType_Output;
        desc.componentSubType = kAudioUnitSubType_RemoteIO;
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;
        desc.componentFlags = 0;
        desc.componentFlagsMask = 0;
        AudioComponent comp = AudioComponentFindNext (0, &desc);
        AudioComponentInstanceNew (comp, &audioUnit);
        
        if (audioUnit == 0)
            return false;

        if (numInputChannels > 0)
        {
            const UInt32 one = 1;
            AudioUnitSetProperty (audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof (one));
        }

        {
            AudioChannelLayout layout;
            layout.mChannelBitmap = 0;
            layout.mNumberChannelDescriptions = 0;
            layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
            AudioUnitSetProperty (audioUnit, kAudioUnitProperty_AudioChannelLayout, kAudioUnitScope_Input,  0, &layout, sizeof (layout));
            AudioUnitSetProperty (audioUnit, kAudioUnitProperty_AudioChannelLayout, kAudioUnitScope_Output, 0, &layout, sizeof (layout));
        }

        {
            // NEW
            setAudioUnitCallback(true);
            
            // OLD
            /*
            AURenderCallbackStruct inputProc;
            inputProc.inputProc = processStatic;
            inputProc.inputProcRefCon = this;
            AudioUnitSetProperty (audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &inputProc, sizeof (inputProc));
             */
        }

        AudioUnitSetProperty (audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input,  0, &format, sizeof (format));

        AudioUnitSetProperty (audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &format, sizeof (format));

        AudioUnitInitialize (audioUnit);
        
        // AB
        [[Wrapper sharedInstance] activateAudiobus: audioUnit];
        
        return true;
    }
    // OLD
    /*
    // If the routing is set to go through the receiver (i.e. the speaker, but quiet), this re-routes it
    // to make it loud. Needed because by default when using an input + output, the output is kept quiet.
    void iOSAudioIODevice::fixAudioRouteIfSetToReceiver()
    {
        CFStringRef audioRoute = 0;
        if (getSessionProperty (kAudioSessionProperty_AudioRoute, audioRoute) == noErr)
        {
            NSString* route = (NSString*) audioRoute;
            //DBG ("audio route: " + nsStringToJuce (route));
            if ([route hasPrefix: @"Receiver"])
                setSessionUInt32Property (kAudioSessionProperty_OverrideAudioRoute, kAudioSessionOverrideAudioRoute_Speaker);
            CFRelease (audioRoute);
        }
    }
    */
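    // Sketch (not part of this patch): an AVAudioSession-based replacement for the old
    // fixAudioRouteIfSetToReceiver() above, using currentRoute + overrideOutputAudioPort:
    /*
    static void fixAudioRouteIfSetToReceiver()
    {
        AVAudioSession* session = [AVAudioSession sharedInstance];

        for (AVAudioSessionPortDescription* output in session.currentRoute.outputs)
        {
            // If output is going to the quiet earpiece receiver, force it out of the speaker instead.
            if ([output.portType isEqualToString: AVAudioSessionPortBuiltInReceiver])
            {
                [session overrideOutputAudioPort: AVAudioSessionPortOverrideSpeaker error: nil];
                break;
            }
        }
    }
    */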

    //==============================================================================
    class iOSAudioIODeviceType  : public AudioIODeviceType
    {
    public:
        iOSAudioIODeviceType()  : AudioIODeviceType ("iOS Audio") {}
        
        void scanForDevices() {}
        StringArray getDeviceNames (bool /*wantInputNames*/) const       { return StringArray ("iOS Audio"); }
        int getDefaultDeviceIndex (bool /*forInput*/) const              { return 0; }
        int getIndexOfDevice (AudioIODevice* d, bool /*asInput*/) const  { return d != nullptr ? 0 : -1; }
        bool hasSeparateInputsAndOutputs() const                         { return false; }
        
        AudioIODevice* createDevice (const String& outputDeviceName, const String& inputDeviceName)
        {
            if (outputDeviceName.isNotEmpty() || inputDeviceName.isNotEmpty())
                return new iOSAudioIODevice (outputDeviceName.isNotEmpty() ? outputDeviceName
                                             : inputDeviceName);
            
            return nullptr;
        }
        
    private:
        JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (iOSAudioIODeviceType)
    };

    //==============================================================================
    AudioIODeviceType* AudioIODeviceType::createAudioIODeviceType_iOSAudio()
    {
        return new iOSAudioIODeviceType();
    }

juce_ios_Audio.h

//
//  juce_ios_Audio.h
//  Syntorial
//
//  Created by Joe Hanley on 2/4/15.
//
//
#ifndef Syntorial_juce_ios_Audio_h
#define Syntorial_juce_ios_Audio_h

class iOSAudioIODevice  : public AudioIODevice
{
public:
    iOSAudioIODevice (const String& deviceName);
    ~iOSAudioIODevice();
    
    StringArray getOutputChannelNames() override;
    StringArray getInputChannelNames() override;
    
    Array<double> getAvailableSampleRates() override;
    
    Array<int> getAvailableBufferSizes() override;
    
    int getDefaultBufferSize() override;
    
    String open (const BigInteger& inputChannelsWanted,
                 const BigInteger& outputChannelsWanted,
                 double targetSampleRate, int bufferSize) override;
    
    void close() override;
    
    bool isOpen() override;
    
    int getCurrentBufferSizeSamples() override;
    double getCurrentSampleRate() override;
    int getCurrentBitDepth() override;
    
    BigInteger getActiveOutputChannels() const override;
    BigInteger getActiveInputChannels() const override;
    
    int getOutputLatencyInSamples() override;
    int getInputLatencyInSamples() override;
    
    // OLD
    //int getLatency (AudioSessionPropertyID propID);
    
    void start (AudioIODeviceCallback* newCallback) override;
    
    void stop() override;
    
    bool isPlaying() override;
    String getLastError() override;
    
    bool setAudioPreprocessingEnabled (bool enable) override;
    
    // NEW
    void routingChanged (const NSNotification* notification);
    void closeAudioUnit();
    void stopAudioUnit();
    void startAudioUnit();
    AudioUnit getAudioUnit() {return audioUnit;}
    void toggleHostPlayback();
    void toggleHostRecord();
    void toggleHostRewind();
    void* getHostIcon();
    void goToHost();
    String getHostPlayTime();
    float getHostTempo();
    void getHostPlayHeadPositionInfo(double* ppqPosition, double* ppqPositionOfLastBarStart);
    bool isHostConnectedViaIAA();
    static void updateAudioEngineState();
    
    void setAudioUnitCallback(bool isEnabled);
    
private:
    //==================================================================================================
    
    // NEW
    NSError* err;
    
    CriticalSection callbackLock;
    Float64 sampleRate;
    int numInputChannels, numOutputChannels;
    int preferredBufferSize, actualBufferSize;
    bool isRunning;
    String lastError;
    
    AudioStreamBasicDescription format;
    static AudioUnit audioUnit;
    UInt32 audioInputIsAvailable;
    AudioIODeviceCallback* callback;
    BigInteger activeOutputChans, activeInputChans;
    
    AudioSampleBuffer floatData;
    float* inputChannels[3];
    float* outputChannels[3];
    bool monoInputChannelNumber, monoOutputChannelNumber;
    
    void prepareFloatBuffers (int bufferSize);
    
    //==================================================================================================
    OSStatus process (AudioUnitRenderActionFlags* flags, const AudioTimeStamp* time,
                      const UInt32 numFrames, AudioBufferList* data);
    
    void updateDeviceInfo();
    
    void updateCurrentBufferSize();
    
    // OLD
    //void routingChanged (const void* propertyValue);
    
    //==================================================================================================
    struct AudioSessionHolder
    {
        AudioSessionHolder()
        {
            // OLD
            //AudioSessionInitialize (0, 0, interruptionListenerCallback, this);
        }
        
        static void interruptionListenerCallback (void* client, UInt32 interruptionType)
        {
            const Array <iOSAudioIODevice*>& activeDevices = static_cast <AudioSessionHolder*> (client)->activeDevices;
            
            for (int i = activeDevices.size(); --i >= 0;)
                activeDevices.getUnchecked(i)->interruptionListener (interruptionType);
        }
        
        Array <iOSAudioIODevice*> activeDevices;
    };
    
    static AudioSessionHolder& getSessionHolder()
    {
        static AudioSessionHolder audioSessionHolder;
        return audioSessionHolder;
    }
    
    void interruptionListener (const UInt32 interruptionType);
    
    //==================================================================================================
    static OSStatus processStatic (void* client, AudioUnitRenderActionFlags* flags, const AudioTimeStamp* time,
                                   UInt32 /*busNumber*/, UInt32 numFrames, AudioBufferList* data)
    {
        return static_cast<iOSAudioIODevice*> (client)->process (flags, time, numFrames, data);
    }
    
    // OLD
    //static void routingChangedStatic (void* client, AudioSessionPropertyID, UInt32 /*inDataSize*/, const void* propertyValue)
    //{
    //   static_cast<iOSAudioIODevice*> (client)->routingChanged (propertyValue);
    //}
    
    //==================================================================================================
    void resetFormat (const int numChannels) noexcept;
    
    bool createAudioUnit();
    
    // OLD
    /*
    // If the routing is set to go through the receiver (i.e. the speaker, but quiet), this re-routes it
    // to make it loud. Needed because by default when using an input + output, the output is kept quiet.
    static void fixAudioRouteIfSetToReceiver();
    
    template <typename Type>
    static OSStatus getSessionProperty (AudioSessionPropertyID propID, Type& result) noexcept
    {
        UInt32 valueSize = sizeof (result);
        return AudioSessionGetProperty (propID, &valueSize, &result);
    }
    
    static bool setSessionUInt32Property  (AudioSessionPropertyID propID, UInt32  v) noexcept  { return AudioSessionSetProperty (propID, sizeof (v), &v) == kAudioSessionNoError; }
    static bool setSessionFloat32Property (AudioSessionPropertyID propID, Float32 v) noexcept  { return AudioSessionSetProperty (propID, sizeof (v), &v) == kAudioSessionNoError; }
    static bool setSessionFloat64Property (AudioSessionPropertyID propID, Float64 v) noexcept  { return AudioSessionSetProperty (propID, sizeof (v), &v) == kAudioSessionNoError; }
    */
    
    JUCE_DECLARE_NON_COPYABLE (iOSAudioIODevice)
};

#endif
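Since routingChanged() above now takes the NSNotification directly, the Objective-C session wrapper only needs to observe the route-change notification and hand it straight over. Here's a minimal sketch of that plumbing -- the SessionObserver class and its device pointer are stand-ins for whatever your real wrapper looks like, not part of the code above:

    @interface SessionObserver : NSObject
    {
        iOSAudioIODevice* device;   // raw pointer back to the C++ device
    }
    - (instancetype) initWithDevice: (iOSAudioIODevice*) deviceToUse;
    @end

    @implementation SessionObserver

    - (instancetype) initWithDevice: (iOSAudioIODevice*) deviceToUse
    {
        if ((self = [super init]) != nil)
        {
            device = deviceToUse;
            [[NSNotificationCenter defaultCenter] addObserver: self
                                                     selector: @selector (handleRouteChange:)
                                                         name: AVAudioSessionRouteChangeNotification
                                                       object: nil];
        }

        return self;
    }

    - (void) dealloc
    {
        [[NSNotificationCenter defaultCenter] removeObserver: self];
        [super dealloc];   // omit this line under ARC
    }

    - (void) handleRouteChange: (NSNotification*) notification
    {
        device->routingChanged (notification);   // forward straight into the C++ device
    }

    @end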

Add these virtual methods to juce_AudioIODevice.h

    virtual void closeAudioUnit() {}
    virtual void toggleHostPlayback() {}
    virtual void toggleHostRecord() {}
    virtual void toggleHostRewind() {}
    virtual void* getHostIcon() {return nullptr;}
    virtual void goToHost() {}
    virtual String getHostPlayTime() {return "00:00:000";}
    virtual float getHostTempo() {return 120.0f;}
    virtual void getHostPlayHeadPositionInfo(double* ppqPosition, double* ppqPositionOfLastBarStart) {}
    virtual bool isHostConnectedViaIAA() {return false;}
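
With those stubs in the base class, application code can drive the Inter-App Audio transport through whatever device the AudioDeviceManager currently owns, without any platform checks. A small sketch -- the handler function itself is hypothetical:

    // Hypothetical app-side handler for an IAA "play" button.
    void handleIAAPlayButton (AudioDeviceManager& deviceManager)
    {
        if (AudioIODevice* device = deviceManager.getCurrentAudioDevice())
        {
            if (device->isHostConnectedViaIAA())
                device->toggleHostPlayback();   // falls back to the empty base-class default elsewhere
        }
    }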

Tear down the AudioUnit and Wrapper in AudioDeviceManager.cpp's destructor (note the null check in case no device was ever opened):

AudioDeviceManager::~AudioDeviceManager()
{
    if (currentAudioDevice != nullptr)
        currentAudioDevice->closeAudioUnit();

    currentAudioDevice = nullptr;
    defaultMidiOutput = nullptr;
}

In juce_mac_CoreMIDI.cpp, change the globalSystemChangeCallback method so that it notifies the iOSAudioIODevice when a MIDI source is added or removed:

static void globalSystemChangeCallback (const MIDINotification* notification, void*)
{
#if JUCE_IOS
    if (notification->messageID == kMIDIMsgObjectAdded || notification->messageID == kMIDIMsgObjectRemoved)
        iOSAudioIODevice::updateAudioEngineState();
#endif

    // TODO.. Should pass-on this notification..
}

And lastly, a UIImageConverter class to take the UIImage of the host icon and convert it into a Juce image:

UIImageConverter.h

#ifndef __UIImageConverter__
#define __UIImageConverter__

class UIImageConverter
{
public:
    UIImageConverter() {}
    ~UIImageConverter() {}
    
    static Image convertToJuceImage(void* uiImage_);
    
    
private:
};
#endif /* defined(__UIImageConverter__) */

UIImageConverter.cpp

#import <UIKit/UIKit.h>
#include "UIImageConverter.h"

class CoreGraphicsImageSimple : public ImagePixelData
{
public:
    CoreGraphicsImageSimple (const Image::PixelFormat format, const int w, const int h, const bool clearImage)
    : ImagePixelData (format, w, h), cachedImageRef (0)
    {
        pixelStride = format == Image::RGB ? 3 : ((format == Image::ARGB) ? 4 : 1);
        lineStride = (pixelStride * jmax (1, width) + 3) & ~3;
        
        imageData.allocate ((size_t) (lineStride * jmax (1, height)), clearImage);
        
        CGColorSpaceRef colourSpace = (format == Image::SingleChannel) ? CGColorSpaceCreateDeviceGray()
        : CGColorSpaceCreateDeviceRGB();
        
        context = CGBitmapContextCreate (imageData, (size_t) width, (size_t) height, 8, (size_t) lineStride,
                                         colourSpace, getCGImageFlags (format));
        
        CGColorSpaceRelease (colourSpace);
    }
    
    ~CoreGraphicsImageSimple()
    {
        freeCachedImageRef();
        CGContextRelease (context);
    }
    
    
    
    //==============================================================================
    CGContextRef context;
    CGImageRef cachedImageRef;
    HeapBlock<uint8> imageData;
    int pixelStride, lineStride;
    
private:
    void freeCachedImageRef()
    {
        if (cachedImageRef != 0)
        {
            CGImageRelease (cachedImageRef);
            cachedImageRef = 0;
        }
    }
    
    static CGBitmapInfo getCGImageFlags (const Image::PixelFormat& format)
    {
#if JUCE_BIG_ENDIAN
        return format == Image::ARGB ? (kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big) : kCGBitmapByteOrderDefault;
#else
        return format == Image::ARGB ? (kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little) : kCGBitmapByteOrderDefault;
#endif
    }
    
    JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (CoreGraphicsImageSimple)
};

Image UIImageConverter::convertToJuceImage(void *uiImage_)
{
    UIImage* uiImage = (UIImage*) uiImage_;
    CGImageRef loadedImage = uiImage.CGImage;
    
    if (loadedImage != 0)
    {
        CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo (loadedImage);
        const bool hasAlphaChan = (alphaInfo != kCGImageAlphaNone
                                   && alphaInfo != kCGImageAlphaNoneSkipLast
                                   && alphaInfo != kCGImageAlphaNoneSkipFirst);
        
        Image image (NativeImageType().create (Image::ARGB, // (CoreImage doesn't work with 24-bit images)
                                               (int) CGImageGetWidth (loadedImage),
                                               (int) CGImageGetHeight (loadedImage),
                                               hasAlphaChan));
        
        CoreGraphicsImageSimple* const cgImage = static_cast<CoreGraphicsImageSimple*> (image.getPixelData());
       
        jassert (cgImage != nullptr); // if USE_COREGRAPHICS_RENDERING is set, the CoreGraphicsImageSimple class should have been used.
        
        CGContextDrawImage (cgImage->context, CGRectMake(0, 0, image.getWidth(), image.getHeight()), loadedImage);

        CGContextFlush (cgImage->context);
        
#if ! JUCE_IOS
        CFRelease (loadedImage);
#endif
        
        // Because it's impossible to create a truly 24-bit CG image, this flag allows a user
        // to find out whether the file they just loaded the image from had an alpha channel or not.
        image.getProperties()->set ("originalImageHadAlpha", hasAlphaChan);
        return image;
    }
    
    return Image::null;
}
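
Typical usage is to pull the host icon from the device and convert it before showing it in your UI -- a sketch only, where deviceManager and hostIconComponent are assumed to exist in your app:

    // Sketch: converting the IAA host icon into a juce::Image for display.
    if (AudioIODevice* device = deviceManager.getCurrentAudioDevice())
    {
        if (device->isHostConnectedViaIAA())
        {
            Image hostIcon = UIImageConverter::convertToJuceImage (device->getHostIcon());

            if (hostIcon.isValid())
                hostIconComponent.setImage (hostIcon);   // e.g. a juce::ImageComponent
        }
    }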

This is really interesting, but it would need quite a bit of redesigning to become part of the library. For example, base classes like AudioIODevice mustn't contain any platform-specific stuff; it would need to be handled more generically. But thanks for sharing - this is a great start!

Wow, I haven't looked at this thread for a while! Glad to see you guys have made some progress and have built on and improved what I posted. I'm adding a to-do to come and check through this code for my next update.

Has anyone managed to get Inter-App Audio MIDI support working? Some apps like Tabletop and Cubasis can use IAA to send MIDI, which is something I haven't managed to figure out yet. Supporting that offers some nice benefits in Cubasis for example (you can use apps a bit more like you would use a plug-in). 

Hi ndika,

I made it work. In my Audiobus init method I just call this code to register the callback (you need the audioUnit to do that):
I assume you could put this code inside Juce too if you don't have access to the audioUnit (see my comments above for how I pass it out of Juce).
 

    AudioOutputUnitMIDICallbacks callBackStruct;
    callBackStruct.userData = yourPluginPtr;
    callBackStruct.MIDIEventProc = MIDIEventProcCallBack;
    callBackStruct.MIDISysExProc = NULL;
    AudioUnitSetProperty (audioUnit,
                          kAudioOutputUnitProperty_MIDICallbacks,
                          kAudioUnitScope_Global,
                          0,
                          &callBackStruct,
                          sizeof(callBackStruct));

  return self;
}

 

And below it I implemented the callback C function. You can use "userData" to pass in the instance of your plugin or whatever else you need; it depends on how your code is structured.

Personally, my "onMidiDispatch" method is also called after the Juce callback to buffer the MIDI, so calling it directly here is a "shortcut".

void MIDIEventProcCallBack (void* userData, UInt32 inStatus, UInt32 inData1, UInt32 inData2, UInt32 inOffsetSampleFrame)
{
    yourPlugin* engine = (yourPlugin*) userData;

    const unsigned char byte1   = (unsigned char) inData1;
    const unsigned char byte2   = (unsigned char) inData2;
    const unsigned char channel = (unsigned char) (inStatus & 0x0F);
    const unsigned char status  = (unsigned char) (inStatus & 0xF0);

    engine->onMidiDispatch (0, channel, status, byte1, byte2, inOffsetSampleFrame);
}
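
If the rest of your engine is Juce-based, one way to get those bytes back into the normal MIDI path is to push them into a MidiMessageCollector. A sketch only -- the exact onMidiDispatch signature and the midiCollector member are assumptions:

    // Sketch: forwarding the IAA MIDI bytes into a juce::MidiMessageCollector.
    // (Remember to call midiCollector.reset (sampleRate) before audio starts.)
    void yourPlugin::onMidiDispatch (int /*port*/, unsigned char channel, unsigned char status,
                                     unsigned char byte1, unsigned char byte2, UInt32 /*offsetSampleFrame*/)
    {
        const MidiMessage message (status | channel, byte1, byte2,
                                   Time::getMillisecondCounterHiRes() * 0.001);
        midiCollector.addMessageToQueue (message);
    }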

I hope this will help you,
Bastien

Thanks Bastien! I will take a look at this when I have a moment and try to get it working for me. 

Hi, 

Is there any (official) way to actually get Audiobus 2 working in an iOS project? What's the exact procedure, from scratch, starting from a working Juce iOS audio plugin and using Xcode 7 / iOS 9?

Thank you