Issues With getSharedAudioDeviceManager

I was trying to add simple recording functionality to an application I’m developing, using ‘AudioRecordingDemo.h’ as a guide. I had an issue when declaring the audioDeviceManager (code here:
AudioDeviceManager& audioDeviceManager { getSharedAudioDeviceManager (1, 0) };
) because getSharedAudioDeviceManager was not declared. I found the code for getSharedAudioDeviceManager on the JUCE GitHub page and added the ‘getCurrentDefaultAudioDeviceName’ and ‘getSharedAudioDeviceManager’ functions to my project. I currently have an issue with this section of code from the ‘getSharedAudioDeviceManager’ function:

    if (numInputChannels > 0 && ! RuntimePermissions::isGranted (RuntimePermissions::recordAudio))
    {
        RuntimePermissions::request (RuntimePermissions::recordAudio,
                                     [numInputChannels, numOutputChannels] (bool granted)
                                     {
                                         if (granted)
                                             getSharedAudioDeviceManager (numInputChannels, numOutputChannels);
                                     });

        numInputChannels = 0;
    }

specifically the line right under if (granted). I get the error message, “‘this’ cannot be implicitly captured in this context”. Would anyone know of any way to resolve this issue? Thanks in advance!

If you look at line 297 of the AudioRecordingDemo.h file, you’ll see the following lines:

 // if this PIP is running inside the demo runner, we'll use the shared device manager instead
#ifndef JUCE_DEMO_RUNNER
 AudioDeviceManager audioDeviceManager;
#else
 AudioDeviceManager& audioDeviceManager { getSharedAudioDeviceManager (1, 0) };
#endif

You can see from the comment that the getSharedAudioDeviceManager() method should only be used when the example is running in the DemoRunner application (found in examples/DemoRunner). When you are running this demo by itself, JUCE_DEMO_RUNNER won’t be defined and therefore only the AudioDeviceManager audioDeviceManager; line will be compiled.

You can safely remove any references to getSharedAudioDeviceManager() and just use the declared AudioDeviceManager object.

Thanks for your reply ed95. I made the appropriate changes and the code compiles now. However, when I record, the files saved to my Documents folder do not have any sound on them, although they are the same length as the time spent recording. The liveAudioScroller also doesn’t display any waveforms. Do you know why this might be?

Just as a follow-up to my previous question, I tried creating a separate project just for the simple recording application, using only the code from the demo runner, and I experienced the same thing. The liveAudioScroller remained blank and the .wav files generated had no audio. The issue may be because I’m on a Mac running Xcode 10.

Does your program actually finish?
A common problem when writing WAV files is that the length is stored in the file header. Only once writing has finished and the WavAudioFormatWriter has been properly destructed is the actual length written into the header.
If your file shows length 0 but contains data, that means that either:

a) your program was interrupted/killed or
b) you leaked your writer

Hope that points you in the right direction…

I believe that the program finishes because the length of the recorded files is not 0. I used an online tool to read the metadata of a file recorded with the recording application, and it showed that the duration was 1.6 secs (which was about how long I recorded for). I believe the issue may be that the code is not accessing the microphone, because the liveAudioScroller also doesn’t show anything while the program is running.

Sorry, I misunderstood.
You can DBG the RMS of the block when writing (AudioBuffer::getRMSLevel()) so you know whether the buffer was actually silent.

And if you are on Mojave, I read that you need to ask the OS for permission to access the microphone, similar to iOS. There is a switch in the Xcode exporter.

Thanks a lot for the help daniel. I just wanted to clarify what you meant by a switch in the Xcode exporter. Is this a Projucer feature or a feature in Xcode? I tried looking it up, but I can only find how to do this in Xcode for iOS applications.

Make sure the ‘Microphone Access’ option is enabled in the Projucer’s Xcode exporter settings and re-save your project.

You’ll then get a dialog asking if you want to allow microphone access when your app starts and, after confirming, you should be able to record audio.

I have an old project that diverged greatly from the one generated by the Projucer.

How do I enable this manually?

I don’t know if that is all it does, but it adds this key to the Info-App.plist:

    <key>NSMicrophoneUsageDescription</key>
    <string>This app requires audio input. If you do not have an audio interface connected it will use the built-in microphone.</string>

It kinda works, meaning that (macOS is Mojave):

  • if I compile with Xcode 9 and launch the app by double clicking it, it works.
  • if I compile with Xcode 10 and launch the app in the debugger, it works.
  • if I compile with Xcode 9 and launch the app in the debugger, it does not work, i.e. it does not prompt for mic access and the audio input is silent.

The problem is that, this being an old app, we still need to provide 32-bit support for long-standing users, and are therefore stuck building it with Xcode 9.
I know that I can launch it manually and then attach the debugger later, but that is tedious and does not catch asserts on startup.

Any idea on how to make it work even when it is launched from within Xcode 9 for debugging?

Did you ever figure this out?

Not really: in the end we chose to continue development with Xcode 9 on a machine running 10.13 for a few months, until everything had been upgraded to target 64-bit only. That meant Xcode 10 on 10.14, which averted the problem.
