Reading and playing audio from soundcard

Hello… I have searched this forum for an easy tutorial on reading a signal from my soundcard and outputting it to the PC speakers, but I didn’t find any “tips for dummies” on how to do this.

A very high-level algorithm would look like the code below:

[code]while (true)
{
    buffer = readFromLineIn();
    doSomeProcessing (&buffer);
    play (buffer);
}[/code]

Does anyone have a sample code which performs this task?

Thanks!

The JuceDemo example application has code in AudioDemo.cpp that reads from input, which may or may not be line in. It doesn’t play the input (probably because many people have Stereo Mix selected as the recording input, which would cause horrible feedback), but the code to do so should be there.

Thanks, Warmonger.

Now I’m trying to understand the AudioDemo.cpp code, since it has many more features than I need. I’m looking for the function that gets the audio signal from the soundcard and sends it to the output.

I didn’t understand the flow of actions that makes an input audio signal get played on my speakers.

What do I need to instantiate a “sound processor”? After reading and re-reading the AudioDemo.cpp code, I presume I need to perform the following steps:

  • AudioDeviceManager audioDeviceManager; // instance declaration
  • audioDeviceManager.initialise (1, 2, 0, true); // sets start parameters
  • audioDeviceManager.setAudioCallback(this); // sets a callback object that will handle audio events.

The setAudioCallback function takes an AudioIODeviceCallback argument, which is an abstract class with functions that must be implemented in a subclass. Apparently, the function that handles the audio data is:

[code]void AudioIODeviceCallback::audioDeviceIOCallback (const float** inputChannelData,
                                                   int totalNumInputChannels,
                                                   float** outputChannelData,
                                                   int totalNumOutputChannels,
                                                   int numSamples)[/code]

Now I’m trying to discover how can I handle these parameters to do sound processing. Tips are welcome!

I’m not sure I can explain it any better than the JUCE documentation does. Just search for the audioDeviceIOCallback function and read the description of its parameters.

Thanks for the answer, Warmonger. I had already read the documentation for the AudioIODeviceCallback class, but I still have some gaps in my understanding of how it works.

In the example code there are three kinds of audio sources:

  1. a MIDI input device;
  2. an audio file;
  3. the default recording device (from the Windows mixer recording settings).

The third source (the one I need to deal with) is sent to an object that takes the samples and draws them as a waveform. It isn’t played like the first and second sources.

From the function below I could get some information about how to “bind” an input (an audio file, in this case) to the output device:

[code]void filenameComponentChanged (FilenameComponent*)
{
    // this is called when the user changes the filename in the file chooser box

    File audioFile (fileChooser->getCurrentFile());

    // unload the previous file source and delete it..
    transportSource.stop();
    transportSource.setSource (0);
    deleteAndZero (currentAudioFileSource);

    // create a new file source from the file..

    // get a format manager and set it up with the basic types (wav and aiff).
    AudioFormatManager formatManager;
    formatManager.registerBasicFormats();

    AudioFormatReader* reader = formatManager.createReaderFor (audioFile);

    if (reader != 0)
    {
        currentFile = audioFile;

        currentAudioFileSource = new AudioFormatReaderSource (reader, true);

        // ..and plug it into our transport source
        transportSource.setSource (currentAudioFileSource,
                                   32768, // tells it to buffer this many samples ahead
                                   reader->sampleRate);
    }

    updateButtons();
}[/code]

In the call

[code]transportSource.setSource (currentAudioFileSource,
                           32768, // tells it to buffer this many samples ahead
                           reader->sampleRate);[/code]

the transportSource is set to an audio file. I think I could change it to the default recording device, so the input device would be played (in real time, I hope). I presume I need the soundcard input to be a PositionableAudioSource. Is it possible? How?

Honestly, I’m not very experienced with the audio subsystem in JUCE. But here’s what I came up with for a class that simply echoes mono input to all outputs:

[code]class ContentComponent : public Component, public AudioIODeviceCallback
{
    AudioDeviceManager audioManager;

public:
    ContentComponent()
    {
        audioManager.initialise (1, 2, 0, true);
        audioManager.setAudioCallback (this);
    }

    ~ContentComponent()
    {
        audioManager.setAudioCallback (0);
    }

    void audioDeviceIOCallback (const float** inputChannelData, int totalNumInputChannels,
                                float** outputChannelData, int totalNumOutputChannels,
                                int numSamples)
    {
        // grab the first input channel (it may be missing or null if no input is enabled)
        const float* input = (totalNumInputChannels > 0) ? inputChannelData[0] : 0;

        for (int i = 0; i < numSamples; i++)
        {
            for (int out = 0; out < totalNumOutputChannels; out++)
            {
                if (outputChannelData[out] != 0)
                    outputChannelData[out][i] = (input != 0) ? input[i] : 0.0f;
            }
        }
    }

    void audioDeviceAboutToStart (double sampleRate, int numSamplesPerBlock)
    {
    }

    void audioDeviceStopped()
    {
    }
};[/code]

Seems to work fine for me, albeit with a fair amount of latency.

Yes! It works. Thanks for your patience! This code will really help me a lot. The latency is a bit annoying, though… can you suggest anything to minimize it?

I want to apply a pitch detection algorithm to the signal (the signal is monophonic) and then output it to a MIDI device. This is college work, so performance is not too critical, but it would be nice if the latency were small, say 50 ms :stuck_out_tongue:

No trouble at all, I’m learning along with you. :slight_smile:

To reduce latency, I added an AudioDeviceSelectorComponent to the ContentComponent and set the latency through the UI. The only problem is that I can’t get input working when I select an ASIO driver; the DirectSound driver works fine, though.

Nice. Now I’m cleaning up my source code. I will try to attach the AudioDeviceSelectorComponent to the ContentComponent class as well as create a basic UI. I hope I can get ASIO working here.

Where in the ContentComponent class do I have to instantiate the AudioDeviceSelectorComponent?

I put this in the ContentComponent constructor, but it is not shown on the screen.

ContentComponent()
{
    deviceSelector = new AudioDeviceSelectorComponent (audioManager, 1, 2, 1, 2, true);
    audioManager.initialise (1, 2, 0, true);
    audioManager.setAudioCallback (this);
}

It’s a component, so you just add it to the ContentComponent with this->addAndMakeVisible(deviceSelector), and set the device selector’s size with deviceSelector->setSize(width,height).

Regarding ASIO, you’ll have to download the ASIO SDK from Steinberg, configure your IDE so that it points to the ASIO SDK path, and rebuild Juce with JUCE_ASIO defined (you can uncomment the #define in juce_Config.h for this). My earlier issue turned out to be an audio interface problem, not related to Juce.

I adapted the classes from the Juce Tutorial by haydxn to my program. In my implementation, the ContentComponent doesn’t act as a component; it’s just a base class.

So I removed the inheritance from Component, turning your ContentComponent class into just a subclass of AudioIODeviceCallback, and renamed it SoundProcessor (that is, the class only does sound processing and is not a widget).

To make the AudioDeviceSelectorComponent visible, I put this code at the MainComponent constructor (in MainComponent.h provided with Juce Tutorial):

	MainComponent()
		:	Component( T("Main Component") )
	{

		soundProcessor = new SoundProcessor();
		deviceSelector = new AudioDeviceSelectorComponent(*soundProcessor->getAudioManager(), 1, 2, 1, 2, true);
		button1 = new TextButton (T("Button 1"));
		button2 = new TextButton (T("Button 2"));
		button3 = new TextButton (T("Button 3"));
		slider = new Slider (T("Slider"));
		label = new Label (T("Label"), T("text here"));
		

		addAndMakeVisible (button1);
		addAndMakeVisible (button2);
		addAndMakeVisible (button3);
		addAndMakeVisible (slider);
		addAndMakeVisible (label);
		addAndMakeVisible (deviceSelector);

		label->setBounds (10, 10, 280, 20);
		slider->setBounds (20, 40, 260, 20);
		button1->setBounds (20, 70, 260, 20);
		button2->setBounds (20, 100, 260, 20);
		//button3->setBounds (20, 130, 260, 20);
		deviceSelector->setBounds(20,130,260,500);

		button1->addButtonListener (this);
		button2->addButtonListener (this);
		button3->addButtonListener (this);
	}

About the ASIO issues: I have ASIO4ALL installed on my PC, and my program worked well when I enabled it through the “off-line control panel” icon on my desktop.

I do not intend to learn the ASIO API at this time; I just want to get the ASIO driver optimizations. In my tests so far, I haven’t noticed any significant latency problem. Finally I have the basic environment to do sound processing.

For now, thanks for your help! Maybe I’ll need it again when I try to output to the MIDI interface.


Oops… I didn’t understand your explanation about ASIO. Are you saying that if I compile the Juce library with ASIO enabled, I’ll get some ASIO optimizations “for free”, without having to deal with the ASIO API?

If that’s true, it would be good… sorry, I’m not experienced with how ASIO works; I just have ASIO4ALL installed and hope it will give me some audio performance improvement :stuck_out_tongue:

ASIO’s not an “optimisation”, it’s a whole different set of audio drivers. Juce has support for them, but to compile it you need to install the ASIO SDK so that the header files are available. That’s why it’s disabled by default. Have a look in Juce’s ASIO code files for more info.

Ah… Thanks for the answer, Jules.

Now I’m thinking about the ASIO4ALL case… it works on top of WDM… but if the ASIO4ALL software accesses the audio hardware through WDM, why doesn’t WDM itself use the best methods? Is it a kind of dumb driver that needs some smarter software on top to do things intelligently?

I’ve just got ASIO to work. The process is fairly simple. As you know, I had ASIO4ALL (http://www.asio4all.com/) installed. However, the ASIO option did not appear in the device listing of the AudioDeviceSelectorComponent. To get it shown in the list, I had to (thanks to Warmonger’s and Jules’ help):

  • Download ASIO SDK at the Steinberg homepage:
    http://www.steinberg.net/329+M52087573ab0.html

  • Recompile Juce with ASIO (uncomment the define in juce_Config.h):
    #ifndef JUCE_ASIO
    #define JUCE_ASIO 1
    #endif

  • Recompile your app and that’s all.

There was a REALLY considerable reduction in latency. Nice!

Hi br_programmer, I’m just fiddling around with your code. How is your getAudioManager() function set up, the one used in the line that creates the AudioDeviceSelectorComponent?

Hello, jayman.

I implemented getAudioManager() in my class SoundProcessor. The code is shown below:

// SoundProcessor.h: interface for the SoundProcessor class.
//
//////////////////////////////////////////////////////////////////////

#ifndef _SOUNDPROCESSOR_H_
#define _SOUNDPROCESSOR_H_

#include <juce.h>

class SoundProcessor : public AudioIODeviceCallback
{
	AudioDeviceManager audioManager;

public:
	SoundProcessor();
	~SoundProcessor();
	AudioDeviceManager* getAudioManager();
	void audioDeviceIOCallback(const float** inputChannelData, int totalNumInputChannels, float** outputChannelData, int totalNumOutputChannels, int numSamples);
	void audioDeviceAboutToStart(double sampleRate, int numSamplesPerBlock);
	void audioDeviceStopped();

};

#endif

The implementations are shown below:

// SoundProcessor.cpp: implementation of the SoundProcessor class.
//
//////////////////////////////////////////////////////////////////////

#include "SoundProcessor.h"

//////////////////////////////////////////////////////////////////////
// Construction/Destruction
//////////////////////////////////////////////////////////////////////

SoundProcessor::SoundProcessor()
{
    audioManager.initialise (1, 2, 0, true);
    audioManager.setAudioCallback (this);
}

AudioDeviceManager* SoundProcessor::getAudioManager()
{
    return &audioManager;
}

SoundProcessor::~SoundProcessor()
{
    audioManager.setAudioCallback (0);
}

void SoundProcessor::audioDeviceIOCallback (const float** inputChannelData, int totalNumInputChannels,
                                            float** outputChannelData, int totalNumOutputChannels,
                                            int numSamples)
{
    // grab the first input channel (it may be missing or null if no input is enabled)
    const float* input = (totalNumInputChannels > 0) ? inputChannelData[0] : 0;

    for (int i = 0; i < numSamples; i++)
    {
        for (int out = 0; out < totalNumOutputChannels; out++)
        {
            if (outputChannelData[out] != 0)
                outputChannelData[out][i] = (input != 0) ? input[i] : 0.0f;
        }
    }
}

void SoundProcessor::audioDeviceAboutToStart (double sampleRate, int numSamplesPerBlock)
{
}

void SoundProcessor::audioDeviceStopped()
{
}

I hope it can answer your question.

Hi!

I tried your code with my WAV player code, hoping I could mix a recorded input with the WAV file. But for that I need to attach a new transport source to a MixerAudioSource, and that would need a PositionableAudioSource, just like your original query. Were you able to figure out how to do it that way?