AudioSampleBuffer speed ups?


#1

Hey Julian, any interest in using Accelerate Framework on the mac? I’ll probably patch it myself at some point and see what improvements it makes…

[code]
void FloatFunctions::clear(float* input, int blockCount)
{
#if JUCE_MAC
    vDSP_vclr(input, 1, blockCount);
#else
    memset(input, 0, blockCount * sizeof(float));
#endif
}

void FloatFunctions::copy(float* input, float* output, int blockCount)
{
#if JUCE_MAC
    vDSP_mmov(input, output, blockCount, 1, blockCount, blockCount);
#else
    memcpy(output, input, blockCount * sizeof(float));
#endif
}

void FloatFunctions::sum(float *input1, float *input2, float *dest, int blockCount)
{
#if JUCE_MAC
vDSP_vadd(input1,1,input2,1,dest,1,blockCount);

etc[/code]
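(For the non-Mac side of that truncated sum() — this is my own sketch, not the code from the post above — the fallback would just be a plain loop, which most compilers can auto-vectorise anyway:)

```cpp
// Sketch of a portable fallback for sum() (hypothetical, mirrors the
// vDSP_vadd call): element-wise addition of two float blocks.
void sumFallback (const float* input1, const float* input2,
                  float* dest, int blockCount)
{
    for (int i = 0; i < blockCount; ++i)
        dest[i] = input1[i] + input2[i];
}
```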

I’ve been playing around with the AudioProcessor class and have made an AudioReader that works with it. Now I was going to make a sample rate converter class (also as an AudioProcessor), but as the ProcessorGraph seems to process blocks of data (as opposed to pulling data through the processor objects), I’m stuck thinking I’ll need to make my sample rate converter hold the AudioReaderProcessor itself, and throw that object (only that object) into the graph. (So that when the graph asks for 512 samples, but I need to pull 1078 samples from the soundfile buffer to sample-rate convert down to 512, this is possible.)
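(The arithmetic behind that "pull more than you output" problem is just the rate ratio — here’s a hypothetical helper, not any JUCE API, to show the idea:)

```cpp
#include <cmath>

// Hypothetical helper: how many source samples a resampler must pull
// from its input to deliver a requested number of output samples.
int samplesNeeded (int outputSamples, double sourceRate, double targetRate)
{
    // e.g. 512 output samples at 44100 Hz pulled from an 88200 Hz file
    // needs ceil(512 * 88200 / 44100) = 1024 source samples
    return (int) std::ceil (outputSamples * sourceRate / targetRate);
}
```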

The other option would be to add “nodes” to an AudioProcessor subclass that would call through to connected processors to acquire data. (Sort of the AudioUnit approach: you put a sample rate converter and a mixer into a graph, connect the nodes of the audio units, and tell the graph to start. It renders the top object, which asks its connected AudioUnit for data, which asks its connected unit for data, and so on… so the last item in the chain is the provider, and it’ll be asked to supply the frames needed to keep everyone above it happy.)
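(A minimal sketch of that pull model, with made-up types — nothing here is JUCE or AudioUnit API — where each node holds a pointer to its upstream connection and pulls before processing in place:)

```cpp
// Hypothetical pull-model node: rendering walks down the chain to the
// provider, then each node processes the buffer in place on the way back.
struct PullNode
{
    PullNode* upstream = nullptr;
    virtual ~PullNode() = default;

    virtual void render (float* dest, int numSamples)
    {
        if (upstream != nullptr)
            upstream->render (dest, numSamples); // ask the provider first
        process (dest, numSamples);              // then do this node's work
    }

    virtual void process (float*, int) {}        // no-op by default
};
```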

Does either of these sound like the right approach? (I’m leaning towards the second approach, but that would pooch me if you came up with cool AudioProcessors, as I’d only be able to chain up nodes from MY derivative class!) (I hope I explained all that OK!)


#2

Good idea about the vDSP stuff. I remember holding out on adding that because it was only available in 10.4 (?) onwards, but it’d certainly be worth putting in there now.

…And yes, sample rate converting is the tricky bit about the processor graph architecture, and it’s the one bit I’m not sure about how to solve. Your node idea sounds like just the same way that audiosources work now, and sure, you could put a node/audiosource into a graph that uses its own connections to get its input data, but the whole clever bit about the graphs is the way that the connections and data buffers are managed by the graph rather than by the nodes themselves, so that would kind of defeat the point of having graphs.

One way I thought it might be possible to avoid the problem would be if the graph was asking for a block in terms of a length in seconds rather than samples. Then it’d need some complicated extra code so that the buffers it creates to hold the data passing between nodes would be labelled with a sample rate too, but the whole thing becomes very complex!
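(To make that concrete — this is my own illustration of the idea, not anything in JUCE — each node would convert the requested duration into samples at its own buffer's labelled rate:)

```cpp
// Hypothetical duration-based request: a node turns a length in seconds
// into a sample count at whatever rate its own buffer is labelled with.
int samplesForDuration (double seconds, double sampleRate)
{
    return (int) (seconds * sampleRate + 0.5); // round to nearest sample
}
// 0.01 s is 441 samples at 44.1 kHz but 960 samples at 96 kHz,
// which is exactly why the buffers would all need rate labels.
```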


#3

Eeeek! Especially in the case of something like a pitch slider, that sample rate is going to be constantly shifting / changing!

It’s a bit of a head-scratcher… I’m sure I’ll go down the wrong path more than once, it always seems to be the way!

tnx


#4

Just thought I’d post that I ended up using AudioSources and just wrote an AudioProcessor class that wraps the source. Super simple and works well for me!

.h

class SMAudioSourceProcessor : public AudioProcessor
{
public:
	SMAudioSourceProcessor (AudioSource* const inputSource,const bool deleteInputWhenDeleted);
	virtual ~SMAudioSourceProcessor ();

	void prepareToPlay (double sampleRate,
						int estimatedSamplesPerBlock);
	
	void releaseResources();
	void processBlock (AudioSampleBuffer& buffer,
					   MidiBuffer& midiMessages);
	
	
	const String getName() const                                { return L"AudioSource wrapper"; }

	const String getInputChannelName (const int channelIndex) const  { return String (channelIndex + 1); }
	const String getOutputChannelName (const int channelIndex) const { return String (channelIndex + 1); }
	bool isInputChannelStereoPair (int index) const             { return false; }
	bool isOutputChannelStereoPair (int index) const            { return false; }
	bool acceptsMidi() const                                    { return false; }
	bool producesMidi() const                                   { return false; }

	AudioProcessorEditor* createEditor()                        { return 0; }
	int getNumParameters()                                      { return 0; }
	const String getParameterName (int parameterIndex)          { return String::empty; }
	float getParameter (int parameterIndex)                     { return 0.0f; }
	const String getParameterText (int parameterIndex)          { return String::empty; }
	void setParameter (int parameterIndex, float newValue)      { }
	int getNumPrograms()                                        { return 0; }
	int getCurrentProgram()                                     { return 0; }
	void setCurrentProgram (int index)                          { }
	const String getProgramName (int index)                     { return String::empty; }
	void changeProgramName (int index, const String& newName)   { }
	void getStateInformation (MemoryBlock& destData)            { }
	void setStateInformation (const void* data, int sizeInBytes) { }

private:
	AudioSource* const input;
	const bool deleteInputWhenDeleted;
};

.cpp

SMAudioSourceProcessor::SMAudioSourceProcessor(AudioSource* const inputSource,const bool deleteInputWhenDeleted_) 
: input(inputSource),deleteInputWhenDeleted(deleteInputWhenDeleted_)
{
	
}
SMAudioSourceProcessor::~SMAudioSourceProcessor()
{
	if (deleteInputWhenDeleted)
		delete input;	
}
void SMAudioSourceProcessor::prepareToPlay (double sampleRate,
					int estimatedSamplesPerBlock)
{
	// Note: AudioSource::prepareToPlay takes its arguments in the
	// opposite order (samplesPerBlock, then sampleRate) to AudioProcessor.
	input->prepareToPlay (estimatedSamplesPerBlock, sampleRate);
}

void SMAudioSourceProcessor::releaseResources()
{
	input->releaseResources();

}
void SMAudioSourceProcessor::processBlock (AudioSampleBuffer& buffer,
				   MidiBuffer& midiMessages)
{
	// Wrap the graph's buffer in an AudioSourceChannelInfo so the
	// source chain can fill it in place.
	AudioSourceChannelInfo info;
	info.buffer = &buffer;
	info.startSample = 0;
	info.numSamples = buffer.getNumSamples();

	input->getNextAudioBlock (info);
}

#5

So this way I can chain multiple AudioSources together (mixer, sample rate converter, file reader) and put the top-level source into this wrapper class.