Audio-based drawing

Hey guys,
I am starting to delve into the audio processing caverns, and I am not sure where I should be looking right now. I found this thread: http://www.rawmaterialsoftware.com/viewtopic.php?f=8&t=9780 which gave me some general direction, but I’m not sure how I should be using the classes.
Here’s the deal. I am doing some 3D rendering, and I would like parts of the scene to be scaled depending on the beat. Once I can get the data for the current block of samples, I could average it, take the highest float in the block, etc., and interpret that value as the scale. I’m not sure what the appropriate way to do this is:
READ THE FILE

AudioFormatReader* reader = formatManager.createReaderFor (file);

if (reader != nullptr)
{
    currentAudioFileSource = new AudioFormatReaderSource (reader, true);

    // ..and plug it into our transport source
    transportSource.setSource (currentAudioFileSource,
                               32768,              // tells it to buffer this many samples ahead
                               &thread,            // this is the background thread to use for reading-ahead
                               reader->sampleRate);

    transportSource.setPosition (0);
    transportSource.start();
}

// Start timer to check for the current progress of what I'm playing. (UGLY, is there another way???)
startTimer (1000 / 30);
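
For context, the snippet above relies on a handful of members that aren’t shown. Here’s a minimal sketch of how they might be declared and wired up, keeping the member names used above; the deviceManager/sourcePlayer part is only an assumption about how the transport’s output reaches the audio device, and the OpenGL and file-chooser details are omitted:

// A minimal sketch (assumptions, not my actual class) of the surrounding setup.
class OpenGLCanvas  : public Component,
                      private Timer
{
public:
    OpenGLCanvas()
        : thread ("audio file reader")
    {
        formatManager.registerBasicFormats();             // wav, aiff, ...
        thread.startThread();                             // read-ahead thread for the transport

        deviceManager.initialise (0, 2, nullptr, true);   // no inputs, stereo output
        sourcePlayer.setSource (&transportSource);        // transport -> device
        deviceManager.addAudioCallback (&sourcePlayer);
    }

    ~OpenGLCanvas()
    {
        deviceManager.removeAudioCallback (&sourcePlayer);
        sourcePlayer.setSource (nullptr);
        transportSource.setSource (nullptr);
    }

private:
    void timerCallback() override {}                      // the real one is shown below

    AudioDeviceManager deviceManager;                     // opens the audio hardware
    AudioFormatManager formatManager;                     // creates readers for audio files
    TimeSliceThread thread;                               // background read-ahead thread
    AudioTransportSource transportSource;                 // positionable playback source
    AudioSourcePlayer sourcePlayer;                       // feeds the transport into the device callback
    ScopedPointer<AudioFormatReaderSource> currentAudioFileSource;
};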

Then, I do this in my timer callback.


void OpenGLCanvas::timerCallback()
{
    // Update playing audio bar
    if (transportSource.isPlaying())
    {
        m_musicProgress = transportSource.getCurrentPosition() / transportSource.getLengthInSeconds();

        // Get the next audio block and calculate the scale based on an average.
        AudioSourceChannelInfo info;
        transportSource.getNextAudioBlock (info);

        float* channelLeft  = info.buffer->getSampleData (0);
        float* channelRight = info.buffer->getSampleData (1);

        float avg = 0;

        for (int i = 0; i < info.numSamples; i++)
        {
            avg += channelLeft[i]  / (info.numSamples * 2);
            avg += channelRight[i] / (info.numSamples * 2);
        }

        printf ("average: %4.4f\n", avg);
        m_scale = avg;
    }
}

So this crashes, and I bet it’s because I’m doing something terribly wrong, and possibly annoying to JUCE as well. What would be the best way to implement this? Is there somewhere I can start reading about audio processing in general? How about JUCE specifically?

I’ve read your post a few times now and I’m still unclear as to what you are asking. Your code just confuses me more :slight_smile:
Are you writing a plugin or a standalone app? Is it just drawing you’re doing, or are you streaming/playing back audio too?

I’m sorry it confused you :(. I’m making an application, not an audio plugin, but since this forum contains so many audio devs, I decided to post this here as I thought this would be a better fit than the General Forum.
At the moment, my application just plays audio from a file. What I want is that while it’s playing back this audio, I get information about the waveform that’s being played and use it to modify my scene. I’ll give you an example of what I want to do:

So, reading up on audio processing documentation, what I think I need (and this is where I need help) is a way to get the sample callback, which will give me a float array of what’s being played in a specific timeframe. I’ll use this float array to calculate how it affects my scene.

I am not sure what classes I need to do this. I am currently using an AudioTransportSource for playback, but I don’t know how to get at the audio data frequently enough to use it for my scene transformations. Does this make sense?

Take a look at the JUCE demo. That contains the basics of how to get at the samples. You probably want to combine the features in the playback demo (loading and playing a file) and the recording demo, specifically the live input display. You should be able to tell from that how to get basic sample data out of an audio stream and find peaks etc.
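
To sketch what that combination might look like (purely an illustration, not the demo code; LevelMeteringSource and meteringSource below are made-up names): rather than calling getNextAudioBlock from the timer — which is most likely what crashes above, since the default-constructed AudioSourceChannelInfo has no buffer behind it, and the message thread shouldn’t be pulling blocks that belong to the audio callback anyway — you can slip a small pass-through AudioSource between the transport and whatever feeds it to the device (an AudioSourcePlayer, say). The audio thread measures each block as it goes by, and the timer only reads the stored level:

#include <atomic>
// (plus the usual JUCE header for AudioSource, jmax, etc.)

// A pass-through AudioSource: forwards the audio unchanged, but remembers the
// peak level of every block that passes through it on the audio thread.
class LevelMeteringSource  : public AudioSource
{
public:
    explicit LevelMeteringSource (AudioSource& sourceToWrap)  : source (sourceToWrap) {}

    void prepareToPlay (int samplesPerBlockExpected, double sampleRate) override
    {
        source.prepareToPlay (samplesPerBlockExpected, sampleRate);
    }

    void releaseResources() override
    {
        source.releaseResources();
    }

    void getNextAudioBlock (const AudioSourceChannelInfo& info) override
    {
        source.getNextAudioBlock (info);    // runs on the audio thread

        float peak = 0.0f;

        for (int ch = 0; ch < info.buffer->getNumChannels(); ++ch)
            peak = jmax (peak, info.buffer->getMagnitude (ch, info.startSample, info.numSamples));

        level.store (peak);                 // publish for the GUI/timer thread
    }

    // Safe to call from the message thread (e.g. inside timerCallback()).
    float getCurrentLevel() const noexcept  { return level.load(); }

private:
    AudioSource& source;
    std::atomic<float> level { 0.0f };
};

Hooking it up would then mean constructing a LevelMeteringSource around the transport and pointing the source player at it (sourcePlayer.setSource (&meteringSource);) instead of at the transport directly, after which the timer callback shrinks to reading the published value, e.g. m_scale = 1.0f + meteringSource.getCurrentLevel();. The important point is the division of labour: the audio thread does the measuring while it already has the samples, and the GUI thread only ever reads a single float, so nothing fights over the stream. Peak-per-block is crude — you’d probably smooth it over a few blocks, and real beat detection would need something frequency-aware — but it’s enough to get the scene reacting.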

You sir, are a gentleman and a scholar. Thanks for shedding some light on this for me, I’m finally able to do something! Thanks again man, really. :mrgreen: