My name is Daire, I just joined the forum. I would like to develop a simple multitrack recorder using JUCE. I don’t need any effects or DSP, just the ability to record, play back and loop up to 4 WAV files. However, I have only basic C++ experience, and have never programmed for audio before. Is it reasonable that I could use the JUCE libraries to develop this multitrack application, and how difficult would it be for a beginner like myself? I really want to learn how to start this project, so any comments or suggestions would be greatly appreciated.
There’s no reason why you shouldn’t be able to do that.
Since you’re new to JUCE and somewhat new to C++, you might want to spend some time experimenting with the individual components you’ll need to use before starting your actual project.
Typically, actually doing things is not hard; the trouble starts when you bring things together under a user interface. An ill-thought-out design, or inexperience with a component (visual, messaging, or audio), can quickly turn what seemed like a decent application design into a nasty mess that’s hard to maintain or extend.
Why not just try to get a hardcoded audio file to play through JUCE’s audio classes first? Once you’ve got that working, a simple single-tap delay effect on live audio will cover pretty much everything you need to know about recording, buffering, and streaming audio.
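To make that concrete, here is a minimal single-tap delay sketch in plain C++ (no JUCE; the class and names are my own invention). The circular-buffer read/write it performs is the same pattern that underlies recording, buffering, and streaming:

```cpp
#include <cstddef>
#include <vector>

// A single-tap delay: writes incoming samples into a circular buffer
// and mixes in whatever was written delaySamples ago.
class SingleTapDelay
{
public:
    explicit SingleTapDelay (std::size_t delaySamples)
        : buffer (delaySamples, 0.0f), writePos (0) {}

    float processSample (float input)
    {
        const float delayed = buffer[writePos];  // sample written delaySamples ago
        buffer[writePos] = input;                // "record" the new sample
        writePos = (writePos + 1) % buffer.size();
        return input + 0.5f * delayed;           // dry signal + attenuated tap
    }

private:
    std::vector<float> buffer;
    std::size_t writePos;
};
```

Feeding it one sample at a time from an audio callback gives you the dry signal plus an attenuated copy delaySamples later, which exercises exactly the record/read-back cycle a multitrack recorder needs.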
As a side project, start working with the user interface components. Don’t try to write an actual UI for your app, though, until a) you’re comfortable with getting messages cleanly between components and your main code, and b) you know how your audio code is going to work and what it needs to pass to and from the user interface. Otherwise you’ll just end up hacking and rewriting code, which is frustrating, and not a useful or fun way to learn coding.
I am definitely an inexperienced noob. I just need to ask some questions about audio stuff. I’m trying to create a metronome using a timer and looping a WAV file. However, whenever the timerCallback is called, the whole thing crashes. There just seem to be so many classes associated with audio that I don’t know what I need to do a simple thing like play audio. I have this in the callback function:
File audioFile = T("Click03");
AudioFormatReader* reader = formatManager.createReaderFor (audioFile);
currentAudioFileSource = new AudioFormatReaderSource (reader, true);
It may seem like a mess to everyone, and I’m sure it is, but I’ve more or less got lost in the code.
Not enough info. I’d need to see your actual callback code, as well as details on when/how formatManager and transportSource are being instantiated.
From the code you’ve provided above, you would appear to be loading the sample each time the callback fires, and transportSource would appear to be a global, so threading issues would be likely.
Regarding the previous posted code:
[code]currentAudioFileSource = new AudioFormatReaderSource (reader, true);
transportSource.setSource (currentAudioFileSource, 32768, reader->sampleRate);[/code]
Assuming transportSource is an AudioTransportSource, there’s no AudioIODeviceCallback derived class to stream the samples in the audio file source to. You need something like the following:
currentAudioFileSource = new AudioFormatReaderSource (reader, true);
transportSource.setSource (currentAudioFileSource, 32768, reader->sampleRate);
audioSourcePlayer.setSource (&transportSource);      // AudioSourcePlayer is an AudioIODeviceCallback
deviceManager.addAudioCallback (&audioSourcePlayer); // stream it to the audio device
Just one thing to confuse you even more - the Timer class is really a UI class, and is way too inaccurate to trigger a metronome, even a simple one.
You’d never use any kind of separate timer object for a metronome - the ticks need to begin at the correct sample in the output, so your audio processing routine needs to count samples, and introduce the sample at exactly the right point, not just at the start of a buffer, as you’d get by just triggering it from another thread. I’d suggest using a Sampler audio source, and writing your own audio source to feed it midi events with exactly the right timestamp to trigger the sounds.
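A plain C++ sketch of the sample-counting idea (my own names, not a JUCE API): convert the tempo to samples per beat, and for each callback block ask whether a tick falls inside it, and at what offset:

```cpp
#include <cmath>

// Counts samples across callbacks; ticks fall at multiples of samplesPerBeat.
struct SampleClock
{
    double sampleRate;         // e.g. 44100.0
    double bpm;                // beats per minute
    long long samplePos = 0;   // total samples rendered so far; the callback
                               // advances this by blockSize after each block

    double samplesPerBeat() const { return sampleRate * 60.0 / bpm; }

    // Offset (in samples) of the next tick inside the block of blockSize
    // samples starting at samplePos, or -1 if no tick falls in this block.
    int nextTickOffset (int blockSize) const
    {
        const double spb = samplesPerBeat();
        // smallest tick position >= samplePos
        const long long tick =
            (long long) std::llround (std::ceil ((double) samplePos / spb) * spb);
        return tick < samplePos + blockSize ? (int) (tick - samplePos) : -1;
    }
};
```

The audio callback would render the tick sound starting at that offset, then advance samplePos by the block size, so each tick lands on its exact sample regardless of buffer boundaries.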
Thank you very much jules! Maybe this post of yours gave me just the hint I’ve been looking for for weeks. For a few weeks now I’ve been writing a little drum computer (or something alike) in my free time, and I need good timing for this:
WAV files are loaded, and parts of them are played back at different positions (switchable by a radio group) in a loop.
Right now I try to handle this with a thread. In the run() method of the thread I check which playing position is activated and use the setPosition method of the AudioTransportSource. But the timing is not perfectly stable; I’m getting short little glitches. I thought this may be due to the actual audio buffer size varying? But maybe a thread is simply not a suitable way of triggering with solid timing.
If counting samples is the right way of doing it, where would be the place to count (in the AudioSourcePlayer?) and … how? (noob question, sorry)
I did not have a closer look at the sampler classes yet, because I thought I could solve all my problems with an AudioTransportSource; with a sampler I could not change the playback position of the wave file, right?
Yes, using a separate thread is completely the wrong approach. You need to actually do all your timing and processing in the audio process callback itself. But like I say, using a sampler takes away the really nasty issues of mixing together all the active sounds, and you can just give it events to tell it when to start and stop. But you still need to create those events in the audio callback.
Thanks! I’ll try to do it this way.
A further question:
It seems to me that all of the processing functions, like the audioDeviceIOCallback of the AudioSourcePlayer or the getNextAudioBlock of the AudioSource, are block based. So to trigger my sounds in these functions I’ll be limited to the block size anyway (in terms of time accuracy)? How can I achieve sample accuracy?
By starting your sample part of the way along a block!
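A sketch of what that means in code (plain C++, my own names): mix the triggered sound into the output starting at the in-block offset where it is due, rather than at index 0:

```cpp
#include <algorithm>

// Mix `sound` into the output block starting startOffset samples in.
// Returns how many samples of the sound were written; the caller would
// continue writing the remainder at offset 0 of the following blocks.
int startSoundInBlock (float* block, int blockSize,
                       const float* sound, int soundLen,
                       int startOffset)
{
    const int n = std::min (soundLen, blockSize - startOffset);
    if (n <= 0)
        return 0;                          // sound doesn't start in this block

    for (int i = 0; i < n; ++i)
        block[startOffset + i] += sound[i]; // mix, don't overwrite other sources

    return n;
}
```

Because the copy begins mid-block, the sound's onset lands on the exact sample it is due, not on the nearest buffer boundary.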
I’ve taken the advice of using a sampler, but how do I send MIDI messages to trigger it without an on-screen keyboard?
I understand that this is probably a stupid question, and I’m wondering if there’s a level below “uber weenie” that can effectively convey my inexperience with OO programming.
[quote]by starting your sample part of the way along a block![/quote]
I tried this by using simple positionable audio transport sources (no sampler yet). So I wrote a class that contains two of these (which shall play in parallel) plus a mixer, and is derived from AudioIODeviceCallback. In the callback I count the samples and compare the count to the number of samples in an 8th note (this is the quantization I want to use). If the count is still smaller than this number, I call the audioDeviceIOCallback of the AudioSourcePlayer. But when the count reaches the number of samples in an 8th note, I call it twice:
once with numSamples equal to however many samples are still missing until completion of the 8th note; then I do my stuff (starting, stopping, setNextReadPosition of my transport sources); then I call the AudioSourcePlayer’s audioDeviceIOCallback a second time, this time with the rest of the samples still needed in this block.
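The two-call splitting described above generalises to a loop that renders sub-ranges up to each event boundary; here is a plain C++ sketch (my own names, with render() standing in for the real audio device callback):

```cpp
#include <algorithm>
#include <functional>

// Walk one audio block, rendering up to the next event boundary, applying
// the event, then continuing — however many events fall inside the block.
void processBlock (int blockSize,
                   int& samplesToNextEvent,   // carried over between blocks
                   int eventInterval,         // e.g. samples in an 8th note
                   const std::function<void (int start, int num)>& render,
                   const std::function<void()>& onEvent)
{
    int pos = 0;
    while (pos < blockSize)
    {
        const int run = std::min (samplesToNextEvent, blockSize - pos);
        render (pos, run);                 // render this sub-range
        pos += run;
        samplesToNextEvent -= run;

        if (samplesToNextEvent == 0)
        {
            onEvent();                     // start/stop/reposition sources here
            samplesToNextEvent = eventInterval;
        }
    }
}
```

This handles any block size, including blocks containing several boundaries or none at all, with the leftover count carried into the next callback.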
But I get a small crackling noise on each 8th note when playing this. I’m not that surprised that this happens (that would have been too easy!), but I’m wondering why exactly it happens, and what I can do about it.
I guess I need some prebuffering of some kind? (I opened the audio files with prebuffering of 30000-odd samples, like I’ve seen in the JUCE audio demo, but I guess this is not what I need for my … unusual way of playback?)
I’m still new to all this, so thanks for any advice.
Ok, ok… You’re clearly all floundering a bit here!
Look in the demo code at the SynthAudioSource class - see how it’s got a getNextAudioBlock() method? Right, in there you can see that the messages get pulled off the midiCollector object. The on-screen keyboard has been adding messages to the midiCollector, but you can also add them directly yourself by calling midiCollector.addMessageToQueue(). If you do so at the start of the getNextAudioBlock method, you can timestamp your messages with the sample number you want them to start at, and the synth will take care of all the polyphony and other messing about.
Thanks for your hint. I see that this is probably the best approach. I started implementing it, but I’m a little unsure about the timestamp of the MIDI messages I will produce. Is it simply the number of samples into the future that I want the note to be played (this is how I understood you), or is it time based (as e.g. in the MidiMessageCollector documentation, where messages seem to be timestamped with their time of arrival)? I’m still browsing the comments in the MIDI classes, but they say “timestamp is application specific” or something :).
Yes - the timestamp in this case is the number of samples along the audio buffer that’s getting filled.
Me again (and I hope you’re not tired of my never-ending questions…).
I did some work and experimented for hours with the MIDI messages, adding them to the MidiMessageCollector queue, but obviously I still was not able to give them a correct timestamp. Each note I add to the collector starts playing immediately. So I tried both: giving it the timestamp samplesToStart (the position in samples in the buffer where I want it to start), and Time::getMillisecondCounterHiRes() + samplesToStart / 44100 (which is what I saw in the implementation of the MidiMessageCollector). But nothing works.
Nevertheless, I did manage to add messages timestamped with samplesToStart directly to the MidiBuffer. This works fine, but it’s no proper solution, because some time later I will want to add messages which are further in the future than the buffer size.
Probably I’m doing something significantly wrong with the MidiMessageCollector. (I checked the forum and the audio demo several times, but the demo only uses the keyboard component, which by design starts playing immediately, so no hint for me there.)
By the way, if that’s helpful: I use the JUCE 1.46 release.
Thank you again.
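On the “further in the future than the buffer size” point: one workable sketch (plain C++, my own stand-in, not a JUCE class) is to store pending events as samples-from-now, fire those that fall inside the current block at their in-block offset, and move the rest closer by numSamples each callback:

```cpp
#include <utility>
#include <vector>

// Holds note events scheduled any number of samples ahead, across blocks.
struct PendingEvents
{
    std::vector<std::pair<int,int>> pending;   // (samplesFromNow, noteNumber)

    void add (int samplesFromNow, int note)
    {
        pending.push_back ({ samplesFromNow, note });
    }

    // Returns the (offsetInBlock, note) pairs due within the next numSamples;
    // everything else is kept, shifted numSamples closer to "now".
    std::vector<std::pair<int,int>> collectForBlock (int numSamples)
    {
        std::vector<std::pair<int,int>> due, later;
        for (const auto& e : pending)
        {
            if (e.first < numSamples)
                due.push_back (e);                               // fires this block
            else
                later.push_back ({ e.first - numSamples, e.second }); // still pending
        }
        pending = std::move (later);
        return due;
    }
};
```

The offsets it returns are exactly the buffer-relative sample timestamps a per-block MidiBuffer wants, so scheduling far ahead and sample accuracy are no longer in conflict.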
It’s weird. I’ve gone back to this trying to sort it out, but I keep getting the same problem. I have an audio file of a beep I want to use as a sound option for a metronome. I count samples in my audio callback, and then at a certain sample use addMessageToQueue() to add to a MidiBuffer. I use a MidiMessage with a noteOn, a channel 1 byte, a noteNumber byte (60, the fundamental note of the sample), and a velocity of 70.
void Metronome::getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
{
    message = new MidiMessage (144, 74, 100);
    midiCollector.removeNextBlockOfMessages (incomingMidi, bufferToFill.numSamples);
    synth.renderNextBlock (*bufferToFill.buffer, incomingMidi, 0, bufferToFill.numSamples);
}
No sound comes out, but I know it’s counting. Am I going about this the wrong way, or is it a simple problem?
Looks like a very basic programming error to me. Try stepping through your code in the debugger and it should be blindingly obvious…
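For what it’s worth, the likely culprit in the snippet above is that the MidiMessage is constructed but never actually added to the collector (and never given a timestamp), so removeNextBlockOfMessages hands the synth an empty buffer. A minimal mock of that handshake in plain C++ (my own stand-ins, not the JUCE classes) shows why that produces silence:

```cpp
#include <utility>
#include <vector>

// Mock collector: a (sampleOffset, noteNumber) message only reaches the
// synth if it is queued before the block is rendered.
struct MockCollector
{
    std::vector<std::pair<int,int>> queue;

    void addMessageToQueue (int sampleOffset, int note)
    {
        queue.push_back ({ sampleOffset, note });
    }

    std::vector<std::pair<int,int>> removeNextBlockOfMessages()
    {
        std::vector<std::pair<int,int>> block;
        block.swap (queue);    // hand over and clear, like the real collector
        return block;
    }
};

struct MockSynth
{
    int notesRendered = 0;

    void renderNextBlock (const std::vector<std::pair<int,int>>& midi)
    {
        notesRendered += (int) midi.size();   // a real synth would start voices here
    }
};
```

Constructing a message without queuing it leaves the handed-over block empty, which is exactly the “no sound, but it’s counting” symptom.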