Real-time Rendering - Tweaking while recording

Hi all!

My use case is a record button that lets you record audio while tweaking the knobs of a synth or effect.
This isn't possible with Renderer::renderToFile (even when using Renderer::Parameters::realTimeRender), as rendering is not allowed whilst attached to an audio device.

Does anyone have any advice for my use case, or example code showing how to do this?

Thanks very much for any help!

You can’t really do this whilst rendering, it’s an inherently offline process.

The way you’d normally do this is by recording the output of one track to another. To do this, use AudioTrack::getWaveInputDevice() on the source track and then assign that input device to a target track.

It’s a bit trickier with TrackInputDevices as you have to get them added to the state first before you can get an instance (this should probably be improved in future).

                edit.getEditInputDevices().getInstanceStateForInputDevice (*inputDevice);

                if (auto epc = edit.getCurrentPlaybackContext())
                    if (auto instance = epc->getInputFor (inputDevice))
                        instance->setTargetTrack (destTrack, 0, true);
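Putting the two steps together, a minimal sketch might look like this (identifiers such as sourceTrack and destTrack are illustrative, and the exact setTargetTrack signature varies between Engine versions):

```cpp
// Sketch: route the output of one track into another for recording.
// Assumes a valid te::Edit with an active playback context.
auto& inputDevice = sourceTrack->getWaveInputDevice();

// Ensure the input device has state in the edit before asking for an instance
edit.getEditInputDevices().getInstanceStateForInputDevice (inputDevice);

if (auto* epc = edit.getCurrentPlaybackContext())
    if (auto* instance = epc->getInputFor (&inputDevice))
        instance->setTargetTrack (*destTrack, 0, true); // record into destTrack
```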

Another approach is to simply record the parameter changes as automation and do the rendering afterwards.
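That automation approach can be sketched roughly as follows (treat the calls as illustrative; AutomationRecordManager's exact API may differ between Engine versions):

```cpp
// Sketch: capture knob movements as automation during playback,
// then render offline afterwards with the automation applied.
auto& arm = edit.getAutomationRecordManager();
arm.setReadingAutomation (true);
arm.setWritingAutomation (true); // parameter changes are now recorded

edit.getTransport().play (false); // tweak knobs while playing...

// ...when finished:
edit.getTransport().stop (false, false);
arm.setWritingAutomation (false);
// An ordinary offline Renderer::renderToFile will now play the moves back.
```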


Thank you Dave.
I updated my code based on your remarks, but it still doesn’t work as expected; I’m probably doing something wrong. Here is my code:


	te::InputDevice& inputDevice = track1->getWaveInputDevice();

	if (auto epc = _edit.getCurrentPlaybackContext())
		if (auto sourceTrackInputDeviceInstance = epc->getInputFor(&inputDevice))
			sourceTrackInputDeviceInstance->setTargetTrack(*track2, 1, true);

void Gui::render()
{
	_edit.getTransport().stopAllTransports(_engine, false, true);

	File renderFile{ };

	File rendersDir{ File::getSpecialLocation(File::userDesktopDirectory) };
	juce::FileChooser fileChooser("Choose a location to save file...", rendersDir, "*.wav");

	if (fileChooser.browseForFileToSave(true))
	{
		renderFile = fileChooser.getResult();

		te::EditTimeRange range{ 0.0, _edit.getLength() };
		juce::BigInteger tracksToDo{ 0 };
		WavAudioFormat wavformat;
		te::Renderer::Parameters renderParams{ _edit };
		renderParams.engine = &_engine;
		renderParams.destFile = renderFile;
		renderParams.audioFormat = &wavformat;
		renderParams.time = range;
		renderParams.tracksToDo = tracksToDo;
		renderParams.realTimeRender = true;
		te::Renderer::renderToFile("Render", renderParams);
		AlertWindow::showMessageBoxAsync(MessageBoxIconType::NoIcon, "Rendered", renderFile.getFullPathName());
	}
}

Thanks a lot for your time Dave!

Sorry, you can’t use the Renderer to do this track-to-track process, you need to arm the destination track and then do a TransportControl::record() operation, just like you would for live audio/MIDI input.
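A rough sketch of that arm-and-record flow, assuming inputDevice is the source track's wave input and destTrack is the armed target (setRecordingEnabled's signature varies across Engine versions):

```cpp
// Sketch: arm the destination input and do a real transport recording.
if (auto* epc = edit.getCurrentPlaybackContext())
{
    if (auto* instance = epc->getInputFor (&inputDevice))
    {
        instance->setTargetTrack (*destTrack, 0, true);
        instance->setRecordingEnabled (true); // arm the destination
    }
}

auto& transport = edit.getTransport();
transport.record (false);        // start recording, like live audio/MIDI input
// ...tweak knobs, then:
transport.stop (false, false);   // stop; the take becomes a clip on destTrack
```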

Ok, thank you.

Just to make sure I’m going in the right direction, is it possible to:

  1. Record the output of the input track to the destination track using getWaveInputDevice() and setTargetTrack(destTrack, 0, true)
  2. Arm the destination track and do a TransportControl::record() operation
  3. Render the destination track to a file using Renderer::renderToFile with tracksToDo.setBit(destinationTrack)?

Thanks a lot for your time Dave

Yes, those steps are all correct, but step 3 (the render) needs to happen after the recording has stopped. I’m not quite sure why you would need that step though, as the “destination track” will contain a clip with the recorded file.

What additional processing would the “render” operation add?
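For what it's worth, once recording stops the new clip's file can be read straight off the destination track; a sketch, assuming destTrack is a te::AudioTrack*:

```cpp
// Sketch: find the recorded clip on the destination track and get its file.
juce::File recordedFile;

for (auto* clip : destTrack->getClips())
    if (auto* audioClip = dynamic_cast<te::AudioClipBase*> (clip))
        recordedFile = audioClip->getSourceFileReference().getFile();
```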


I have no additional processing in the “render”.
How can I get the file saved with the clip on the “destination track”?



Is the clip created automatically during recording? Here is my test code, with line 61 returning 0:
MainComponent.h (5.8 KB)

OK, it works when I call “;” before configuring the output from one track to another 🙂

It works perfectly, thanks again!
Is it possible to merge recorded clips or input tracks, to have only one final .wav file?

That’s when you’d do an offline render operation like you were originally.
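That final merge could use the static Renderer::renderToFile overload that takes a track mask, something like this sketch (the file, range and track index are illustrative):

```cpp
// Sketch: offline-render the chosen tracks into a single wav file.
juce::BigInteger tracksToDo;
tracksToDo.setBit (destTrackIndex); // one bit per track to include

te::Renderer::renderToFile ("Merge render",
                            renderFile,
                            edit,
                            { 0.0, edit.getLength() }, // whole edit
                            tracksToDo);
```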


Thank you Dave, it works perfectly.

Two small pieces of feedback:

1- In tracktion_Renderer.cpp, line 430, I think you need to add this line: r.tracksToDo = tracksToDo;

2- When I render to a file, I must have this line, otherwise the file is not updated:

	if (fileChooser.browseForFileToSave(true)) {
		renderFile = fileChooser.getResult();

		if (renderFile.existsAsFile())
			renderFile.deleteFile();   // added line


Thanks, will add them in a min.

I think that’s correct behaviour. I don’t think it’s the Renderer’s responsibility to delete the file. You should make sure the file doesn’t exist and the location is writable before you start rendering.
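A simple pre-flight check along those lines, using plain juce::File calls:

```cpp
// Sketch: verify the destination before starting the render.
auto dir = renderFile.getParentDirectory();

if (dir.createDirectory().failed() || ! dir.hasWriteAccess())
    return; // location isn't writable

if (renderFile.existsAsFile())
    renderFile.deleteFile(); // the Renderer won't overwrite an existing file
```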


This is what I do for the render file.

File renderPath{ te::EditFileOperations(*currentEdit).getEditFile().getParentDirectory() };
File renderFile{ renderPath.getChildFile("Render").getNonexistentChildFile("render", ".wav") };

This creates a “render.wav” file in the “Render” folder alongside the edit. If the file already exists, it appends a number, e.g. “render(1).wav”.


Thank you for your replies.

Hi Dave

I tried recording the output of one track to another and getting the file with AudioClipBase::getSourceFileReference().getFile(), but the resulting .wav is badly distorted (saturated) when there are overlapping notes.

Here is my TE 2.0 test code:
MainComponent.h (3.3 KB)

Thanks very much for any help!

Apologies for the delayed reply, I’ve been on holiday for the last two weeks.

Are you sure this isn’t just a gain staging issue? I can’t remember what 4OSC’s default levels are, but does the real-time audio playback sound the same?

No problem Dave, thank you for your help!
The real-time audio playback is fine.
It’s only the rendered wav that is saturated.
Normally, with the code I uploaded, it’s quick to reproduce.

Hi @dave96
Any idea how to solve this?
Thanks in advance!