How to Properly Use Renderer::renderToFile? Is Waveform’s "Render to File" the Same Thing?

Hi all,

I’m developing a DAW based on Tracktion Engine and have two practical questions about offline rendering:

  1. What’s the correct and robust way to use Renderer::renderToFile?
    The official docs are a bit sparse. When setting up the parameters (tracks, clips, plugins, time range, output file, etc.), are there any required or recommended practices? For example:
  • Any common pitfalls or order dependencies for setting tracksToDo, allowedClips, usePlugins, time range, etc.?

  • Are there canonical code snippets or best practices for this workflow?

  • Anything that’s often missed when building a reliable offline render pipeline with plugins and automation?

  2. Is Waveform’s “Render to File” menu operation just a call to Renderer::renderToFile, or are there important differences?
    On the surface, it seems like Waveform’s “Render to File” feature simply calls the same API, but are there additional steps or internal state management (like Edit/project context, plugin states, track management, automation, etc.) that Waveform handles before/after calling renderToFile?
  • Is there any official (or semi-official) code reference or call stack that shows how Waveform sets up and calls renderToFile?

  • Anything in Waveform’s implementation that’s critical to know if I want to reproduce the same “export” logic in my own app?

Would really appreciate any code-level or engineering insights from those who’ve implemented full offline export or worked directly with the Tracktion Engine source.
Thanks!

Waveform is a bit different: it uses the higher-level RenderOptions class to set the properties, and then we call RenderOptions::performBackgroundRender, which does the rendering on the BackgroundJobsPool.

If I were doing it now, though, I’d use the newer EditRenderer::render API.
Have a look at the tracktion_Renderer.test.cpp file for an example.
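For anyone landing here later, a minimal offline render call looks roughly like the sketch below. This assumes the static convenience overload of Renderer::renderToFile (taskDescription, outputFile, edit, range, tracksToDo, usePlugins); the exact parameter list and time types have changed between engine versions, so verify against tracktion_Renderer.h rather than treating this as canonical:

```cpp
#include <tracktion_engine/tracktion_engine.h>

// Sketch only: renders every track of an Edit to a single file using
// the static Renderer::renderToFile convenience overload. Check the
// overload in tracktion_Renderer.h for your engine version.
static bool exportWholeEdit (tracktion::engine::Edit& edit,
                             const juce::File& outputFile)
{
    // Select every track in the Edit by its index in the track list.
    juce::BigInteger tracksToDo;
    int index = 0;

    for (auto* track : tracktion::engine::getAllTracks (edit))
    {
        juce::ignoreUnused (track);
        tracksToDo.setBit (index++);
    }

    // Render the full Edit length with plugins enabled. The file is
    // only created if this returns true.
    return tracktion::engine::Renderer::renderToFile (
        "Export", outputFile, edit,
        { tracktion::TimePosition(), edit.getLength() },
        tracksToDo, true /* usePlugins */);
}
```

The remaining arguments (allowed clips, threading) are defaulted in the overload sketched here; older engine versions took an EditTimeRange instead of a TimeRange.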

Thanks for your reply!

Just to double-check—does EditRenderer::render actually create a new .wav file on disk for you?

The reason I’m asking is that when I use Renderer::renderToFile, it doesn’t generate any new audio file at all—the render path seems to have no effect.
That’s why I’m considering switching to EditRenderer::render.

Thanks!

I’d like to follow up with another question:

Could I directly use RenderOptions::performBackgroundRender instead of EditRenderer::render or Renderer::renderToFile for offline export?

From my reading of the code, performBackgroundRender seems to wrap a lot of “black-box” automation—job pools, auto-inserting the rendered file back into the Edit, undo/redo integration, and task management. It feels like the rendering flow and parameter/state changes become less transparent, which might make debugging or tracing issues harder if something goes wrong.

I also noticed that, under the hood, both performBackgroundRender and the “manual” EditRenderer/Renderer workflows ultimately call Renderer::RenderTask::runJob(), which then calls renderAudio() or renderMidi() as appropriate; from there, the node-graph and file-writing logic (via NodeRenderContext::renderNextBlock()) is the same.
So, does this mean the core rendering execution is actually identical, and the main difference is just the level of automation/engineering integration?

In summary, my current understanding is:

  • EditRenderer::render / Renderer::renderToFile:
    Pure rendering utility functions. Suitable if I want full control over all parameters, threads, progress, callbacks, etc., and don’t want TE to manage project/task pool, undo, insert-into-project, or multi-task scheduling/UI.

  • RenderOptions::performBackgroundRender…:
    Full project-level automated export interface. TE manages all parameters, project state, scheduling, progress feedback, undo/redo, result insertion into Edit/Project, etc. Targeted at DAW-level engineering use cases.

Is this understanding correct? Is there anything critical I might be missing regarding robustness, debugging, or choosing between these two workflows?

Any further insight would be greatly appreciated!

Yes, everything ultimately goes through Renderer.

Your summary looks correct. If you’re not actually getting a file created, I suggest you step through the render process to see where it bails out. I’m guessing it’s not getting as far as NodeRenderContext::writeAudioBlock and the writer->appendBuffer line?
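To add to that: in my experience the no-file symptom is usually down to the parameters rather than the writer itself. A pre-flight check along these lines (a hypothetical helper, not part of the engine; names and exact behaviour assumed, so verify against your engine version) catches the common causes before you ever reach the node graph:

```cpp
#include <tracktion_engine/tracktion_engine.h>

// Hypothetical pre-flight check (not an engine API): these conditions
// mirror the usual reasons an offline render silently produces no file.
static bool canRender (tracktion::engine::Edit& edit,
                       const juce::BigInteger& tracksToDo,
                       tracktion::TimeRange range,
                       const juce::File& destFile)
{
    if (tracksToDo.isZero())
        return false;   // no tracks selected -> nothing to render

    if (range.isEmpty())
        return false;   // zero-length time range -> empty render

    if (tracktion::engine::getAllTracks (edit).isEmpty())
        return false;   // empty Edit

    // The destination directory must exist and be writable,
    // otherwise the audio writer never opens.
    if (! destFile.getParentDirectory().createDirectory().wasOk())
        return false;

    return true;
}
```

If all of these pass and the file still never appears, a breakpoint in NodeRenderContext::writeAudioBlock is the right next step, as suggested above.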