Render Example?

It was mentioned here a few weeks ago that a Tracktion Engine Render example was in the works. Bump! It will be great to see how to properly use the built-in render capability.

In addition, I find that 99% of the time all I really need to do is capture the main output to a file. It would be very instructive to see how to do that properly. I am thinking it must involve edit.getMasterVolumePlugin()->createAudioNode() and memory-mapped file handling in some fashion, but I have so far not worked out how to connect everything together. What would be the right way to do this?

Nothing so complicated! Have a look at the Renderer class.

Yes! Thank you. That looks to be just what is needed.

As always, Jules, thank you. Thank you for JUCE. Thank you for Tracktion Engine.

After only a few months of working in my spare time with Tracktion Engine, I can record and play back audio and MIDI, do some clip editing, and am now needing the render capability. I had started the project a year ago using JUCE only, and did not get nearly this far in the same amount of time. Tracktion Engine is a lot to learn, but it has enabled great progress!

Thank you!

Thanks @jules and @bwall!! You really saved me time!!

Question.
Can rendering be multi-threaded? If it can, how?
I believe there must be some way to multi-thread rendering, because Tracktion7 can :grinning:

Log.
I’m a noob, and I started learning Tracktion Engine only a few days ago.
So please note that the following code may not be good. What I intend here, by documenting where I got stuck, is to help make good documents, comments, or tutorials. It’s not meant as blame; it’s a comedy titled “Common mistakes noobs make” :smile:
I hope you enjoy it and that it’s of some help.

What I wrote was…
class member attributes:

te::Engine engine{ ProjectInfo::projectName };
te::Edit edit{ engine, te::createEmptyEdit(), te::Edit::forEditing, nullptr, 0 };
TextButton  renderButton{ "Render" };

in the constructor:

renderButton.onClick = [this]
{
    FileChooser chooser{ "enter file name to render..."
        , engine.getPropertyStorage().getDefaultLoadSaveDirectory("MyDir")
        , engine.getAudioFileFormatManager().readFormatManager.getWildcardForAllFormats()};
    if (chooser.browseForFileToSave(true))
    {
        File file = chooser.getResult();
        // Set a bit for every track in the edit so they are all included
        BigInteger tracksToDo;
        int trackID = 0;
        for (const auto& t : te::getAllTracks(edit))
            tracksToDo.setBit(trackID++);

        // EditTimeRange is in seconds, not samples
        te::EditTimeRange edrange{ 0.0, edit.getLength() };

        if (te::Renderer::renderToFile("My Render Task", file, edit, edrange, tracksToDo, true, {}, false))
            DBG("render succeeded!!");
        else
            DBG("render failed...");
    }
};
  • What I first got stuck on was tracksToDo. In my project there seemed to be only one track, so I wrote tracksToDo.setBit(0); and got the result “render failed…”, and nothing happened. It took a little while to notice that there were other tracks. I checked how many tracks there were and wrote tracksToDo.setBit(0); ...(1); ...(2); ...(3); ...(4); ...(5). Yeah! It works! But of course, that is not good; there must be a better way… and yes, I found getAllTracks(). Now I set all the tracks’ bits on and pass it to the Renderer.
  • I got good tracks. But I took EditTimeRange’s last argument to be a number of samples(!!); later I found out it is in seconds. Then… say no more about what happened, but this line returned an overflowed integer.
    GithubLink
  • I got good tracks, I got good numSamples. The last problem was not so difficult: turn off the useThread argument. Visual Studio hit a breakpoint at GithubLink. OK, I won’t use it; let’s turn off the switch. Simple and easy, except I didn’t know how to add a UIBehaviour.

That’s all. I’ll keep practicing!
Best regards.

To your original question, rendering will be multi-threaded by default.

Or are you asking how to run the render on a background thread? If so, that exact question was asked and solved here: How to render an Edit to a new audio file?

Sorry for my unclear question, and thanks for your reply.
It really helps me! But let me explain in a little more detail.

What I meant by “multi-threaded” was using as much of the resources my PC can provide for rendering a file. In other words, using the CPU at (almost) 100% while exporting an audio file.

Let’s see how my program now behaves.
I coded as I mentioned before, and the result was…
(screenshot: Task Manager showing ~25% CPU usage)
My CPU is a Core i5-3320M, which has 2 cores and 4 threads.
25% CPU usage means only a single thread was used for exporting the audio file.
The results did not change whether I set Renderer::renderToFile's last argument useThread to false or true (I added the TestUIBehaviour class to the Engine as you mentioned).
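For reference, the TestUIBehaviour I added looks roughly like this (a minimal sketch based on the approach from the linked thread, with the render job simply run inline; the details here are my own simplification, so treat them as assumptions):

struct TestUIBehaviour : public te::UIBehaviour
{
    // renderToFile (with useThread == true) hands its render job to this hook;
    // running it to completion here blocks the calling thread, so a real app
    // would show a progress window instead.
    void runTaskWithProgressBar (te::ThreadPoolJobWithProgress& job) override
    {
        while (job.runJob() == juce::ThreadPoolJob::jobNeedsRunningAgain)
        {}
    }
};

// The behaviour is handed to the Engine at construction:
te::Engine engine{ ProjectInfo::projectName, std::make_unique<TestUIBehaviour>(), nullptr };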

I also measured the CPU usage of Tracktion7’s exporting. When I hit “Export”->“Render to a file”, my CPU usage was around 70%~80% (from the Windows Task Manager). Yes, it works as I wanted, and it uses multiple threads.
(screenshot: Task Manager showing ~70-80% CPU usage)
I assume my PC’s hardware is not the problem.
Edit: I rendered the sample project “2BITs” on Tracktion7.

Sorry, I might be wrong.
“My program” has only one clip, one track, and no plugins.
And “2Bits” has many clips, many tracks, and many plugins on Tracktion7.
On Tracktion7, the reason it used many threads was, if I understand correctly, that each thread rendered a different track.
I tried letting Tracktion7 render one clip, one track, with no plugins. CPU usage was almost 25% (which is almost single-threaded on my PC).

The “multi-threading” I mentioned in my last post was, e.g., look-ahead rendering (I don’t know the exact term for it). I don’t know a good implementation, but imagine dividing the time range by the number of threads and rendering each part on its own thread. That would be multi-threaded and faster than a single thread, but it is useless when there are plugins with feedback mechanisms. I think doing look-ahead rendering properly is really difficult, or impossible, for real-time processing. (If there is some way, let me know! Leaving aside whether I could follow it or not :joy:)
So… never mind. :smiley:

Edit: I tried rendering multiple clips and multiple tracks in Tracktion Engine by modifying “PluginDemo” a little. It worked fine! Yes, it is multi-threaded.
(screenshots: Task Manager showing multi-threaded CPU usage)

Just for clarity, to render the entire edit, the renderToFile code simplifies down to this:

File renderFile{ File::getSpecialLocation(File::userDesktopDirectory).getNonexistentChildFile("render", ".wav") };

// Times are in seconds: this range covers the whole edit
te::EditTimeRange range{ 0.0, edit.getLength() };

// Set a bit for every track so all of them are included
juce::BigInteger tracksToDo{ 0 };

for (auto i = 0; i < te::getAllTracks(edit).size(); i++)
	tracksToDo.setBit(i);

te::Renderer::renderToFile("Render", renderFile, edit, range, tracksToDo);

And, as simple as that is, I still think renderToFile could use another overload that just takes a File and an Edit, since the Edit already carries all the information renderToFile needs.

I guess it’s because for all of our use cases, rendering an entire Edit to a file without specifying any options never actually happens.

Having an overload like that might be a good idea, but we’d still need the useThread flag, I think.
It’s hard with an API to determine where to put the threading onus. If we do it internally, we need to pass these flags around and make sure the caller has implemented certain things to run it; if we just ignore that this is possible and leave it up to the caller to run it on a thread, then the API becomes harder to use.

I’m wondering if there’s a better approach by perhaps allowing some empty parameters to mean “all” (e.g. empty EditTimeRange or Tracks array) but that might be more confusing.

Another alternative would be to have a struct which determines what to render and provide a constructor that simply takes an Edit. But then we’re getting close to Renderer::Parameters struct which is much lower level.

I’ll have a think.

Yes, you are having to accommodate a more general application of the render function. And the existing renderToFile does work.

For my needs, however, I find that the render code seems very complicated when all that needs to be done is to capture the output to a file.

And this brings us around to my original request which is how to accomplish capturing the output to a file? I find that 99% of the time, I just need to record the output of the mix. So, a function that hooks up to the last node, presumably the output of the master volume plugin, and sends it to a file would do the job nicely. Can you point me in the right direction to hook this up?

I realize this will render in real time, which is fine. And, I can always revert to renderToFile for more complicated use cases.

Thank you.

(Sorry if I interrupt the discussion.
It’s not about the renderToFile overload.)

Thanks @bwall.
Just out of curiosity and for my own clarification, let me try to understand these two lines more deeply.

for (auto i = 0; i < te::getAllTracks(edit).size(); i++)
	tracksToDo.setBit(i);

I also coded it like that, but I had a little question.
Putting the details aside for now and speaking in broad strokes: I should set all the tracks’ bits “on” if I want the render to sound like playback (or, as @bwall said, to capture the main output to a file), right?
If there are any cautions or advice, please tell me. @dave96

And a really small question: my project has an ArrangerTrack and a MarkersTrack (they are there by default, I think). I’m wondering what they do during rendering if I set their rendering flags “on”? Is it for metadata?

@bwall, I’m not completely sure about your use case but I doubt recording the output of playback is really what you want. There are subtle differences between a live playback graph and rendering (one example is that you don’t have live inputs in the graph when rendering).
If you really do want to hook in to the output of the graph you can use EditPlaybackContext::insertOptionalLastStageNode to create a new AudioNode at the end of the graph.

This is certainly not simpler than rendering using the renderToFile method though.


@DicklessGreat, yes you want to set all bits if you want all tracks included in the render.

Technically the ArrangerTrack, MarkersTrack, TempoTrack etc. don’t need to be included as they don’t contribute to the playback graph, but there’s no harm in adding them.

Thanks!

I appreciate your patience. You have a deep internal understanding of the Tracktion Engine, and its workings. I do not.

I must say, I am particularly puzzled by your statement, “but I doubt recording the output of playback is really what you want”. Why would that be so? It seems to me that recording the output of playback is exactly what I want. A render that gives me anything else would not be usable, since it would sound different. What am I missing?

Or perhaps we are talking about different things here?

I have things functional with renderToFile, so I do have rendering capability. Capturing the output to a file is now more of a curiosity.

Thank you.

Well, for a start, you’d only be able to do that in real time. What if you have a session that’s an hour long? What if you want to freeze/bounce a single track?
Then there’s also the live inputs issue I mentioned previously.

I guess the main question I have is: why wouldn’t you want to use an offline render? What benefits does recording the live output have?

Yes, I see your point. But in my case, I am recording songs that are three to five minutes long, so real-time rendering is not a problem. Even Pro Tools does real-time rendering, so this is not an unusual use case.

And wouldn’t it be possible to freeze/bounce a single track by soloing it?

And real-time rendering is what-you-hear-is-what-you-get, which is what I want.

BTW, since Pro Tools offers real-time rendering, perhaps that might be a desirable addition to Tracktion Engine, and by extension, to Waveform?

See Renderer::Parameters::realTimeRender.
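
Used with the snippet from earlier, that would look something like this (a rough sketch: apart from realTimeRender, which is the field named above, the exact Parameters constructor and field names shown here are assumptions):

te::Renderer::Parameters params (edit);
params.destFile = renderFile;               // file to write, as in the earlier snippet
params.time = { 0.0, edit.getLength() };    // render the whole edit
params.tracksToDo = tracksToDo;             // same all-tracks bitmask as before
params.realTimeRender = true;               // render at 1x speed instead of offline

te::Renderer::renderToFile ("Render", params);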

If I remember correctly, Pro Tools started off with real-time rendering because that was the only way to bounce through consoles. After that, some plugins couldn’t deal with faster-than-real-time rendering, so they didn’t bother with offline rendering.

However, they added that as an option about 3 or 4 years ago, didn’t they? I seem to remember videos of Pro Tools users being amazed this was possible… (Tracktion had been doing it for about 15 years.)

Dave’s point is that there are lots of subtle reasons why we spent years building a massive ton of code to handle rendering in a way that’s separate from the playback stuff.

Sure, when you start writing a DAW you naively think “ah, rendering, that’s just a case of capturing the output…” and then you start hitting all the edge-cases and reasons why it’s NOT that simple. I’ve forgotten most of the gotchas, and TBH one of the things that makes the engine valuable is that it means you don’t have to know about them to make a functional app.

But yeah, we could certainly have a function renderCompleteEdit that just does all the tracks, and the whole length, so you can skip having to pass it those arguments.

Yeah, I’m totally on board with adding a renderCompleteEdit method, I’m just tied up on some other branches atm so haven’t done it.
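
In the meantime, such a helper is easy to write on the caller’s side. Here is a minimal sketch wrapping the renderToFile call from earlier in the thread (renderCompleteEdit is just the proposed name, not an existing method):

// Hypothetical helper: renders every track, over the whole edit length,
// using the renderToFile overload shown earlier in this thread.
static bool renderCompleteEdit (const juce::File& file, te::Edit& edit)
{
    juce::BigInteger tracksToDo;

    for (int i = 0; i < te::getAllTracks (edit).size(); ++i)
        tracksToDo.setBit (i);

    return te::Renderer::renderToFile ("Render", file, edit,
                                       { 0.0, edit.getLength() }, tracksToDo);
}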

Yes, Pro Tools can do offline rendering. I prefer real-time rendering because I like hearing what is being rendered. I cannot say how many mixers use Pro Tools that way, but I suspect many do it out of habit, same as me. It is a personal preference thing.

And you are right, Jules, I do not know all the edge cases. I have much to learn. You and Dave have given us a fantastic tool. And with this tool I have nearly a usable DAW in only a few months in my spare time.

I had begun my DAW project a year ago using only JUCE, and quickly got mired in all the details you have solved with Tracktion Engine. And it is very much fun to get things working! That is why we all do this kind of programming: for the challenges and the joy!

Thank you!