Possible Develop Branch Issues

When I build against the latest develop branch, my DAW freezes for several seconds during normal playback, then continues. It does this at random places in the track, two or three times per edit.

Also, when I render, the resulting output file seems to be reduced in volume by roughly 6 dB.

If I roll back about a month, both issues go away.

Any suggestions?

The main change on the develop branch is that I’ve removed the old engine. So lots has changed under the hood.

When you say it “will freeze” is this the message thread? The audio thread?
If you change the number of audio cores to use (EngineBehaviour::getNumberOfCPUsToUseForAudio()) to 1 (which will disable multi-threaded audio rendering) does the problem go away?
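To make that experiment concrete, here's a minimal sketch of overriding that method. The `EngineBehaviour` base below is a stand-in so the snippet is self-contained; in a real project you'd derive from tracktion_engine's `EngineBehaviour` class and pass your subclass when creating the `Engine`.

```cpp
// Stand-in base class: in real code this would be tracktion_engine's
// EngineBehaviour, which the Engine queries for configuration.
struct EngineBehaviour
{
    virtual ~EngineBehaviour() = default;
    virtual int getNumberOfCPUsToUseForAudio() { return 4; } // placeholder default
};

// Returning 1 disables multi-threaded audio rendering, which is a quick way
// to rule the threading model in or out when chasing playback freezes.
struct SingleCoreBehaviour : public EngineBehaviour
{
    int getNumberOfCPUsToUseForAudio() override { return 1; }
};
```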

If not, you might need to give me a bit more info to go on as I’ve not had this reported by any of our beta testers.

Regarding the volume drop, are you sure this isn’t expected? If you add a standard ±1.0 sin audio clip and render that, what is the resulting output? Make sure that both the track and master volume levels are at 0dB gain before rendering.
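As a sanity check on the numbers: a 6 dB drop corresponds almost exactly to halving the amplitude, so a ±1.0 sine coming out at ±0.5 would match what you're describing. The arithmetic, self-contained:

```cpp
#include <cmath>

// Convert a linear amplitude ratio to decibels: dB = 20 * log10(ratio).
// Halving the amplitude gives about -6.02 dB.
double amplitudeRatioToDecibels (double ratio)
{
    return 20.0 * std::log10 (ratio);
}
```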

After more testing last night, it seems the freezing may have had something to do with a third-party plugin. So, I will wait to see if I have any further freezing.

The volume drop is comparing a render from tracktion_engine develop of a month ago with the most recent develop version of tracktion_engine. The edit in both cases is the same.

My code actually checks to make sure that the master volume is at 0 dB before rendering. I have an AlertWindow that gives the option to abort the render if the master volume is not 0 dB.

I have been using the new engine for quite some time, and it is only recently that I have seen the render issue.

I’ve just run some of my own tests rendering normalised sin files in Waveform and the output is at the expected level. There are also unit tests in the code, such as the PDCTests, which render sin files and check the amplitude.

Perhaps you can write a failing unit test that pinpoints the problem?

Well, it is definitely something I am doing (or not doing). I do not use the master track volume and pan plugins per se. All I do is use a simple slider, as below.

volumeSlider.onValueChange = [this]
{
	volumeParameterPtr->setParameter (float (volumeSlider.getValueDbToGain()), dontSendNotification);
};

This controls the volume of playback, but it seems to now have no effect on the render. Even if I pull the slider all the way down, the render still happens at the same (roughly -6 dB) volume.

I am sure this worked before. But with the new plumbing, I am missing a step.

99% of the time I use the volumeSlider above to control playback volume only. I do track fades in mastering. So, when I render, I want the mix output to be at 0 dB.

What have I missed?

I think you might have to step through the signal chain a bit to see where the gain is being applied.
I’d probably start with the WaveNode to check the samples are being produced at ±1.0 and there is no clip gain applied.

Then put a breakpoint in VolumeAndPanPlugin::applyToBuffer to check that there is no gain being applied here. If it’s actually -6dB, I’m wondering if the Pan Law has been changed to be equal-power?
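For reference on that guess: an equal-power pan law attenuates a centre-panned source by about 3 dB per channel (each channel scaled by sqrt(0.5)), so on its own it wouldn't account for a full 6 dB drop. A self-contained sketch of the standard equal-power curve (illustrative, not tracktion_engine's actual implementation):

```cpp
#include <cmath>

struct StereoGains { double left, right; };

// Standard equal-power pan: pos in [-1, 1], 0 = centre.
// left^2 + right^2 == 1 everywhere, and centre gives
// sqrt(0.5) per channel, i.e. about -3.01 dB.
StereoGains equalPowerPan (double pos)
{
    const double pi = 3.14159265358979323846;
    const double angle = (pos + 1.0) * (pi / 4.0); // map [-1, 1] onto [0, pi/2]
    return { std::cos (angle), std::sin (angle) };
}
```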

If I create a new edit, the master fader defaults to -3.9 dB. The track faders default to -2.6 dB. I do not set them to these values. I want them to be at 0 dB. Are these values a result of the Pan Law?

And what do we do about the render volume not being affected by the volumeSlider? I know the new internals are different, because my volumeSlider used to control the render volume as well as playback. So, how do things work now?

I just did a few experiments. No matter what PanLaw I set, the master fader defaults to -3.9 dB and the tracks default to -2.6 dB. I do not set these values. So, where are they set? I am studying the tracktion_engine source code to see if I can find where these values come from, but I am hoping you know?

Thank you.

OK, so the initial values are coming from getSliderPos().


And the initial value comes from an AutomatableParameter, as seen in tracktion_VolumeAndPan.h:

float getSliderPos() const { return volParam->getCurrentValue(); }

So, now I need to figure out why the initial values are -3.9 dB for the master track and -2.6 dB for regular tracks.

And, unfortunately, none of this explains why the master volumeSlider has no effect on the render volume. See again my code below:

volumeSlider.onValueChange = [this]
{
	volumeParameterPtr->setParameter (float (volumeSlider.getValueDbToGain()), dontSendNotification);
};

My code in these areas has not changed in several months. My renders used to render according to the level of the master fader. Now they do not. Now, the master fader has no effect on the render. Surely there is a new requirement for controlling the volume properly that I do not know about?

I like succinct minimalist code. So, I may have not included code that is needed now. I suppose my real question is, what is the correct implementation?

After some more experimentation, I found out that during render, the master track plugins are not processed! So, it is not just the master fader that is not included during render, but the master track plugins are not included either!

This happened in the past month, so I have missed something critical in transitioning to the new develop branch version of everything.

Again, as a reminder, the master track plugins and master fader are processed for normal playback. It is only when performing a render that they are bypassed somehow.

And it is important to mention that my processing code has not changed other than to incorporate the new PlayHead class.

Where do I even look to fix this?

The master volume plugin gets initialised to -3dB in Edit::initialiseMasterVolume. It’s in there for legacy reasons but I might make it user configurable and default to 0dB.

Track volumes should default to 0dB (line 103 of tracktion_VolumeAndPanPlugin.cpp).

In Renderer::Parameters there’s a flag, useMasterPlugins which defaults to false.
I don’t know how you’re configuring your render but maybe that’s the problem?

You can put a breakpoint in createNodeForEdit and check the CreateNodeParams::includeMasterPlugins flag to see if the graph gets created with the master plugins?
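For what it’s worth, if you end up building the render yourself rather than using the static helper, setting that flag would look something like this. This is a sketch based on the names mentioned above; I haven’t double-checked the exact Parameters fields against your version:

```cpp
// Sketch only: field names assumed from the discussion above.
te::Renderer::Parameters params (*edit);
params.destFile = renderFile;           // output .wav
params.time = range;                    // EditTimeRange to render
params.tracksToDo = tracksToDo;         // BigInteger of track indices
params.useMasterPlugins = true;         // include master fader + master plugins
```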

Thank you! I will check this today.

useMasterPlugins must have defaulted true under the old engine because I have not changed any of my code (see below).

File renderPath{ te::EditFileOperations(*edit).getEditFile().getParentDirectory() };
File renderFile{ renderPath.getChildFile("Render").getNonexistentChildFile("render", ".wav") };

te::EditTimeRange range{ 0.0, (edit->getLength() + 1.0) };// add 1.0 second to render tail

juce::BigInteger tracksToDo;

for (auto i = 0; i < te::getAllTracks (*edit).size(); i++)
	tracksToDo.setBit (i);

if (te::Renderer::renderToFile ("Render", renderFile, *edit, range, tracksToDo, true, {}, true))
	AlertWindow::showMessageBoxAsync (MessageBoxIconType::InfoIcon, "Rendered", renderFile.getFullPathName());
else
	AlertWindow::showMessageBoxAsync (MessageBoxIconType::WarningIcon, "Render", "Failed!");

I discovered that the slider initial-value issue comes from the fact that the units are in sliderPos, which is not a 0 to 1 range. DAWs and DSP use 0-to-1 values for almost everything, so it is quite unexpected to find a different range. I am now using the volumeDb values instead.
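For my own reference, whatever units the fader position uses internally, the value the engine ultimately multiplies samples by is a linear gain derived from the dB figure. A quick self-contained sketch of that conversion (this mirrors the formula behind juce::Decibels::decibelsToGain):

```cpp
#include <cmath>

// Linear gain from a decibel value: gain = 10^(dB / 20).
// 0 dB -> 1.0 (unity), -6 dB -> roughly 0.5; the mapping is
// exponential, not linear, unlike a 0-to-1 fader position.
double decibelsToGain (double dB)
{
    return std::pow (10.0, dB / 20.0);
}
```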

I think what’s happened is that it just wasn’t used correctly before and now it is.
The previous code was ambiguous about what to do with master plugins and took the general “use plugins” flag to mean all plugins. It was only in this static method, though, which is one of the reasons I don’t really like providing lots of helper methods: there are just so many options when rendering.

I’ve restored the old behaviour with this commit though: Renderer: Ensured master plugins are also enabled with Renderer::re… · Tracktion/tracktion_engine@df1dca4 · GitHub

Thank you! That indeed fixes the issue of applying master plugins to the render.

Perhaps you could add an additional parameter to the end of renderToFile(..., useMasterPlugins = false). Then users can enable them optionally, but they default to false.

We still have an issue with the rendered volume. Compared to playback, the rendered volume is down 6 dB. Under the old engine, rendered level and playback level were the same.

And you’re sure none of your volume plugins are attenuating the level? I know sliderPos is a terrible unit; it’s baked into the Edit model now, I’m afraid, and I don’t think we can change it easily.

Have you put a breakpoint in the vol/pan plugins to see if any of those are applying a non-zero gain?

I’ve just tested our “Quick-render” tool in Waveform which uses that method and there doesn’t seem to be any gain applied.

I am realizing that I have been saying it wrong. It is not the “old engine”, it is the engine before the new graph was rolled in. I have been using the new graph all along with the flags set.

So, “old engine” = a month ago. “New engine” = now.

“Old engine” versus “new engine” shows a 6 dB drop in level when rendering as compared to playback.

That static renderToFile method actually used the old engine internally though (the non-tracktion_graph engine). Again it’s one of the reasons I needed to get rid of all instances of the old engine because it flagged up things like this.

So, it could be it was just a problem with the old engine that has now been fixed.
So really, if you’re rendering now, are the levels what you would expect them to be or not?

With the latest develop, the render is 6 dB down compared to normal playback. And this is despite the fact that the master track metering shows the level coming from the mix to be the same!?

So, somehow, after the master fader, which is set at 0 dB, the level is reduced by 6 dB.

OK, but I really can’t debug something as complex as a DAW by guessing. There are literally millions of variables and permutations.

That’s why I’ve asked for a test (I mean some literal code in a juce::UnitTest subclass I can run).
If you take a look at the PDCTests, you’ll see the kind of thing I need. This actually creates a sin file and renders it. You could pause the test and dump the edit to a file, take a copy of the sin, and test whether it plays back differently to the render.
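Stripped of the engine specifics, the shape of such a test is: generate a sine of known amplitude, run it through the path under test, and assert on the output peak. A self-contained illustration of the two building blocks (names here are illustrative, not the PDCTests API):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Generate one cycle of a sine at the given peak amplitude.
std::vector<double> makeSine (double peak, int numSamples)
{
    std::vector<double> buf (static_cast<size_t> (numSamples));
    const double pi = 3.14159265358979323846;
    for (int i = 0; i < numSamples; ++i)
        buf[static_cast<size_t> (i)] = peak * std::sin (2.0 * pi * i / numSamples);
    return buf;
}

// Measure the peak magnitude of a buffer, as a render-level check would.
// A 6 dB drop would show up here as the peak halving.
double peakOf (const std::vector<double>& buf)
{
    double peak = 0.0;
    for (auto s : buf)
        peak = std::max (peak, std::abs (s));
    return peak;
}
```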

As I’ve said, I’ve not been able to replicate this so will need a few more details as to how exactly to replicate this, preferably with a test, it’s not a problem I’m seeing in Waveform that I can tell.

I appreciate all your help. I think we are very close to having this sorted.

I will see what I can come up with.