Render stems always fails

Hi,

I’m implementing stem export for a Tracktion edit. I set the separateTracks property, and since my edit is not part of a project, I set the category to none. For the filename I pass in a path to a wav file that doesn’t exist. The render appears to use this path to choose the directory to write the stems to, although the use of the file path in the context of a stem export isn’t documented.

For some reason, the jobFinished argument completedOk is always false, even though the render itself seems fine.

auto parameters = renderOptions.getRenderParameters(edit);
parameters.destFile = juce::File ("Export Directory/Export.wav");
parameters.category = tracktion::ProjectItem::Category::none;
parameters.separateTracks = true;
job = tracktion::EditRenderJob::getOrCreateRenderJob(edit.engine, parameters, false, false, false);
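Since the stem render seems to use destFile only to pick an output directory, it may help to make sure that directory exists before kicking off the job. A minimal standalone sketch using std::filesystem (prepareStemTarget is an illustrative helper, not an engine API):

```cpp
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

// Hypothetical helper: ensure the directory that should receive the stems
// exists, then build the dummy wav path to hand to parameters.destFile.
fs::path prepareStemTarget (const fs::path& exportDir, const std::string& baseName)
{
    fs::create_directories (exportDir);       // no-op if it already exists
    return exportDir / (baseName + ".wav");   // e.g. "Export Directory/Export.wav"
}
```

parameters.destFile could then be constructed from the returned path, so at least the directory the engine derives from it is guaranteed to be present.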

Should I do something different or is this a tracktion bug?

Thanks,
Jelle

Are you able to dig through and see why completedOk isn’t being set?

Are the individual task progresses not reaching 1.0f for some reason? Can you print them out at line 183 of EditRenderJob::RenderPass::~RenderPass() and see what task->getCurrentTaskProgress() is? I’m wondering if it’s a float-comparison issue.
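For what it’s worth, a progress value accumulated in small float steps really can miss 1.0f exactly. This standalone snippet (not engine code) accumulates ten 0.1f increments the way a per-block render callback might:

```cpp
// Accumulate a "task progress" value in ten 0.1f steps.
float accumulateProgress()
{
    float progress = 0.0f;

    for (int step = 0; step < 10; ++step)
        progress += 0.1f;   // rounding error accumulates at each addition

    return progress;        // ends up at ~1.0000001f, not exactly 1.0f
}
```

So an exact `== 1.0f` check can fail (or, as in this case, pass only by luck) depending on how the progress value is computed.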

The line you’re referring to is this one:
const bool completedOk = task != nullptr ? task->getCurrentTaskProgress() == 1.0f : false;

But completedOk is assigned true here, so that check is fine. Something else is causing the callback to receive false.

I’m wondering if the issue is related to the code below:

bool EditRenderJob::completeRender()
{
    CRASH_TRACER

    if (result.items.size() > 0
         || (params.category == ProjectItem::Category::none && proxy.getFile().existsAsFile()))
        result.result = juce::Result::ok();

    return result.result.wasOk();
}

During debugging I found that result.items is empty and that the proxy file points to the wav file I passed in, which doesn’t exist. Am I assigning parameters.destFile a correct path?
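That matches the condition above: with no result items and a nonexistent proxy file, the result is never marked ok. Modelling the success check in isolation (the names below are illustrative, not engine code):

```cpp
// Simplified model of the success condition in EditRenderJob::completeRender():
//   itemCount       - number of project items produced by the render
//   categoryIsNone  - params.category == ProjectItem::Category::none
//   proxyFileExists - whether the file passed as destFile exists on disk
bool modelCompleteRender (int itemCount, bool categoryIsNone, bool proxyFileExists)
{
    return itemCount > 0 || (categoryIsNone && proxyFileExists);
}
```

For a stem export with category none, no items, and a destFile that never gets written (only its directory is used), this evaluates to false, which would explain the completedOk == false callback even though the stems themselves render fine.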

@dave96
Have you had a chance to look at this issue yet?

Unfortunately not; I’m deep in some time-stretching work at the moment that I need to get finished.

If you want to speed things up, filing a GitHub issue with a corresponding failing UnitTest would be a huge help (I detailed this here).

Providing failing tests is the best way to be very explicit about problems, and it automatically gives acceptance criteria. So that would be a huge help.