No sound from the engine

Hi everyone, I hope you’re having a great weekend.

My product has evolved, and I’ve just added the Tracktion Engine to it. I’m creating and/or opening a .tracktionedit file, but when I play the file, I only hear the click track. All the other tracks are silent… To make sure the file itself is valid, I loaded the .tracktionedit file into Waveform 13, where I can see and hear the tracks.

In anyone’s greatly esteemed opinion, where would be the first place I’d need to check?

Best regards.

The Click Track is connected directly to the output, and so is more or less at the end of the graph. Your tracks would be further upstream from there. Make sure to check the status of your track Mutes and Solos.
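
For reference, a quick way to check those states programmatically (a sketch; `edit` stands for your loaded te::Edit, and the `true` arguments include indirectly applied mute/solo state, if I remember the accessors correctly):

    // Log each audio track's mute and solo state.
    for (auto t : te::getAudioTracks (edit))
        DBG (t->getName()
             << "  muted: " << (t->isMuted (true) ? "yes" : "no")
             << "  solo: "  << (t->isSolo (true)  ? "yes" : "no"));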

Thank you for your comments.
All tracks are unmuted.
Is there something I need to do to get the tracks connected to the master track?
You mentioned the graph. Is that done automatically?

Jacques

Yes, Tracktion Engine builds the graph for you. Is this a DAW you’ve built?

Without more details it is hard to say what may be going on. Hopefully, one of the devs will respond here.

Haha! No, not a DAW.
What the world really needs now is another DAW, written by a noob like me…

Yes, I fear we’d have to dive deep into the code to find out what’s wrong.
It’s probably something silly…

Jacques

I really don’t get it.
I’m in a GUI app, not a plugin.
I generate and save the edit with my audio files, etc.
I load the file into a ::forRendering edit, call a render, and the resulting WAV file has all the sound in it.
I then load the same file into a ::forEditing edit, play it, and can’t hear a sound.

I’m missing something simple, I’m sure…
J
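
One thing worth ruling out here: for live playback, as opposed to rendering, the Edit’s transport needs a playback context attached before play() will produce audio. A minimal sketch, assuming `edit` is your ::forEditing te::Edit:

    auto& transport = edit->getTransport();
    transport.ensureContextAllocated(); // attaches the edit to the audio device for playback
    transport.play (false);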

What audio device are you using? How many channels does it have?

There’s a chance the tracks are being played back on the “default audio output” which could have been set to something other than 1-2. But if the click track is being played back, that’s probably not it.

We try to set all the defaults so it’s as simple as possible, but without knowing all the code and the Edit it’s difficult to know what could be wrong. It’s a DAW engine, so there are literally thousands of ways things can be configured. You could try checking the TrackDestination of each track, though.

Can you maybe share the Edit file? Or at least a minimal example of one that doesn’t work?

My app was a pure Component, and I’ve just now tried making it an AudioAppComponent.
Now I have prepareToPlay and getNextAudioBlock functions…

But still no joy.

I verified that the audio interface (UA Apollo x6) is being used by listing the device and by hearing the click through it.

What should the TrackDestination look like?

2024 04 07 Sun 15 51 27.zip (4.9 MB)

Here’s a zip file with a small complete project.
Works in render mode, and works in Waveform 13 (imported).

Thank you so much Dave for looking at it.

So if I’m going

    for (auto t : getAudioTracks (*engine.edit_project))
    {

where do I get the trackDestination?
J

PS: No, okay, I got it.

Sorry but there doesn’t seem to be a destination track…

This code:

    for (auto t : getAudioTracks (*engine->edit_project))
    {
        if (t->getOutput().getDestinationTrack())
            DBG("track destination ");
        else
            DBG("NO track destination ");
    }

produces the following output:

NO track destination 
NO track destination 
NO track destination 
NO track destination 
NO track destination 
NO track destination

sigh…

I’ll have to check more closely tomorrow, but I think that’s correct. The destination isn’t another track; it will be the name of an OutputDevice (probably a WaveOutputDevice).
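
To see which device each track actually resolves to, something like this may help (a sketch; I’m assuming TrackOutput::getOutputDevice is available in your engine version and that `edit` is the loaded Edit):

    // Trace each track's output, through any destination tracks,
    // to the OutputDevice it will actually play through.
    for (auto t : te::getAudioTracks (edit))
    {
        if (auto* dev = t->getOutput().getOutputDevice (true))
            DBG (t->getName() << " -> " << dev->getName());
        else
            DBG (t->getName() << " -> no output device!");
    }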

This is the bit specifying the output device for each track:

<OUTPUTDEVICES>
      <DEVICE name="(default audio output)"/>
</OUTPUTDEVICES>

Which looks correct.


That makes me think it’s an issue connecting to the output (it could have been an audio file path problem, but if it renders OK, I doubt it’s that).

You don’t want your app to be an AudioAppComponent, just a normal Component, as the Engine’s DeviceManager will connect itself to the juce::AudioDeviceManager it contains. Put a breakpoint in DeviceManager::audioDeviceIOCallbackWithContext and see if it’s getting called.

If it is, try putting one in DeviceManager::audioDeviceAboutToStart and see what device is actually starting and with how many channels etc.
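
If you’d rather log it than set a breakpoint, something like this should print what’s actually been opened (a sketch; engine.getDeviceManager().deviceManager is the underlying juce::AudioDeviceManager, if I recall the member correctly):

    // Print the currently open audio device, its sample rate and
    // the number of active output channels.
    if (auto* device = engine.getDeviceManager().deviceManager.getCurrentAudioDevice())
        DBG (device->getName()
             << "  rate: " << device->getCurrentSampleRate()
             << "  active outs: " << device->getActiveOutputChannels().countNumberOfSetBits());
    else
        DBG ("No audio device open!");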


One final thing: I noticed the paths in the edit file are absolute. How are you setting those?

Dave,

I would love the file paths to be relative, not absolute.
Where could I find a working example of the resolver and the retriever ?

What I’ve got here is not working:

    auto filePathResolver = [this, project_path, project_name](const String &path) {
        auto p = File(project_path).getChildFile(project_name);
        return p;
    };

    te::Edit::Options options{engine,
                              te::createEmptyEdit(engine),
                              p_id,
                              (for_render ? te::Edit::forRendering : te::Edit::forEditing),
                              nullptr,
                              te::Edit::getDefaultNumUndoLevels(),
                              [editFile, project_name] { return editFile.getChildFile(project_name); },
                              filePathResolver};

I think you don’t want to set a custom filePathResolver; just use the default, which will resolve files as long as you’ve set an editFileRetriever.

But I think your editFileRetriever is wrong. It should just return the edit file, not editFile.getChildFile(project_name); that doesn’t seem to make sense to me.

You probably just want:

    te::Edit::Options options{engine,
                              te::createEmptyEdit(engine),
                              p_id,
                              (for_render ? te::Edit::forRendering : te::Edit::forEditing),
                              nullptr,
                              te::Edit::getDefaultNumUndoLevels(),
                              [editFile] { return editFile; } };
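
(For what it’s worth, my understanding of the defaults: when no custom filePathResolver is supplied, the Edit resolves relative source paths against the parent directory of the file returned by editFileRetriever, which is why returning the real edit file matters.)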

OK, I will do what you suggest.
My question is about what you said:

I noticed the paths in the edit file are absolute, how are you setting those?

So I’m wondering: how can I make those relative in the edit?
I add the files in a clipholder like this:

    File file(file_var["localFilePath"].toString());

    te::AudioFile af(engine, file);

    String waveclip_name = region["name"].toString();

    tracktion::ClipPosition clipPosition;

    tracktion::TimePosition start = tracktion::TimePosition::fromSeconds(0.0);
    tracktion::TimePosition end = tracktion::TimePosition::fromSeconds(af.getLength()); // getLength() is already in seconds; truncating to whole seconds would shorten the clip

    tracktion::TimeRange timeRange(start, end);

/* and so on... */
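
For the relative-path question, here’s a sketch of one approach. I’m assuming the clip’s public sourceFileReference member and SourceFileReference::setToDirectFileReference behave as I remember; please verify against your engine version:

    // After inserting the clip, re-point its source file reference at the
    // same file with the "use relative path if possible" flag set, so the
    // edit stores a path relative to the edit file rather than an absolute one.
    if (auto clip = track->insertWaveClip (waveclip_name, af.getFile(), clipPosition, false))
        clip->sourceFileReference.setToDirectFileReference (af.getFile(), true);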

So… still no sound coming out of the edit…
I’m getting a little desperate here… lol
Anything else I should check?

A stupid question, but setVolumeDb of a track should be 0.0 for normal output?
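
(On the volume question: yes, 0.0 dB is unity gain; juce::Decibels::decibelsToGain (0.0f) returns 1.0f, so it leaves the level untouched.)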

Here’s my code to add a track

    te::AudioTrack::Ptr new_audio_track = edit_project->insertNewAudioTrack(
        te::TrackInsertPoint(nullptr, te::getAllTracks(*edit_project).getLast()), nullptr);

    new_audio_track->setName(track["name"].toString());
    new_audio_track->setTags({track["id"].toString()});

    new_audio_track->getVolumePlugin()->setVolumeDb(0.0f /*track["volume"].toString().getFloatValue()*/);
    new_audio_track->getVolumePlugin()->setPan(0.0f);

    new_audio_track->setMute(false);
    new_audio_track->getVolumePlugin()->setEnabled(true);
    new_audio_track->setProcessing(true);

    for (int i = 0; i < track["regions"].size(); i++)
    {
        std::cout << ("track type on loadtracks " + track["type"].toString()) + "\n";
        if (track["type"].toString() == "audio")
        {
            addRegion(new_audio_track, track["regions"][i], project);
        }
        else if (track["type"].toString() == "video")
        {
            addVideo(new_audio_track, track["regions"][i], project);
        }
        else
        {
            addMidiRegion(new_audio_track, track, track["regions"][i], i, project);
        }
    }

And the code to add an audio region:

    std::cout << ("entered region with file_id " + region["file_id"].toString() + "\n");

    var file_var = getFileFromID(int(region["file_id"]), project);
    if (!file_var.isObject())
    {
        return;
    }

    std::cout << "got file name " + file_var["name"].toString() + "\n";

    File file(file_var["localFilePath"].toString());

    te::AudioFile af(engine, file);

    String waveclip_name = region["name"].toString();

    tracktion::ClipPosition clipPosition;

    tracktion::TimePosition start = tracktion::TimePosition::fromSeconds(0.0);
    tracktion::TimePosition end = tracktion::TimePosition::fromSeconds(af.getLength()); // getLength() is already in seconds; truncating to whole seconds would shorten the clip

    tracktion::TimeRange timeRange(start, end);

    tracktion::TimeDuration offset;

    clipPosition.time = timeRange;
    clipPosition.offset = offset;

    tracktion_engine::WaveAudioClip::Ptr clipholder =
        track->insertWaveClip(waveclip_name, af.getFile(), clipPosition, false);

    tracktion::TimePosition starttime =
        tracktion::TimePosition::fromSeconds(0.0 /*region["start_time"].toString().getDoubleValue()*/);
    tracktion::TimeDuration startoffset(
        std::chrono::duration<double>(0.0 /*region["start_truncation_duration"].toString().getDoubleValue()*/));
    tracktion::TimeDuration length(
        std::chrono::duration<double>(af.getLength() /*region["duration"].toString().getDoubleValue()*/));
    tracktion::TimeDuration truncatedduration(
        std::chrono::duration<double>(0.0
                                      /*region["truncated_duration"].toString().getDoubleValue()*/));

    if (truncatedduration.inSeconds() != 0.0)
    {
        length = truncatedduration;
    }
    else
    {
        length = length - startoffset;
    }

    clipholder->setStart(starttime, false, true);
    clipholder->setOffset(startoffset);
    clipholder->setLength(length, true);
    //
    //    tracktion::TimeDuration envelopeAttack(std::chrono::duration<double>(
    //            region["enevelope_attack"].toString().getDoubleValue()));
    //    tracktion::TimeDuration envelopeRelease(std::chrono::duration<double>(
    //            region["enevelope_release"].toString().getDoubleValue()));
    //
    //    clipholder->setFadeIn(envelopeAttack);
    //    clipholder->setFadeOut(envelopeRelease);
    //    float voldb = Decibels::gainToDecibels(
    //            region["envelope_sustain"].toString().getFloatValue());
    //    std::cout << "Region sustain in decibels: " + String(voldb) + "\n";
    clipholder->Clip::setMuted(false);
    clipholder->setGainDB(0.0f /*voldb*/);

    //    clipholder->timerCallback();
    num_regions_done++;

This seems to be working nicely.

Forgot to answer:

Yes, DeviceManager::audioDeviceIOCallbackWithContext is called correctly.

And yes, DeviceManager::audioDeviceAboutToStart shows the right audio device being used.

Sorry for the delay.

I can’t see anything obvious in that code but you’re doing a lot there and setting a lot of unnecessary properties.

    new_audio_track->getVolumePlugin()->setVolumeDb(0.0f /*track["volume"].toString().getFloatValue()*/);
    new_audio_track->getVolumePlugin()->setPan(0.0f);

    new_audio_track->setMute(false);
    new_audio_track->getVolumePlugin()->setEnabled(true);
    new_audio_track->setProcessing(true);

All of this is just setting default values.

I would start by loading a single audio file and seeing if that plays first, then build up the example to see which step breaks it.
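
A minimal version of that test, as a sketch (it reuses the Options pattern from earlier in the thread; `file` is assumed to be one known-good audio file, and the empty editFileRetriever is just a placeholder for a throwaway edit):

    // Build an empty edit, put one clip on one track, and play it.
    te::Edit::Options options{engine,
                              te::createEmptyEdit(engine),
                              te::ProjectItemID::createNewID(0),
                              te::Edit::forEditing,
                              nullptr,
                              te::Edit::getDefaultNumUndoLevels(),
                              [] { return juce::File(); }};
    auto edit = std::make_unique<te::Edit>(options);

    te::AudioFile af(engine, file);
    auto track = te::getOrInsertAudioTrackAt(*edit, 0);
    track->insertWaveClip("test", file,
                          {{tracktion::TimePosition(),
                            tracktion::TimePosition::fromSeconds(af.getLength())}, {}},
                          false);

    auto& transport = edit->getTransport();
    transport.ensureContextAllocated();
    transport.play(false); // if this is audible, re-add your other steps one by one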