InsertClipWithState and Clip Lengths for WAV Files with Embedded Tempo Information

When I’m adding new clips to my track, what’s the correct way to tell Tracktion Engine to either use or ignore embedded tempo information in WAV files?
What I’m hoping to learn is how to use clip tracks and audio clips together so that the embedded tempo data, playback, and thumbnails all stay aligned. I’m testing this on Tracktion Engine v2.1.0.

Here’s what’s happening and what I understand so far:

When importing the clip into Reaper you can choose to use or ignore the tempo information. The tempo in the file’s header says it’s 140 bpm, and the numbers work out: the real length is 13.7___ seconds and the tempo-adjusted length is 16 seconds.

Now in my Tracktion Engine project, when I drop the clip onto the timeline I get a preview that’s 16 seconds long, but with only 13.7___ seconds’ worth of thumbnail and playback.

I see that the value for the clip’s length is being changed in tracktion_ClipTrack.cpp here: auto newLength = ts.toTime (endBeat) - ts.toTime (startBeat);. If I change my LoopInfo so that the oneShot property is true, the issue is resolved. This is of course a hack; one-shots should still be able to use embedded tempo information, but I don’t know how to accomplish this the “real way”.
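For what it’s worth, the two lengths are consistent with that beat-based calculation. This is a worked sketch of the arithmetic with my own free functions, not TE code, and it assumes an edit tempo of 120 bpm (not stated above): at a constant tempo, `ts.toTime (beat)` is just `beat * 60 / bpm`, so a 140 bpm file of roughly 13.714 s holds 32 beats, and 32 beats at 120 bpm take exactly 16 s.

```cpp
#include <cassert>
#include <cmath>

// Sketch only -- these are my own helpers, not Tracktion Engine code.
// How many beats a file contains at its embedded (file) tempo:
double beatsInFile (double lengthSeconds, double fileBpm)
{
    return lengthSeconds * fileBpm / 60.0;
}

// How long those beats take at the edit tempo -- the constant-tempo
// equivalent of ts.toTime (endBeat) - ts.toTime (startBeat):
double beatsToTime (double numBeats, double editBpm)
{
    return numBeats * 60.0 / editBpm;
}
```

So the 16-second clip length is simply the 32 beats re-timed at the (assumed) 120 bpm edit tempo, while the un-stretched audio and thumbnail still run for ~13.714 s.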

Here’s the file I’m working with.
EmbeededTempoInfoClip.wav.txt (5.0 MB)

Additionally, I see that there are lots of changes to tracktion_ClipTrack.cpp in v3.0, and I’m in the process of bringing my version up from v2.1.0. Is there anything I should keep in mind as I work through this issue and upgrade Tracktion Engine at the same time?

Firstly, out of the box TE will only read tempo info from the files’ metadata, so they need the ACID chunk for the number of beats/tempo.
Have a look at LoopInfo::init to see if that’s being read from your files.

Secondly, where this is used is determined by the “auto-tempo” property.
So call AudioClipBase::setAutoTempo (false) to disable it.
It will be enabled by default if the source file has tempo/beat information in it.

That should cover the playback aspect.
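To make the rule concrete, here’s a toy model of that behaviour — these are my own structs, not TE’s LoopInfo or AudioClipBase, and the exact conditions TE checks may differ: auto-tempo defaults to on when the source file carries usable tempo/beat metadata and isn’t a one-shot, and a setAutoTempo (false) call switches it off.

```cpp
#include <cassert>

// Toy stand-in for LoopInfo (not the real TE class): holds the values
// that would come from the ACID chunk.
struct FakeLoopInfo
{
    double bpm = 0.0;       // 0 when the file has no tempo metadata
    int numBeats = 0;       // likewise
    bool oneShot = false;

    bool hasUsableTempoInfo() const
    {
        return bpm > 0.0 && numBeats > 0 && ! oneShot;
    }
};

// Toy stand-in for an audio clip: auto-tempo is enabled by default
// when the source file has tempo info, and can be disabled explicitly.
struct FakeClip
{
    FakeLoopInfo loop;
    bool autoTempo = false;

    void initialise()                 { autoTempo = loop.hasUsableTempoInfo(); }
    void setAutoTempo (bool enabled)  { autoTempo = enabled; }
};
```

In the real engine you’d just call AudioClipBase::setAutoTempo (false) on the clip after insertion; the model above only illustrates why the 16-second length appears by default for this particular file.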

Finally, the thumbnail will be generated without any knowledge of the above, so you have to scale it manually. It’s a bit more complicated than just getting the clip tempo and the Edit tempo though, as the Edit tempo can change over time.
So there’s a helper class called AudioSegmentList you can use, which will already have cached this into segments wherever tempo changes happen (including chunks for ramps etc.).

Here’s how we do it in Waveform to give you a rough idea:

    else if (c.getAutoTempo())
    {
        // Draw time-stretched waveform
        if (const auto sampleRate = thumb.file.getSampleRate();
            sampleRate > 0.0)
        {
            TimePosition end;

            // Iterate the segments, drawing each in turn
            for (auto& seg : c.getAudioSegmentList().getSegments())
            {
                end = seg.start + seg.length;
                const int x1 = editTimeToX (seg.start);
                const int x2 = editTimeToX (end);

                if (x2 < left || x1 > right)
                    continue;

                if (x1 == x2)
                    continue;

                const auto segmentTimeRange = tc::timeRangeFromSamples (seg.getSampleRange(), sampleRate);

                const auto visiblePixelRange (juce::Range<int> (left, right).getIntersectionWith ({ x1, x2 }));
                const auto startProportion = (visiblePixelRange.getStart() - x1) / double (x2 - x1);
                const auto endProportion = (visiblePixelRange.getEnd() - x1) / double (x2 - x1);
                const auto visibleTimeRange = tc::TimeRange (segmentTimeRange.getStart() + segmentTimeRange.getLength() * startProportion,
                                                             segmentTimeRange.getStart() + segmentTimeRange.getLength() * endProportion);

                if (c.getWarpTime())
                {
                  /// Handle warp time section
                }
                else
                {
                    AudioStripBaseHelpers::drawChannels (g, thumb,
                                                         { visiblePixelRange.getStart() + xOffset, y, visiblePixelRange.getLength(), h },
                                                         visibleTimeRange,
                                                         c.isLeftChannelActive(), c.isRightChannelActive(),
                                                         gainL, gainR);
                }
            }
        }
    }

The main thing that changed with v3 was support for ContainerClips. These are clips that can contain other clips, so ClipTrack is now a ClipOwner, as is ContainerClip. So there’s just an extra level of abstraction and the API has been cleaned up a bit. It should work mostly the same way though.


Thanks as always for your quick and thorough replies.