Continuous audio data in a divided project

Sometimes a film project is not continuous on the timeline.
It’s divided into 5~6 sections, each with an IN point and an OUT point marking where the section starts and ends. The sections are not necessarily the same length, and the space between one section’s OUT and the next section’s IN is blank.

But JUCE audio plugins process audio in blocks (1024 samples, for example), so the audio and video rarely end at the same point, because the length of the audio data in samples must be a multiple of 1024… So if we include all video frames, the corresponding audio data has to be slightly longer, which means it includes some invalid samples.

I’m doing some encoding work, so I want all valid audio data to be continuous.
Encoding is block-based and needs valid data only, but the data in the DAW follows the timeline strictly, regardless of whether a section has ended or whether the data is blank.

For example, Section 1 ends at sample 480,000 (not a multiple of 1024) and Section 2 starts at sample 1,024,010 (not a multiple of 1024):

  1. The plugin will get samples 479,232~480,256 (both multiples of 1024), and the last 256 samples are invalid…
  2. The plugin will continue getting data between Section 1 and Section 2, but it is all invalid.
  3. The audio data is not valid again until the block containing the beginning of Section 2. This time the plugin gets 1,024,000~1,025,024, where the first 10 samples are invalid.
    So the two neighbouring audio blocks will be [479,232~480,256][1,024,000~1,025,024], but I actually want all the data to be valid, namely [479,232~480,000 + 1,024,010~1,024,266] (1024 samples total).

So is there a way to make the audio data continuous when the project is divided? e.g. can a JUCE plugin know where a section starts and ends (i.e. get the positions of the IN and OUT points)?

The position info for a plugin is found in the AudioPlayHead. I am not aware of a DAW that has a notion of reels, which I think is what you are talking about (but sure, I haven’t used them all, so one might exist).
That’s why post production still works in reels: so that you don’t have to load a 212-minute Ben Hur project.
Since each reel-project now has its own scope, there is no problem counting from zero up to the crossfade mark in the last picture of each reel.

Does that solve your problem?

A mildly tedious workaround would be to make sure you’re using a sample rate that is an integer multiple of your frame rate (for example, at 24 fps a 48 kHz audio file has exactly 2,000 samples per frame). Then you can keep track of the sample positions of your in/out points regardless of buffer size, which is really only relevant for playback anyway.

Even if the whole film is divided into reel-based projects, we can’t guarantee that each reel’s length in samples is a multiple of 1024, while the audio must be a multiple of 1024 (the granularity of the encoding is 1024 samples), so the encoding problem still exists…
If Reel 1 ends at 00:19:00:23 (54,766,000 samples total), we have to analyze and encode the audio data block by block: 0~1023, 1024~2047 … 54,765,568~54,766,591.
The plugin gets 1024 samples at a time, so the last audio sample must extend past the last picture, and then audio and video are not aligned…

There’s another weird thing…
I stepped into AudioPlayHead::getCurrentPosition and found some code getting the current sample:

```cpp
if (info.isPlaying
    || transport.GetTimelineSelectionStartPosition (&info.timeInSamples) != AAX_SUCCESS)
    check (transport.GetCurrentNativeSampleLocation (&info.timeInSamples));
```

But when I check info.timeInSamples, the value is different each time.
If I always play from the beginning, the value is -41 the 1st time, -983 the 2nd time, then -440 the 3rd time… The value seems random and is always negative (or always smaller than the current position, within one buffer size). Theoretically it should always be 0…
Any solutions to this problem?

Which encoder are you talking about? I think they all support being fed arbitrary numbers of samples. Even frame-based encodings usually have a property in each frame saying how many samples it contains.

Also, your processing should be able to cope: it is emphasised many times that every plugin has to handle being called with smaller AudioBuffers than announced in prepareToPlay.
Alternatively, when rendering out you could discard the additional samples (beware of tails from plugins :wink: )

If you put the reels together into a feature (DCP or DVD), you will probably retain the reels and start with sample 0 for each reel, won’t you?

I don’t mean to lecture you, please don’t get me wrong, but I don’t see a point in the workflow where you would actually convert from a sample position in a reel to the sample position in the whole feature, and where half frames would become a problem.

The problem with the positions actually sounds bad. I once noticed that the meanings of the fields were not clear to me, and at one point I even thought they were inconsistent across hosts, but I abandoned that project, so I never followed it through to a solution.

Good luck getting that sorted, or finding help here.