Notification on finished recording

Hi,

If I start/stop recording using the TransportControl, how can I get notified when the recording is actually finished (the writing of the file is complete etc)?

I can add a ChangeListener to the transport, but the change notification does not tell me what has happened, just that something has changed. What I need is to get hold of the newly recorded audio clip and/or audio file.

Thanks
Erik

Sorry for the late reply, I’ve been away for a week.

If you call TransportControl::stop, it will block until the files have been flushed to disk and the clips created. Can you use this operation?
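
For illustration, a minimal sketch of that flow (assumptions: the stop (discardRecordings, clearDevices) overload, a single audio track, the new clip landing at the end of the track, and getOriginalFile as the clip's file accessor – verify all of these against your engine version):

```cpp
// Sketch only: stop() blocks until recorded files are flushed and clips created.
transport.stop (false, false); // discardRecordings = false, clearDevices = false

// After stop() returns, the newly created clip should be on the track:
if (auto* track = te::getAudioTracks (*edit)[0])
    if (auto* clip = dynamic_cast<te::WaveAudioClip*> (track->getClips().getLast()))
        juce::File recorded = clip->getOriginalFile();
```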

No problem and thanks for answering!

Ah, TransportControl::stop blocks, that’s good to know! Is the same true for play and record?

Specifically: is it safe to do like this from the message thread:

std::string MainComponent::record()
{
    JUCE_ASSERT_MESSAGE_THREAD

    auto& transport = edit->getTransport();
    // Error checking
    if (transport.isRecording()) return json_error("alreadyRecording");
    if (transport.isPlaying()) return json_error("playing");
    
    auto audioTracks = te::getAudioTracks (*edit);
    if (audioTracks.isEmpty()) return json_error("noTracks");
    auto in = getTrackInputDeviceInstance(*audioTracks[0], edit); // helper defined elsewhere
    if (!in) return json_error("noInput");

    // Start recording
    transport.record (false);
    
    // Get file path to recorded file - IS THIS RELIABLE??
    auto f = in->getRecordingFile();
    
    return "{\"result\": \"OK\", \"file\": " + quote(f.getFullPathName()) + "}";
}

Regarding the stop call: my worry is that even if stop blocks, recording may be stopped from other places in the application, including a “max recording length” timer. Therefore I need to make sure that the notification is always broadcast, regardless of how or why playback stopped.

Also, aren’t there situations when the engine “automatically” stops playback or recording (device list changed, running out of disk space etc)?

I’m thinking of doing something like this:

void MainComponent::changeListenerCallback (ChangeBroadcaster* source)
{
    if (!edit) return;
    auto& transport = edit->getTransport(); // getTransport() returns a reference

    if (source == &transport)
    {
        if (!isRecording && transport.isRecording())
        {
            // Recording has just started
            isRecording = true;
        }

        if (isRecording && !transport.isRecording())
        {
            // Recording has just stopped
            isRecording = false;
        }
    }
}

Would this approach be safe from races?

You can’t call getRecordingFile after a recording has stopped, it’s only valid during recording and even then may not be what the final file ends up as (e.g. if it’s sliced in to multiple takes).

The second method may work, but as a ChangeBroadcaster coalesces multiple calls, you might miss callbacks if start/stop operations happen within the same message loop. That is very unlikely in a real application though, so what you’ve written would probably work.
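
The coalescing pitfall can be modelled in isolation. In this sketch (plain C++, no engine involved; RecordingEdgeDetector is a made-up name), a start immediately followed by a stop in the same message loop results in a single coalesced callback in which no edge is visible:

```cpp
#include <cassert>

// Tracks recording-state transitions the way the changeListenerCallback above does.
struct RecordingEdgeDetector
{
    bool wasRecording = false;

    // Called from the (coalesced) change callback with the *current* state.
    // Returns +1 for a detected start, -1 for a detected stop, 0 for no edge.
    int onChange (bool isRecordingNow)
    {
        int edge = 0;
        if (! wasRecording && isRecordingNow)      edge = +1;
        else if (wasRecording && ! isRecordingNow) edge = -1;
        wasRecording = isRecordingNow;
        return edge;
    }
};

// Normal case: one callback per state change - both edges are seen.
//   onChange (true)  => +1, onChange (false) => -1
// Coalesced case: record() then stop() before the listener runs. Only one
// callback arrives, carrying the final state - no edge is detected:
//   onChange (false) => 0
```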

Probably the best method however would be to listen to the track’s state to be notified when clips are added and you can get the source file from them?
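
A hedged sketch of that idea, assuming JUCE’s ValueTree::Listener API and Tracktion’s public Track::state tree (the te::IDs identifiers below are my guess at the relevant ones – check them against your engine version):

```cpp
// Sketch only: notifies when a clip child is added to the track's state tree.
struct ClipWatcher : public juce::ValueTree::Listener
{
    ClipWatcher (te::AudioTrack& t) : state (t.state)  { state.addListener (this); }
    ~ClipWatcher() override                            { state.removeListener (this); }

    void valueTreeChildAdded (juce::ValueTree&, juce::ValueTree& child) override
    {
        if (child.hasType (te::IDs::AUDIOCLIP))
            DBG ("Clip added, source: " << child[te::IDs::source].toString());
    }

    juce::ValueTree state;
};
```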

You can’t call getRecordingFile after a recording has stopped, it’s only valid during recording and even then may not be what the final file ends up as (e.g. if it’s sliced in to multiple takes).

Looking at the source code (WaveInputDeviceInstance::applyLastRecordingToEdit), it seems to me that if there is only a single take, the final file will have the same name as the file being recorded – correct?

Probably the best method however would be to listen to the track’s state to be notified when clips are added and you can get the source file from them?

The problem for me with this approach is that I need to read the audio file from another app during recording (for some CPU-intense cloud-based analysis). If the analysis cannot start until the file has finished recording, that would mean a long wait for the user, so unfortunately that is not an option in my use case.

Plan B is to roll my own audio recorder (e.g. based on the JUCE AudioRecordingDemo). I tried it and it works well, but then I would have to sync that audio file to what’s being played back by the app in case of overdubs and so on – exactly the kind of complex stuff that the tracktion engine so elegantly solves for me! So if it is possible to achieve this using the tracktion engine, I would most certainly prefer that!

Ok, well that’s a different use case to what I thought you first described.
getRecordingFile is precisely for getting the file during recording. For example we use that along with the RecordingThumbnailManager to get the appropriate live thumbnail.

I think you’d still have some problems reading the file whilst it’s being actually written though wouldn’t you?

Are you sure adding an “observer” plugin that sends the data to your cloud service isn’t a more reliable approach?

Ok, well that’s a different use case to what I thought you first described.

Yeah, sorry, I was being unclear!

getRecordingFile is precisely for getting the file during recording. For example we use that along with the RecordingThumbnailManager to get the appropriate live thumbnail.

Alright! But when recording stops, the upload/analysis may be only halfway through the file. So the question is what happens at that point! If the recording file is moved or removed, that would be a problem on Windows, I suspect. (On Unix/Mac the reading process’s file handle would still be valid.)

I think you’d still have some problems reading the file whilst it’s being actually written though wouldn’t you?

It has actually proven to work well for us in the past, at least on Mac and Windows (which are our target platforms).

Are you sure adding an “observer” plugin that sends the data to your cloud service isn’t a more reliable approach?

I was hoping to avoid that because that would mean a lot of code/logic duplication (as we still need to support static file upload not involving the C++ audio server at all). We used to do it like that and while it worked it quickly becomes a mess…

I think I’ve lost sight of the problem then. It seems to me what you want to do is the following:

  • Start recording
  • Use getRecordingFile to find the file being recorded to
  • Poll that for new WAV data and upload it
  • Assuming you’re using standard recording (i.e. not takes etc.) keep hold of that file path
  • When recording stops, make a note of the file length
  • Keep your upload thread going until the file length has been reached
  • If the user deletes or renames the file in this window, you’ll probably have to cancel the upload process
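
The “poll for new data” step above can be sketched in plain C++ (no engine code involved; FileTailer and the consume callback are made-up names, and the callback is where the actual upload would happen):

```cpp
#include <cstdint>
#include <fstream>
#include <functional>
#include <string>
#include <vector>

// Reads any bytes appended to `path` since the last call and hands them to
// `consume` (which would do the actual upload). Returns the bytes forwarded.
struct FileTailer
{
    std::string path;
    std::uint64_t offset = 0;

    std::uint64_t poll (const std::function<void (const std::vector<char>&)>& consume)
    {
        std::ifstream in (path, std::ios::binary);
        if (! in) return 0;

        in.seekg (0, std::ios::end);
        const auto end = static_cast<std::uint64_t> (in.tellg());
        if (end <= offset) return 0; // nothing new yet

        std::vector<char> chunk (static_cast<std::size_t> (end - offset));
        in.seekg (static_cast<std::streamoff> (offset));
        in.read (chunk.data(), static_cast<std::streamsize> (chunk.size()));
        consume (chunk);
        offset = end;
        return chunk.size();
    }
};
```

Once recording stops, note the final file length and keep calling poll until offset reaches it; if the file disappears or shrinks, cancel the upload.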

Is there something I’m missing here?

Yes, that summarizes it pretty well!

But, just to clarify:

  1. Apart from the edge case where ChangeBroadcaster merges two calls, is the pattern described above (using the changeListenerCallback) guaranteed to catch every transition between the recording and non-recording states?

  2. Obviously things will get messy if the user moves or deletes the recorded audio file, but what I’m wondering is if the Tracktion Engine will move or delete the recorded file after recording has stopped! So provided only standard recording is used, can I count on the file being around after recording has stopped?

And an additional question:

Are you sure adding an “observer” plugin that sends the data to your cloud service isn’t a more reliable approach?

Even if I don’t want to do it in this case, it may still be very useful in other situations – but in this case, how would I synchronize the data? Is it e.g. possible to get the stream time for when recording began in a reliable way?

I think so… As long as you actually stop the playback when you stop recording. If you stop an individual InputDevice, i.e. in a punch in/out operation, the change won’t be broadcast (as it’s not a change in the TransportControl).

It might be that we can add a more explicit TransportControl::Listener callback if there’s an unambiguous one that would be beneficial in a wide range of uses.

Again, I think so. I think once a recording starts, the file won’t change location.
But I’m hesitant to guarantee this as I can see useful features in the future where a temp intermediate file is recorded to and then chopped up once the recording ends. One reason for doing things that way would be to ensure multiple track recordings have exactly the same length when stop is pressed.

This stuff is kind of straightforward with WAV but when you deal with other formats like FLAC, it gets tricky to make guarantees about what happens.

Good point. I don’t think you could at the moment, at least not at audio block level. It depends on how accurate you need this to be, really.

Ok, I’ll continue along this path for now to see how well it works in practice.

Thank you so much for your time!

After spending a few more hours on this matter, I keep coming back to the issue that I would like to know the (stream) time for when playback and/or recording started. Having that single piece of information would make it possible to synchronize various things in a much more reliable and convenient way, I believe.

It might be that we can add a more explicit TransportControl::Listener callback if there’s an unambiguous one that would be beneficial in a wide range of uses.

That would indeed be useful, especially if that callback would receive something like an action identifier (playback started, recording started, playback stopped, etc) and a timestamp for that action!

Ok, I’ll think about this. The reason I’m hesitant is that we’re currently transitioning to a new playback system and I’d like to fully remove the old one before adding new features.

Are you aware of TransportControl::getTimeWhenStarted? That might be enough to determine the start time of a recording?

Ok, I’ll think about this. The reason I’m hesitant is that we’re currently transitioning to a new playback system and I’d like to fully remove the old one before adding new features.

Fair enough! I’ll be looking forward to it! 🙂

Are you aware of TransportControl::getTimeWhenStarted? That might be enough to determine the start time of a recording?

As I understand it, getTimeWhenStarted() just returns TransportState::startTime, which is the point on the Edit timeline from which playback started, right?

transport.position = 4.0;
transport.play (false);
// transport.getTimeWhenStarted()   =>  4.0

In order to synchronize things with the “outside world”, I believe I need some kind of reference, such as the value of streamTime when the first block of the playback/recording was processed…

BTW: DeviceManager::getCurrentStreamTime seems pretty useful in this regard but the documentation says that it should not be used – why not?

I think getCurrentStreamTime will be removed when the old engine gets removed.
I need to think about this a bit more when that happens as the syncing and “stream time” will be different notions.