How can I create a proxy when a clip plugin is added or a clip plugin's parameters are changed?

I’m working on a personal project, creating a simple audio editor: a piece of software with one Track and one Clip per Edit. Users edit a source file, cutting it or adding effects to a certain region, and render the result to a file.
If you’ve ever used one, think of Magix’s Sound Forge or iZotope’s RX series.

As the title says, I want to create a proxy file every time a clip plugin’s state changes or a plugin is added, preferably on a background thread.

What I wrote was…

// adding a plugin to the clip
if (auto plugin = showMenuAndCreatePlugin(transport.edit)) // I made the same member function as PluginDemo's
    if (auto track = te::getAudioTracks(transport.edit)[0])
        if (auto clip = dynamic_cast<te::WaveAudioClip*> (track->getClips()[0]))
            if (clip->addClipPlugin(plugin, select)) // "select" is a SelectionManager class member
                clip->changed(); // this is where I get stuck

I called clip->changed() hoping it would create a proxy and replace my AudioClip's currentPlaybackFile at the appropriate time, but nothing happens.
How can I achieve this?

The following is my own investigation, so please note that it may be totally wrong.
AudioClipBase creates the proxy in timerCallback(), whose timer is started by beginRenderingNewProxyIfNeeded() or createNewProxyAsync().
In my case the timer was triggered properly, but newProxy ended up being the same as originalFile.


I think getPlaybackFile() should return the proxy file.
Looking at the definition of getPlaybackFile(): I don’t use time-stretching, so timestretched must be false, which means af.getInfo().needsCachedProxy would have to be true for a proxy to be used.

How can I get a reference to the Clip's currently playing AudioFile and set its info.needsCachedProxy to true?
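For reference, the rough direction I’m experimenting with looks something like this (just a sketch; I’m assuming getAudioFile(), getPlaybackFile() and createNewProxyAsync() are all callable from application code, which may not be the intended usage and may differ between engine versions):

// Sketch only: inspect the clip's source/playback files and request a new
// proxy. Whether createNewProxyAsync() is publicly callable may depend on
// the engine version.
static void inspectAndRebuildProxy (te::WaveAudioClip& clip)
{
    auto sourceFile   = clip.getAudioFile();     // the original source file
    auto playbackFile = clip.getPlaybackFile();  // proxy if one exists, otherwise the source

    DBG ("needsCachedProxy: " << (sourceFile.getInfo().needsCachedProxy ? "yes" : "no"));
    DBG ("playback file: " << playbackFile.getFile().getFullPathName());

    clip.createNewProxyAsync();                  // ask for a background proxy rebuild
}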

I’m not quite sure what the question is… If you add a plugin to a clip, the clip’s output just gets played back through that plugin; it doesn’t create a new proxy file when you add a plugin.
(There is ClipFX which does this, but that is a different feature).

OK, allow me to ask the same question from a different angle.
In PitchAndTimeDemo, when I change the pitch or tempo, a file whose pitch differs from the original is created and stored in the temporary directory, and the currently playing file is swapped for it.
I want to make a function like that, triggered when a clip plugin is added or a plugin’s parameter has changed: create a temp file with the effect applied, then swap it in. Can I?
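From reading the demo, I think the relevant calls are roughly these (a sketch only; please check against the PitchAndTimeDemo source, as I may have the exact member names or defaults wrong):

// Rough sketch of the PitchAndTimeDemo-style approach: changing these clip
// properties should make the engine build a new time-stretched proxy in the
// temp directory and swap getPlaybackFile() over to it.
static void applyStretchToClip (te::WaveAudioClip& clip, double speedRatio, float pitchSemitones)
{
    clip.setAutoTempo (false);
    clip.setAutoPitch (false);
    clip.setTimeStretchMode (te::TimeStretcher::defaultMode);

    clip.setSpeedRatio (speedRatio);       // e.g. 1.5 = 50% faster
    clip.setPitchChange (pitchSemitones);  // e.g. +3.0f semitones
}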

There is ClipFX which does this, but that is a different feature

Is this a feature for loading a plugin and adding effects?
If so, how can I load a plugin? Could you show me an example?

Thanks.

Can I ask why you want a proxy file?

I’ll follow up with a Clip Effect example shortly.

Thanks! It really helps me!

There are mainly two reasons why I want to create a proxy.

  1. To keep a history of my operations. For every operation, whether the result is what I expected or not, I want a record of it. I use it for backup and for checking the results at each state. (So preferably, the temp file’s name should be something like “2020_03_20_CertainPlugin_xxxParameter_to_0.5”.)

  2. For speed. I expect to edit REALLY long audio files, in the worst case a 100-hour file! When I processed such a file, it took hours to check the result even after I had set all the plugins’ parameters properly, and it also took hours to render. So I long for an audio editor which renders a file automatically, using surplus CPU (or GPU) resources, while I’m playing back to check the result.

Maybe you’d say this isn’t a “proxy” but a “result” or a “render”. Yes, maybe that’s it.
Still, Tracktion Engine’s proxy facility seems to really match my use case! Even if there are some things to fix (it isn’t the intended use case, so there probably are), I can fix them (because it’s open source, thanks!!!). So I’m now trying to use it.

I don’t think our proxy system is what you’re after. Its sole purpose is to create a time-stretched wav version of the source file so we can memory map it and read it back quickly (the second use case is to create wav versions of compressed files, e.g. mp3, for the same purpose).

If you really do have long files, rendering them every time someone makes the smallest change is not a very user-friendly workflow.

For back-up, you probably want to actually render the track as that will contain all the plugins etc.
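As a very rough sketch of what that render call can look like (the exact Renderer::renderToFile overloads vary a little between engine versions, so check Renderer.h in the version you’re using):

// Sketch: render the whole Edit (the single track, plugins included) to a
// backup file. Check te::Renderer in your engine version for the exact
// overload and argument order.
static bool renderBackup (te::Edit& edit, const juce::File& outputFile)
{
    juce::BigInteger tracksToDo;
    tracksToDo.setBit (0);                               // just the one track

    const te::EditTimeRange range { 0.0, edit.getLength() };

    return te::Renderer::renderToFile ("Backup render",  // task description for any progress UI
                                       outputFile, edit, range, tracksToDo,
                                       true,              // usePlugins
                                       {},                // clips: empty = all
                                       true);             // useThread
}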

Perhaps if you describe the kind of application you’re building and the feature set we can make some recommendations?

I don’t think our proxy system is what you’re after.

So sad :cry:
But thank you for your advice! It helps my coding.
I’ll keep learning!

If you really do have long files, rendering them every time someone makes the smallest change is not a very user-friendly workflow.

Yes, I think so too. So what I planned was that the user could choose when temp files are made from a “preferences” setting: something like “every time a parameter changes”, “10 seconds after the last change”, “user-defined (when you hit a make-temp button)”, and so on. Implementing Tracktion Engine’s proxy system was my first step towards the “every time a parameter changes” option, but I will reconsider it. I realise I should have explained it like this from the start.
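For the “10 seconds after the last change” option I’m imagining a debounce along these lines (a sketch; kickOffBackgroundRender is just a hypothetical hook for whatever render call ends up being used):

// Sketch of a debounce: restart a timer on every change and only trigger the
// render once things have been quiet for renderDelaySeconds.
// kickOffBackgroundRender is a hypothetical placeholder, not an engine call.
struct DebouncedRenderTrigger  : private juce::Timer
{
    std::function<void()> kickOffBackgroundRender;
    int renderDelaySeconds = 10;

    void parameterOrPluginChanged()                // call from your change listeners
    {
        startTimer (renderDelaySeconds * 1000);    // restarts the countdown each time
    }

private:
    void timerCallback() override
    {
        stopTimer();

        if (kickOffBackgroundRender != nullptr)
            kickOffBackgroundRender();
    }
};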

For back-up, you probably want to actually render the track as that will contain all the plugins etc.

OK, I will do that.

Perhaps if you describe the kind of application you’re building and the feature set we can make some recommendations?

Thank you!
Here is a rough outline of my project. It repeats some of what I said before, but I’ve written it out for the sake of clarity.

  • I’m making an audio editor. Each Edit has only one Track and one Clip.
  • Its main workflow is: import a source file → cut selected regions → apply effects to selected regions (or to the whole imported audio) → export (or render) the result to a file.
  • It assumes editing really big files, around 100 hours of audio data, so managing vast amounts of data efficiently is important.

Please don’t mind what I said about making a proxy; it was one of my trials and errors. What I primarily require from the software is…

  • Batch rendering (not over many files, but many effect batches applied to one file)

  • Preserving each result of those plugin-chain passes

  • Comparing the results.

Let me know if you come up with any ideas.

Well, one thing you have to remember if you’re adding effects is that you can’t play back and render through them at the same time, as you only have a single plugin instance. So you will never be able to do some kind of background rendering as backup, it will have to be modal.

The other thing to remember is that rendering through plugins can take some time. If you have to render 100 hours of audio every time the user makes a change, no matter how often that happens, it could take a long time (many tens of minutes).

So probably the best approach would be to render sections of the Edit when they are changed and stitch them together afterwards.
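A sketch of what the stitching step could look like, using plain JUCE readers and writers (this assumes the rendered section files share the same sample rate and channel count; it’s not an engine feature, just one way to concatenate the pieces):

// Sketch: concatenate previously rendered section files, in order, into one
// output wav. Assumes all sections share the sample rate / channel layout.
static bool stitchSections (const juce::Array<juce::File>& sectionsInOrder,
                            const juce::File& outputFile, double sampleRate)
{
    juce::AudioFormatManager formats;
    formats.registerBasicFormats();

    outputFile.deleteFile();                             // FileOutputStream appends otherwise
    auto stream = std::make_unique<juce::FileOutputStream> (outputFile);

    if (! stream->openedOk())
        return false;

    juce::WavAudioFormat wav;
    std::unique_ptr<juce::AudioFormatWriter> writer (wav.createWriterFor (stream.get(), sampleRate,
                                                                          2, 24, {}, 0));
    if (writer == nullptr)
        return false;

    stream.release();                                    // the writer now owns the stream

    for (auto& f : sectionsInOrder)
        if (auto reader = std::unique_ptr<juce::AudioFormatReader> (formats.createReaderFor (f)))
            writer->writeFromAudioReader (*reader, 0, reader->lengthInSamples);

    return true;
}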


But I have to ask, why do you even need to continually render? Surely the Edit is a “session” and the user can choose to “export” their editing results when they’re done?


The only other alternative would be to create a second copy of the Edit to render while they continue to use the first, but that’s tricky and will use more memory, CPU etc.

So you will never be able to do some kind of background rendering as backup, it will have to be modal.

The only other alternative would be to create a second copy of the Edit to render while they continue to use the first, but that’s tricky and will use more memory, CPU etc.

OK, though it’s not what I expected, that’s really helpful, because now I won’t spend a lot of time on the wrong implementation. Thanks!

But I have to ask, why do you even need to continually render? Surely the Edit is a “session” and the user can choose to “export” their editing results when they’re done?

Well, I don’t know whether the following is a good explanation or not… but I use the software for several specific scientific purposes. Comparing results and keeping a history are very important. The reason non-modal or background rendering matters is that people often forget to make a history entry. So in my use case the result is of course important, but the process that leads to the “answer” is also important. That’s why I asked about continual rendering in the first place.

The other thing to remember is that rendering through plugins can take some time. If you have to render 100 hours of audio every time the user makes a change, no matter how often that happens, it could take a long time (many tens of minutes).

Yes, I think so too. But as I said, it’s for scientific use and I can afford a reasonably good workstation. I won’t use a server, and of course not a supercomputer, but I can manage a lot of memory and disk space. I can’t say “so it’s OK”, but what you describe “may be” OK compared with a weak laptop. I still have to work out some tricks around renders taking minutes, though; it will be a ridiculous disaster unless I can manage the proper timing of when to render :expressionless:

Thanks.

Well, if it were me and the use case is fairly constrained (i.e. you have a rough idea of how this will be used), then I’d probably just create a second Edit and render that. You can do that in the background; it will be a “snapshot” of the current time and won’t interfere at all with the copy you’re working on.

How you manage that is then up to you.

This isn’t really appropriate in a DAW because it takes ages to load a session with hundreds of plugins all initialising etc. It sounds like you don’t have that limitation.
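A rough sketch of that snapshot idea (the Edit constructor has changed between engine versions, so treat the construction line as an assumption and check Edit.h in the version being used):

// Sketch: build a "snapshot" Edit from a copy of the live Edit's state, so it
// can be rendered in the background while the original stays editable.
// NOTE: this uses the older (Engine&, ValueTree, EditRole, LoadContext*, int)
// constructor; newer engine versions construct Edits differently.
static std::unique_ptr<te::Edit> createSnapshotForRendering (te::Edit& liveEdit)
{
    auto snapshotState = liveEdit.state.createCopy();

    return std::make_unique<te::Edit> (liveEdit.engine, snapshotState,
                                       te::Edit::forRendering, nullptr, 0);
}

The returned snapshot can then be passed to something like the renderToFile sketch above without touching the Edit the user is working in.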

Very good point!
Thank you for your kind advice and for taking the time.

I will try to implement it like that: making a snapshot and rendering it in the background.
If I run into problems, I’ll write here.