Hi all! I'm working on a project that does some very heavy lifting during the getStateInformation() call. I'm wondering if JUCE provides any queryable context as to when the call is occurring (i.e. startup, shutdown).
In some DAWs (Reaper, for example), getStateInformation() is called each time a parameter gesture is completed. The GSI call happens on the message thread, which causes visible hangs at the end of each move. There might be options for optimizing the code in that routine, but I was wondering what my options are for threading the work, specifically when GSI is called outside of setup/teardown.
(Obviously these routines are crucial, and need to block and assemble serialization data immediately during plugin start and finish.)
Would caching the MemoryBlock returned by your getStateInformation() be an option?
But it surprises me that serialising your state is so complex that it would take a measurable amount of time…
Hi @daniel - thanks for the reply! Yes, I understand the surprise - and there isn't much of a good way around it in this particular case.
I do cache the memory block, but this means that in hosts that only save when Save is selected or on program exit, we can end up missing crucial information by being a step behind (i.e. we return the cache, then rebuild it on a GSI request). I suppose I could rebuild the cache on a timer every so often, but that seems like shotgunning the problem.
Reaper saves all the time - after each gesture - and it is really the source of my immediate pain. I've considered adding custom logic keyed on the host name, but I think that's a slippery slope. Less shotgun, more sniper - but it's also not a universal solution to this problem. (I'm also not certain that there is one.)
I usually keep my state in a ValueTree alongside the APVTS (it has the public state member, where you can attach all your properties and hierarchical stuff). It saves quicker than you can say "Profiler".
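For example (just a sketch, assuming an APVTS member called apvts - the property and type names here are made up):

```cpp
// Non-parameter settings as properties on the APVTS state tree...
apvts.state.setProperty ("lastPresetName", "Init", nullptr);

// ...and hierarchical stuff as child trees. Everything here gets
// written out together with the parameters when the tree is saved.
juce::ValueTree uiSettings ("UI_SETTINGS");
uiSettings.setProperty ("scale", 1.5, nullptr);
apvts.state.appendChild (uiSettings, nullptr);
```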
Maybe you should consider putting more high level information into your memory block, rather than low level details… but I don’t know your use case.
I would advise against doing any host-specific hackery here. We save constantly too (when a parameter changes, or the UI is opened/closed, etc.). It's the only way to get reliable backups for sessions.
How long are we talking here? Why don't you just use a "dirty" flag, so getStateInformation() rebuilds the memory block when required (blocking if necessary), but simply returns the old one if nothing has changed? (Rough sketch below.)
But really, it sounds like you might be returning too much stuff or calculating it at the wrong time?
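Something like this rough sketch (assuming an APVTS member called apvts, the usual JUCE module headers plus `<atomic>`, and that something like a parameter listener flips stateDirty whenever the state actually changes):

```cpp
// Hypothetical members:
std::atomic<bool> stateDirty { true };
juce::CriticalSection cacheLock;
juce::MemoryBlock cachedState;

void MyProcessor::getStateInformation (juce::MemoryBlock& destData)
{
    const juce::ScopedLock sl (cacheLock);

    if (stateDirty.exchange (false))
    {
        // Only pay for the expensive rebuild when something has changed.
        cachedState.reset();
        juce::MemoryOutputStream stream (cachedState, false);
        apvts.copyState().writeToStream (stream);  // or whatever your heavy build step is
    }

    destData = cachedState;  // otherwise just hand back the cached block
}
```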
I use APVTS internally, for a large parameter set (10-20k params - experimental). There are certainly a few ways to cut this chicken, and I agree that host-specific hackery is generally bad mojo. Modifying a big XML tree might certainly be better than regenerating it each time, though I'd eat the tree lookup time. If that's the case, I can store references to parameter nodes by name in a hashmap. I think most of the speed loss on this end is in the createXml() calls, which allocate internally IIRC - now, I can obviously optimize some things here. BUT - I'm wondering, in the case that a large serialization block is needed for a plugin, it seems like extended context might be useful.
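Roughly what I have in mind for the lookup cache (just a sketch; it assumes each child node in my state tree carries hypothetical "id" and "value" properties, and uses std::map for brevity):

```cpp
#include <map>

// One-time lookup table - ValueTree is a cheap ref-counted handle,
// so the map entries just reference the same underlying nodes.
std::map<juce::String, juce::ValueTree> nodeByName;

void buildNodeLookup (const juce::ValueTree& stateTree)
{
    nodeByName.clear();

    for (const auto& child : stateTree)
        nodeByName[child["id"].toString()] = child;
}

// Later: update a node in place instead of regenerating the whole tree.
void updateCachedNode (const juce::String& name, const juce::var& newValue)
{
    auto it = nodeByName.find (name);

    if (it != nodeByName.end())
        it->second.setProperty ("value", newValue, nullptr);
}
```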
I think it is time to scrap that XML demo code for writing the parameter state, for good. ValueTree has a performant way to serialise and deserialise directly to binary:
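Roughly like this (assuming an AudioProcessorValueTreeState member called apvts in your processor):

```cpp
void MyProcessor::getStateInformation (juce::MemoryBlock& destData)
{
    // Write the whole state tree as compact binary - no XML intermediate.
    juce::MemoryOutputStream stream (destData, false);
    apvts.copyState().writeToStream (stream);
}

void MyProcessor::setStateInformation (const void* data, int sizeInBytes)
{
    auto tree = juce::ValueTree::readFromData (data, (size_t) sizeInBytes);

    if (tree.isValid())
        apvts.replaceState (tree);
}
```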
Yes, if the stored blob turns out to be the old XML format, tree.isValid() would return false, and you can have another go at the data…
The drawback with that migration is that an updated session wouldn't work with the old version any more… that is unavoidable.
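i.e. the loading side of the migration could look roughly like this (a sketch, assuming the old version saved XML via copyXmlToBinary()):

```cpp
void MyProcessor::setStateInformation (const void* data, int sizeInBytes)
{
    // Try the new binary format first...
    auto tree = juce::ValueTree::readFromData (data, (size_t) sizeInBytes);

    // ...and fall back to the legacy XML blob for sessions saved by the old version.
    if (! tree.isValid())
        if (auto xml = getXmlFromBinary (data, sizeInBytes))
            tree = juce::ValueTree::fromXml (*xml);

    if (tree.isValid())
        apvts.replaceState (tree);
}
```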
On the edge between the tree and binary port - or at binary-to-binary edges in general moving forward? I had a few issues with schema changes in the past, which I worked around by manually managing the serialization process alongside the APVTS. (I also store non-param out-of-band data.)