I’m trying to replace some parts of an existing JUCE app with Tracktion Engine, specifically the new tracktion graph taking over from the JUCE graph. Well, not so much replacing it; rendering it is more precise.
I want to ask what the recommended way is for going between the different buffer classes?
The existing app uses juce::AudioBuffer and juce::MidiBuffer; these classes are used all through the code.
Then in Tracktion there is stuff that is new to me, like choc::buffer::ChannelArrayView.
I know the underlying data is fundamentally the same, but are there any functions for converting between these in an easy and efficient way?
They’re actually fairly different in their approach.
Choc has “view” and “buffer” classes, where the views don’t own the data. And they can handle either interleaved or separate channel layouts. So you could create a juce buffer from a choc one (which would involve allocating and possibly de-interleaving), but going the other way is trickier because you couldn’t e.g. create an interleaved view into a juce buffer.
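To make the layout point concrete in plain C++ (no JUCE or choc involved, the function and names here are mine): an interleaved buffer stores frames as L0 R0 L1 R1 ..., while a channel-array layout keeps one contiguous run of samples per channel, so moving from an interleaved source to separate channels is the copy step mentioned above:

```cpp
#include <cstddef>
#include <vector>

// De-interleave: frames stored L0 R0 L1 R1 ... are copied into one
// contiguous array per channel. This copy is what converting an
// interleaved choc view into a separate-channel (juce-style) buffer
// would involve.
std::vector<std::vector<float>> deinterleave (const float* interleaved,
                                              std::size_t numChannels,
                                              std::size_t numFrames)
{
    std::vector<std::vector<float>> channels (numChannels, std::vector<float> (numFrames));

    for (std::size_t f = 0; f < numFrames; ++f)
        for (std::size_t ch = 0; ch < numChannels; ++ch)
            channels[ch][f] = interleaved[f * numChannels + ch];

    return channels;
}
```

Going the other way (separate channels into a freshly allocated interleaved block) is the mirror image of the same loop; either direction touches every sample, which is why a zero-copy view can only be made when the source is already in the layout you want.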
So I’d say: if you need to convert, just write your own helper functions as necessary. It’s not difficult.
I don’t know if the tracktion engine has any such helpers… it’d be the logical place to put such functions, as it already has dependencies on both libraries.
Thanks for getting back to me.
I understand. I like the View concept.
I found this in tracktion:
/** Converts a juce::AudioBuffer to a choc::buffer::BufferView. */
template<typename SampleType>
inline choc::buffer::BufferView<SampleType, choc::buffer::SeparateChannelLayout> toBufferView (juce::AudioBuffer<SampleType>& buffer)
{
    return choc::buffer::createChannelArrayView (buffer.getArrayOfWritePointers(),
                                                 (choc::buffer::ChannelCount) buffer.getNumChannels(), (choc::buffer::FrameCount) buffer.getNumSamples());
}
The other way:
/** Creates a juce::AudioBuffer from a choc::buffer::BufferView (referring to the same data, no copy). */
template<typename SampleType>
inline juce::AudioBuffer<SampleType> toAudioBuffer (choc::buffer::ChannelArrayView<SampleType> view)
{
    return juce::AudioBuffer<SampleType> (view.data.channels, (int) view.getNumChannels(), (int) view.data.offset, (int) view.getNumFrames());
}
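Worth noting about that second helper: it hands the juce buffer the view’s existing channel pointers, so nothing is copied and writes through either object hit the same memory. Here is a dependency-free sketch of that non-owning idea in plain C++ (the struct and names are mine, not choc’s or JUCE’s):

```cpp
#include <cstddef>

// Minimal non-owning channel-array view, illustrating (not reproducing)
// what a choc::buffer view does: it holds channel pointers into someone
// else's storage, so writes through the view land in the original buffer.
struct FloatChannelView
{
    float* const* channels = nullptr;   // one pointer per channel
    std::size_t numChannels = 0, numFrames = 0;

    float& sample (std::size_t channel, std::size_t frame) const
    {
        return channels[channel][frame];
    }
};

// Scale every sample in place through the view: no allocation, no copy.
void applyGain (const FloatChannelView& view, float gain)
{
    for (std::size_t ch = 0; ch < view.numChannels; ++ch)
        for (std::size_t f = 0; f < view.numFrames; ++f)
            view.sample (ch, f) *= gain;
}
```

Because only pointers are stored, processing through such a view is as cheap as processing the buffer directly, which is what makes a hybrid juce/choc setup viable without extra copies, as long as the data stays in a separate-channel layout.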
I’ll have a discussion with myself and figure out whether it would be cleaner to move all the audio in my app over to tracktion, or to try this hybrid approach.