I’ll try to make myself clear; any help would be much appreciated…
Currently I have an AudioDeviceManager using DirectSound, with an AudioProcessorPlayer registered as its callback; the player drives an AudioProcessorGraph which contains my AudioProcessors.
As soon as the DirectSound device is opened, a continuous stream is sent to the player, which passes it to the graph, which in turn calls all (?!) processors (even the ones that are not connected ^^’), with the connected ones called in the order of the output chain.
Right now, the processor that manages my inputs comes first and sets the first sample to 1 (a Dirac generator). The stream then passes through the effect plugins and ends in a graphical display that is supposed to show the sample values as a function of time.
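To illustrate what I mean by a Dirac generator, here is a minimal plain-C++ sketch (no JUCE; `makeDiracBlock` is a hypothetical stand-in for the buffer-filling part of a processor callback):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a generator's block callback: produces a
// buffer containing a Dirac impulse (first sample 1, everything else 0).
std::vector<float> makeDiracBlock(std::size_t numSamples)
{
    std::vector<float> buffer(numSamples, 0.0f); // silence
    if (numSamples > 0)
        buffer[0] = 1.0f;                        // the impulse
    return buffer;
}
```

In the real processor this would write into the buffer handed to it by the host instead of allocating one.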
My problems are:
- I want the stream to contain only X samples
- I want the plugins to be called only when input is actually received
I need advice on how to make this work efficiently, and on how to write good generators.
I suspect that an AudioProcessorGraph is not the best fit, because of the continuous input/output flowing between it and its processor nodes, but I don’t know what kind of object I could use instead.
What I want (in an ideal world) is this:
- my generator produces X samples
- the effect plugins process them
- the graphical plugin displays them
- the result is sent to an audible output
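The pipeline above could be driven on demand rather than by a continuous callback. Here is a plain-C++ sketch of that idea (no JUCE; `renderOnce` and the `Effect` alias are hypothetical names I made up for the illustration):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical one-shot pipeline: generate exactly X samples, run them
// through a chain of effects, then return the result for display/output.
using Effect = std::function<void(std::vector<float>&)>;

std::vector<float> renderOnce(std::size_t numSamples,
                              const std::vector<Effect>& effects)
{
    std::vector<float> buffer(numSamples, 0.0f);
    if (numSamples > 0)
        buffer[0] = 1.0f;                 // Dirac generator

    for (const auto& fx : effects)        // effects run only for this block
        fx(buffer);

    return buffer;                        // hand to display / audio output
}
```

Because `renderOnce` is only called when something actually needs to happen, the effects run exactly once per requested block instead of being fed a continuous stream, which is the behavior I am after.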
I am open to any suggestion (for achieving this, of course ^^)