VST2 vs VST3 synth

I came across a difference between VST2 and VST3 synth behavior. When I use the VST2 version everything is OK, but when I use the VST3 version the buffer is not cleared between processing calls, although in the VST2 version it seems to be. (The result is that the start of the first processing call sounds fine, but it quickly turns into garbage, as you can imagine...) The solution is simple - clear the buffer before calling the synth. Sorry for being naive :) but shouldn't both VST versions behave the same way?
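
In code, that fix is just a clear at the top of processBlock - something along these lines (a sketch only; synth stands for whatever Synthesiser member your plugin owns):

    // Inside your AudioProcessor subclass
    void processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
    {
        // In the VST3 build the buffer can still hold the previous block,
        // so wipe it before the synth adds its voices into it.
        buffer.clear();

        synth.renderNextBlock (buffer, midiMessages, 0, buffer.getNumSamples());
    }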

See the description:


    /** Renders the next block.
        When this method is called, the buffer contains a number of channels which is
        at least as great as the maximum number of input and output channels that
        this filter is using. It will be filled with the filter's input data and
        should be replaced with the filter's output.
        So for example if your filter has 2 input channels and 4 output channels, then
        the buffer will contain 4 channels, the first two being filled with the
        input data. Your filter should read these, do its processing, and replace
        the contents of all 4 channels with its output.
        Or if your filter has 5 inputs and 2 outputs, the buffer will have 5 channels,
        all filled with data, and your filter should overwrite the first 2 of these
        with its output. But be VERY careful not to write anything to the last 3
        channels, as these might be mapped to memory that the host assumes is read-only!
        Note that if you have more outputs than inputs, then only those channels that
        correspond to an input channel are guaranteed to contain sensible data - e.g.
        in the case of 2 inputs and 4 outputs, the first two channels contain the input,
        but the last two channels may contain garbage, so you should be careful not to
        let this pass through without being overwritten or cleared.

        Also note that the buffer may have more channels than are strictly necessary,
        but you should only read/write from the ones that your filter is supposed to
        be using.
        The number of samples in these buffers is NOT guaranteed to be the same for every
        callback, and may be more or less than the estimated value given to prepareToPlay().
        Your code must be able to cope with variable-sized blocks, or you're going to get
        clicks and crashes!
        If the filter is receiving a midi input, then the midiMessages array will be filled
        with the midi messages for this block. Each message's timestamp will indicate the
        message's time, as a number of samples from the start of the block.
        Any messages left in the midi buffer when this method has finished are assumed to
        be the filter's midi output. This means that your filter should be careful to
        clear any incoming messages from the array if it doesn't want them to be passed-on.
        Be very careful about what you do in this callback - it's going to be called by
        the audio thread, so any kind of interaction with the UI is absolutely
        out of the question. If you change a parameter in here and need to tell your UI to
        update itself, the best way is probably to inherit from a ChangeBroadcaster, let
        the UI components register as listeners, and then call sendChangeMessage() inside the
        processBlock() method to send out an asynchronous message. You could also use
        the AsyncUpdater class in a similar way.
    */
    virtual void processBlock (AudioSampleBuffer& buffer,
                               MidiBuffer& midiMessages) = 0;
 

Sorry and thanks! Since the instrument has no input, this description is pretty clear.

Jules, apologies for another query.

 

Could you possibly clarify how the buffer should be filled when the plugin is set as 'Is A Synth' and has no input channels?

 

I.e. using synth.renderNextBlock as per the demo synth code.

Just overwrite all the channels. It does explain this in the help for processBlock, doesn't it?
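
For a plugin with no input channels that boils down to something like this (again just a sketch; synth is assumed to be your Synthesiser member, as in the demo code):

    void processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
    {
        // With no inputs, every channel in the buffer may contain garbage,
        // so overwrite them all by clearing first.
        buffer.clear();

        // The Synthesiser adds its voices onto the existing buffer contents,
        // which is why the clear above matters.
        synth.renderNextBlock (buffer, midiMessages, 0, buffer.getNumSamples());

        // Anything left in midiMessages becomes the plugin's MIDI output,
        // so clear it if you don't want the incoming notes passed on.
        midiMessages.clear();
    }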

Hi Jules

 

Thanks for the reply. 

Is it at all likely that an incorrect use of the render function could crash the host when loading the AU? (Before attempting to play any sounds.)

 

I just cannot for the life of me work out why the AU instrument seems to crash Logic 8 every time.

 

It only appears to happen when I check the 'Is A Synth' parameter in the Introjucer.

 

Again, apologies for all the newbie questions!

I get the sense that you're asking these questions without actually having tried using a debugger...

Hi Jules

 

Thanks for your answers, and apologies for all the questions.

 

It turned out to have nothing to do with the code. I was having issues getting the debugger to work in various hosts, as those were just crashing each time as well.

 

It turned out to be an issue in Xcode. 

 

The Introjucer settings had both the base and target SDK set to 10.8, but despite this Xcode continually defaulted the overall project deployment target to 10.9. I had missed that it was constantly doing this, hence it crashed the host each time and wouldn't let me debug.

 

I am currently running 10.8.4 on my Mac.

 

I attempted to change this in the Xcode settings, but without fail it defaulted back to 10.9 and wouldn't let me set it.

 

We have two machines here, the second being on 10.9, so I moved my project across, fixed up the Core Audio files etc. as I had done previously, and voilà, everything is running as it should. The JUCE demo plugin set to 'Is A Synth' with the processBlock function modified works as it should and loads in the host. My own project also works as it should.

 

This may just have been an odd occurrence on my machine and a total newbie c**k-up, but I thought I would mention it in case other people find Xcode 5 doing this to them when choosing a lower target in the Introjucer.

 

Thanks again for all your help (I can finally get those wavetables going again).