Adding to a buffer (am i blind?) yes i am! [aka FIXED]

hi again.
i’m going crazy over this little problem.

the first code kind of works and produces “normal” sound,
but i want to add to the bufferToFill, not replace it, which is what i try in the second code.

[code]
void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
{
    if (finished)
        return;

    finished = true;

    for (int voiceNo = virtualVoices.size(); --voiceNo >= 0;)
    {
        if (virtualVoices[voiceNo]->finished)
        {
            continue;
        }
        else
        {
            finished = false;
            virtualVoices[voiceNo]->getNextAudioBlock (bufferToFill);
        }
    }
}
[/code]

[code]
void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
{
    if (finished)
        return;

    finished = true;

    for (int voiceNo = virtualVoices.size(); --voiceNo >= 0;)
    {
        if (virtualVoices[voiceNo]->finished)
        {
            continue;
        }
        else
        {
            finished = false;

            // render this voice into a freshly allocated temp buffer...
            AudioSourceChannelInfo ci;
            ci.numSamples = bufferToFill.numSamples;
            ci.startSample = 0;
            ci.buffer = new AudioSampleBuffer (bufferToFill.buffer->getNumChannels(), ci.numSamples);

            virtualVoices[voiceNo]->getNextAudioBlock (ci);

            // ...then add it on top of the output buffer
            for (int chan = bufferToFill.buffer->getNumChannels(); --chan >= 0;)
            {
                bufferToFill.buffer->addFrom (chan, bufferToFill.startSample, *ci.buffer, chan, 0, bufferToFill.numSamples);
            }
        }
    }
}
[/code]
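
just to spell out the difference i mean between replacing and adding, roughly this (only a sketch, the helper name is made up):

[code]
// sketch: "replace" vs. "add" on a juce AudioSampleBuffer (helper name made up)
void mixIn (AudioSampleBuffer& dest, const AudioSampleBuffer& source,
            int destStart, int srcStart, int numSamples)
{
    for (int chan = dest.getNumChannels(); --chan >= 0;)
    {
        // dest.copyFrom (...) would overwrite whatever is already in dest;
        // addFrom() mixes the source on top of the existing contents instead
        dest.addFrom (chan, destStart, source, chan, srcStart, numSamples);
    }
}
[/code]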

i could swear the exact same thing worked in another place.
but this time it gives me just horrible stuttering :cry:

it drives me mad, it looks so simple. i tried messing with different offsets, etc. :frowning:

i admit that i suspect the resampling source reading from this source of having problems, but then, this source just continuously streams, so it should be no problem for the resampler. also, both codes should produce the exact same output under the conditions i test them in. (at least i want them to; the adding instead of replacing would only make a difference in another situation)

thanks for listening :slight_smile:

jm (D3CK)

That’s a very convoluted way of doing it. Why not just change your voice code so that it accumulates, rather than trying to add it all afterwards via a separate buffer?

my voice code just uses an AudioFormatReaderSource, which replaces the passed buffer afaik, so i have to do the adding myself. so at most i could
move the code inside the voice’s getNextAudioBlock method (i actually had it there before, same problem, i just moved the code to rewrite it with a fresh mind). this does not change anything. i’m in the proof-of-workflow-concept phase,
so i just want to make a mock up as quickly as possible, so i use what
juce offers (AudioFormatReaderSource, ResamplingAudioSource).
i’ll write my own buffer/file-reading audio player sooner or later, but this has nothing to do with the fact that i seem not to be able to make a temporary buffer and
add it to the output buffer :(.

D3CK

p.s.: i can not access your site through my new ISP (i moved flat), i have to go over a web proxy, what a pita :frowning: any idea?

ah! my isp forwards me to rawmaterial again :slight_smile:

any tips regarding my problem?

i’ll have to rewrite stuff soon anyways i guess.

is there an easy way to view a PositionableAudioSource in an AudioThumbnail? it seems to be built for files only? the thing is, in my application i want to treat memory-located audio and file-located audio similarly most of the time.
it would have been good if PositionableAudioSource had been the core abstraction layer for waveform-related stuff, methinks. :smiley:

Yes, I guess PositionableAudioSource might have been a good base to use there.
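
If you need a workaround in the meantime, something like this rough sketch might do - it just pulls blocks out of the source and pushes them into the thumbnail. (I’m writing this from memory and it’s untested; it assumes an AudioThumbnail that has reset() and addBlock() methods, so double-check against the version you’re using.)

[code]
// rough sketch: feed a PositionableAudioSource into an AudioThumbnail block by block
void fillThumbnailFromSource (AudioThumbnail& thumb, PositionableAudioSource& source,
                              int numChannels, double sampleRate)
{
    const int blockSize = 4096;
    AudioSampleBuffer block (numChannels, blockSize);

    thumb.reset (numChannels, sampleRate, source.getTotalLength());

    source.prepareToPlay (blockSize, sampleRate);
    source.setNextReadPosition (0);

    for (int64 pos = 0; pos < source.getTotalLength(); pos += blockSize)
    {
        AudioSourceChannelInfo info;
        info.buffer = &block;
        info.startSample = 0;
        info.numSamples = (int) jmin ((int64) blockSize, source.getTotalLength() - pos);

        source.getNextAudioBlock (info);
        thumb.addBlock (pos, block, 0, info.numSamples);
    }

    source.releaseResources();
}
[/code]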

I had the same problem with my ISP last week - apparently it’s related to their border gateway protocol (no idea what that is). Very frustrating to try to convince a support drone that you’re not mad, that there’s nothing wrong with your browser settings, and that it’s actually their fault, despite the fact that they can’t see any problem…

can you please help me with this?
i’ve spent a very long time trying to fix it (it bugs me, seriously).
and since i really want to use AudioFormatReaderSource,
i have to add to the output buffer anyway.
i’m planning to buy a commercial licence from you soon, do i get support then? :frowning:

thanks

D3CK

Not for bugs in your own code!

Really, it’ll be something simple. Very hard to tell from the code you posted, but it’ll be an offset that’s not correct, or channels that aren’t being cleared before you add to them, or something like that. And what you’re doing there is horribly inefficient, allocating new buffers each time - maybe it’s just loading it so heavily that it stutters…

Really, it’ll be something simple. Very hard to tell from the code you posted, but it’ll be an offset that’s not correct, or channels that aren’t being cleared before you add to them, or something like that.

i thought of all that :confused: (cleared the buffer, copied the buffer, copied the buffer with different offsets, tried different offsets when adding to the buffer…etc…debugged the contents of the buffer with different settings…so many things i tried…it’s driving me nuts!)
the strange thing is, my own created buffer gets filled up in a different way than the buffer that comes passed from the resampler!? i just pass the buffer to an AudioFormatReaderSource, so how would you add the output of an AudioFormatReaderSource to the output more efficiently?

And what you’re doing there is horribly inefficient, allocating new buffers each time - maybe it’s just loading it so heavily that it stutters…

the cpu load is very low (it’s simple sample playback, i could do the same procedure x10 and still have headroom), and it’s a proof-of-workflow concept anyways, so i don’t care about cpu atm. i’m testing/patch-creating a beat synth atm, and THAT thing has cpu load, and it’s already in beta/alpha! everything will be highly optimized in the end.
…but i get the feeling either my explanations are too bad, or nobody cares :/. i most probably posted all the relevant code, as the first code works (kind of) but the second doesn’t, so the error must take place in the code i have written, or there is an error in the buffer handling mechanism of one of these AudioSources (written by you).

well, i guess i have to write a wave reader myself then :confused:

thanks

D3CK

you may say that the cpu load is low, but you realise that you’re doing memory allocation in an audio callback - to shrug that off as ‘oh no that’s not the problem’ isn’t a sign that you’re trying even the obvious things! i’m not saying that it is your problem, but it is definitely the absolute first thing i would address.
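
something like this is roughly what i mean - just a sketch (class and member names made up), but the point is that the scratch buffer gets its memory in the constructor / prepareToPlay(), and the audio callback only ever reuses it:

[code]
// sketch: allocate outside the audio callback, reuse inside it
class MixingSource  : public AudioSource
{
public:
    MixingSource()  : tempBuffer (2, 512)  {}

    void prepareToPlay (int samplesPerBlockExpected, double sampleRate)
    {
        // any (re)allocation happens here, before the audio thread starts calling us
        tempBuffer.setSize (2, samplesPerBlockExpected);
    }

    void releaseResources()  {}

    void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
    {
        // no new / malloc in here - just point a channel-info at the existing buffer
        AudioSourceChannelInfo ci;
        ci.buffer = &tempBuffer;
        ci.startSample = 0;
        ci.numSamples = bufferToFill.numSamples;

        // ... render a voice into ci, then addFrom() its contents into bufferToFill ...
    }

private:
    AudioSampleBuffer tempBuffer;
};
[/code]

that way the only work left in the callback is reading and mixing - no allocation at all.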

shouldn’t this be:

			ci.numSamples = bufferToFill.numSamples;
			ci.startSample = bufferToFill.startSample;

?

and in the for loop:

			bufferToFill.buffer->addFrom(chan,bufferToFill.startSample,*ci.buffer,chan,bufferToFill.startSample,bufferToFill.numSamples);

I ran into problems assuming startSample values would be zero and got stuttering too…

ah!! many thanks… but unfortunately i tried that (and a lot of other variations) too :frowning: i’m really not as stupid as the code may suggest :wink:
this is, again, just to mock up the proof of concept, so very quick and dirty.

@haydxn
i really didn’t want to come off as if i know things better, but of course i knew that creating a buffer each time is inefficient, and it’s ok (and appreciated) to tell me so. but in the end it did not point me in a new direction to change my mind or try new things at all. telling me the same thing twice did not help either ;). again, i want to and will do optimization later, and then i’ll try to pass just one pointer to the output buffer through the whole software.
i did this buffer creation and copy thing in another section of my plug-in too, and the cpu load is 3% max for the whole plug-in (and no problems there!), so i was not worried that this inefficiency might cause stutter.

but… you finally brought a new aspect for my brain to think about…
memory allocation!!
afaik memory allocation is made in conjunction with the OS, right?
so this might indeed block the audio callback for too long, without being directly cpu hogging!
i’ll investigate this matter!
i’m not so good at low level stuff, but this is exactly why i want/need JUCE!
thanks again haydxn! you were of more help than you intended, i guess!
this is precisely the response i wanted to have in the first place (of course you all could not have known ;))

peace

new code:

[code]
void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
{
  // Bouml preserved body begin 0007550D

    if (finished)
        return;

    finished = true;

    for (int voiceNo = virtualVoices.size(); --voiceNo >= 0;)
    {
        if (virtualVoices[voiceNo]->finished)
        {
            continue;
        }
        else
        {
            finished = false;

            // reuse the pre-allocated temp buffer for this voice
            tempBuffer.numSamples = bufferToFill.numSamples;
            tempBuffer.startSample = bufferToFill.startSample;
            virtualVoices[voiceNo]->getNextAudioBlock (tempBuffer);

            // add the voice's output on top of the output buffer
            for (int chan = bufferToFill.buffer->getNumChannels(); --chan >= 0;)
            {
                bufferToFill.buffer->addFrom (chan, bufferToFill.startSample, *tempBuffer.buffer, chan, bufferToFill.startSample, bufferToFill.numSamples);
            }
        }
    }

  // Bouml preserved body end 0007550D
}
[/code]

tempBuffer is now allocated once in the constructor.
offsets as suggested (i’m pretty confident in what i’m doing there to begin with, i mean, i don’t program software by guessing ;))
but still! same problem!
again, this:

[code]
void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
{
  // Bouml preserved body begin 0007550D

    if (finished)
        return;

    finished = true;

    for (int voiceNo = virtualVoices.size(); --voiceNo >= 0;)
    {
        if (virtualVoices[voiceNo]->finished)
        {
            continue;
        }
        else
        {
            // each voice renders straight into the output buffer
            virtualVoices[voiceNo]->getNextAudioBlock (bufferToFill);
        }
    }

  // Bouml preserved body end 0007550D
}
[/code]

works (sounds “clean”), but the “voices” overwrite each other though :(, which is what i try to avoid in the end.
a “voice” is just a wrapper around an AudioFormatReaderSource (because i want to play any PositionableAudioSource through this mechanism).

there is something similar i do in the SynthesizerVoice code:

[code]
void renderNextBlock (AudioSampleBuffer& outputBuffer, int startSample, int numSamples)
{
  // Bouml preserved body begin 0006BC0D

    tempBuffer.startSample = 0;
    tempBuffer.numSamples = numSamples;

    if (this->plays && ! zone->getMuted() && zone->isPlaying())
    {
        // render the zone into the temp buffer...
        zoneReader->getNextAudioBlock (tempBuffer);

        // ...and add it to the output, scaled by velocity and zone volume
        for (int chan = outputBuffer.getNumChannels(); --chan >= 0;)
        {
            outputBuffer.addFrom (chan, startSample, *tempBuffer.buffer, chan, 0, numSamples, this->velocity * zone->getVolume (chan));
        }
    }
    else
    {
        stopNote (false);
    }

  // Bouml preserved body end 0006BC0D
}
[/code]

again, this works perfectly fine, and here i do exactly what i try to do in the other code!
the “zoneReader” is just a wrapper around a “ResamplingAudioSource”, which reads from the problematic code.
(please don’t question why there is a wrapper around the ResamplingAudioSource. i tested a pattern which proved to be bad,
but decided to leave this wrapping layer implemented, as i might need an abstraction layer at this position later anyways.)

i can handle this code easily (i just changed the buffer to be allocated once at construction time in this place too), there is no room for speculation (to my humble view at least), i’m confident in what i’m coding there, and this is why i need your help!
i need new perspectives, as i really can’t explain what’s wrong here :frowning: i have not the slightest idea what i’m doing wrong.

btw, i hear the question coming: “why all the fuss? two voice management systems?”
(in the flavour of the last responses; the question why i don’t put the stuff into my voice and add it to the buffer there to begin with, for example ;). with the temp buffer it is now even a tad more memory friendly [not that i care that much about memory], as i can share the same temp buffer among all voices!)
i just used the synthesizer class to begin with to manage keyboard mapping (i’ll optimize it in the end), but since i’m doing retriggered sample playback, i want to “declick” transitions between low-frequency-heavy material. sub kicks, for example, do horrible clicking when retriggered.
i could have used another synth, but i just wanted to try (as opposed to my “avoid optimizations at the beginning” philosophy) to make a barebones voice management for “declicking voices”.
i must admit i could have asked how to implement clickless voice stealing, but i had the feeling i would be asking for too much… may i ask now blush? and also, how would you do reverse buffer playback? i mean i could do it i guess, i’d just “iterate” through the positionable audio source backwards and reverse the buffers. but maybe one of you wants to share their stuff?
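
roughly what i have in mind (completely untested; the method names applyGainRamp() and getWritePointer() are just what i remember from the AudioSampleBuffer docs, so treat them as assumptions):

[code]
// sketch: a short fade-out to declick a stolen voice, and reversing a block in place
void declickAndSteal (AudioSampleBuffer& buffer, int startSample, int rampLength)
{
    // ramp the tail of the old voice from full gain down to silence
    for (int chan = buffer.getNumChannels(); --chan >= 0;)
        buffer.applyGainRamp (chan, startSample, rampLength, 1.0f, 0.0f);
}

void reverseBlock (AudioSampleBuffer& buffer, int startSample, int numSamples)
{
    // swap samples from both ends towards the middle, per channel
    for (int chan = buffer.getNumChannels(); --chan >= 0;)
    {
        float* data = buffer.getWritePointer (chan, startSample);

        for (int i = 0, j = numSamples - 1; i < j; ++i, --j)
        {
            const float temp = data[i];
            data[i] = data[j];
            data[j] = temp;
        }
    }
}
[/code]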

thanks!

D3CK

p.s.: maybe someone would like to move this into the audio section :wink:
i’d happily be a moderator for this board, as i may be very active here in the near future anyways :wink: i’m not a native english speaker though (i guess u already knew)

Erm… Did you say you’d tried calling bufferInfo.clearActiveBufferRegion() at the start of your method? The buffer is full of garbage when you get it, so you need to clear it before layering all your voices on it.

A more efficient way would be to render your first voice directly into the buffer instead of clearing it, then add the subsequent ones to it.
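
Roughly like this - off the top of my head and untested, so just a sketch. It assumes tempInfo is a member AudioSourceChannelInfo whose buffer you’ve already allocated up-front (like your tempBuffer), and I’ve left out your ‘finished’ flag for clarity:

[code]
// sketch: first voice renders straight into the output, subsequent voices get added on top
void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
{
    bool firstVoice = true;

    for (int voiceNo = virtualVoices.size(); --voiceNo >= 0;)
    {
        if (virtualVoices[voiceNo]->finished)
            continue;

        if (firstVoice)
        {
            // no clearing needed - this voice overwrites whatever garbage is in the buffer
            virtualVoices[voiceNo]->getNextAudioBlock (bufferToFill);
            firstVoice = false;
        }
        else
        {
            // render into the pre-allocated temp buffer, then mix it in
            tempInfo.startSample = 0;
            tempInfo.numSamples = bufferToFill.numSamples;
            virtualVoices[voiceNo]->getNextAudioBlock (tempInfo);

            for (int chan = bufferToFill.buffer->getNumChannels(); --chan >= 0;)
                bufferToFill.buffer->addFrom (chan, bufferToFill.startSample,
                                              *tempInfo.buffer, chan, 0,
                                              bufferToFill.numSamples);
        }
    }

    if (firstVoice)
        bufferToFill.clearActiveBufferRegion();  // nothing played at all, so output silence
}
[/code]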

of course! thanks! i had to clear the section of the buffer which is “declared” in the AudioSourceChannelInfo!
when i cleared the whole buffer, i heard gaps instead of the garbage! this is because the resampler gives me a buffer with half garbage and half content it needs! man, i really did not think of this! i apologize! this was really kind of dumb blush now that i know the solution!
and thanks again! i learned something for the future i hope!
ah and thanks for the optimization tip too! you made my day!
now if only i could fix the vst hosting problem, but i have not chewed on it as long as this one yet, so i’m full of hope on that one!
thanks again and again!!

peace!

D3CK (jan)