In JUCE 6, will it be possible to choose int instead of size_t in the dsp classes?

Go on … you know you want to :slight_smile:


FR for votes?


I thought we kind of “voted” on this before in another thread. The C++ gurus say to use regular int wherever possible. So, add my vote for going to regular int.

Last time I looked at the FR voting the number one topic hadn’t been implemented after a year ;-). So I was resorting to nagging :slight_smile:


Personally, I tend to go backwards and forwards on this one, and I honestly don’t know what I’d choose right now if I was starting again from scratch.

In writing some new buffer classes for SOUL, I ended up choosing unsigned for most of the size/index values, because it pushed a lot of the sanity-checking responsibility out of the classes themselves and onto the caller, which seemed to make more sense.

And I also recently hit some signed-integer-overflow warnings from the UB checker, which also worried me a bit - essentially, using signed ints leaves you open to UB, whereas unsigned doesn’t… Whether or not that matters in practice is moot.

I’m also hazy about performance: I’m sure I heard rumours that signed was faster in some cases on intel, but the opposite was true on ARM… but again, I’ve lost track of the details there.


Morning Jules!

Well, I’m arguing for consistency with the rest of JUCE :slight_smile: So I don’t have to constantly cast back and forth between AudioBuffers and AudioBlocks when interfacing with code that uses one or the other.

LLVM blog has some arguments that signed is faster:

And as far as I know signed overflow UB is only a problem if you let your signed integer overflow? :slight_smile:

Personally, it’s far more likely that I’ll shoot myself in the foot with defined but boring behaviour such as:

uint32 samples;

while (--samples >= 0) ...
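To make that footgun concrete, here’s a minimal sketch (illustrative code, not from JUCE): with an unsigned counter, `--samples >= 0` can never be false, so a reverse loop has to test first and decrement afterwards.

```cpp
#include <cstdint>

// Counts how many times a reverse loop over `numSamples` samples runs.
// Note: `while (--samples >= 0)` on an unsigned counter never terminates,
// because the condition is always true (and 0u - 1 wraps to a huge value).
// The safe idiom tests before decrementing:
int countReverseIterations (std::uint32_t numSamples)
{
    int iterations = 0;

    for (std::uint32_t i = numSamples; i > 0; --i)  // test, then step
        ++iterations;

    return iterations;
}
```

Called with 4 samples this runs 4 times, and with 0 samples it (correctly) runs zero times instead of looping forever.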

Does using unsigned really move the sanity checking onto the caller any more than it was before? The mistakes are surely just all in one direction…


As for the sanity-checking responsibility, I think the two are exactly equal. Both can overflow, and sanity checking should be done with both types.

The real point is the mixing of signed and unsigned types, which creates extra problems that simply wouldn’t exist if we worked exclusively with signed types.


No, not really - say you have a function that takes an index and returns an element of some kind.

If the index is unsigned, then there’s only one error case: the index exceeding the upper bound.

However, if the index is signed, there are two error cases: it could be below zero, or it could exceed the bounds, so the user has to know whether these two cases are equivalent or not. It just adds an extra detail to the function’s contract.
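A hypothetical accessor (illustrative names, not JUCE API) shows the two contracts side by side:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Unsigned index: a single precondition in the contract.
float elementAtUnsigned (const std::vector<float>& v, std::size_t index)
{
    assert (index < v.size());  // the only error case
    return v[index];
}

// Signed index: two error cases the caller has to reason about.
float elementAtSigned (const std::vector<float>& v, int index)
{
    assert (index >= 0);                                   // the extra case
    assert (static_cast<std::size_t> (index) < v.size());  // upper bound
    return v[static_cast<std::size_t> (index)];
}
```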

I’ll try to find it at some point, but a Chandler Carruth talk from a few years ago showed that it’s precisely the UB of signed integer overflow that makes it quicker, at least on Intel.


…actually, just to make what I said a bit clearer:

The thing is that if you make a parameter signed rather than unsigned, you’re effectively telling the user that passing a negative value is possible, but in general doing so will be UB, so it’s a confusing message.

Dave, I think that’s what’s covered in the LLVM blog post I linked.

This sounds more like a contract violation than an argument for signed arguments. I’d expect this to be caught by an assertion (or a C++2b “Contract Checking Statement”) rather than relying on the type. It’s basically the same case as the index being out of bounds?

Oh, and one other thing: say I’m writing a container and I let it take an int param. Internally, I may need to cast it to unsigned for some reason.

That creates a dilemma for a library author: do you add a check or assertion to make sure it’s not negative? Or do you just cast it, so that when the UB checker finds an out-of-range value it looks like the fault of the library code rather than the caller? If you make it the caller’s responsibility to do the cast-to-unsigned, then they’ll find those UB problems in their own code, and not blame the library.
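Here’s a sketch of the two options (hypothetical container, not JUCE code): either the library asserts before the cast, or an unsigned parameter pushes the conversion to the caller, so any negative-value surprise is flagged in their code.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct FloatStore
{
    std::vector<float> data;

    // Option 1: signed param. The library must check before casting,
    // otherwise a negative index silently becomes a huge std::size_t
    // and any out-of-range report blames the library, not the caller.
    float getChecked (int index) const
    {
        assert (index >= 0);  // the library's extra responsibility
        return data[static_cast<std::size_t> (index)];
    }

    // Option 2: unsigned param. The caller performs the conversion,
    // so a sanitizer flags a negative-to-unsigned conversion at the
    // call site, in the caller's own code.
    float get (std::size_t index) const
    {
        assert (index < data.size());
        return data[index];
    }
};
```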

The problems I’ve come across are when you actually are interested in negative times - like subtracting the size of a buffer from some timepoint, or even subtracting two buffer sizes. If that can end up as a very large value, it makes no sense mathematically.
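That wrap-around is easy to demonstrate (a sketch, not JUCE code): the same subtraction gives the mathematically expected negative value with signed types, but a huge positive one with unsigned.

```cpp
#include <cassert>
#include <cstdint>

// Difference between a timepoint and a buffer size, signed:
// keeps its mathematical meaning when the result is negative.
std::int64_t signedDiff (std::int64_t timepoint, std::int64_t bufferSize)
{
    return timepoint - bufferSize;  // e.g. 100 - 512 == -412
}

// The same subtraction on 32-bit unsigned values wraps modulo 2^32,
// producing a very large value with no mathematical meaning here.
std::uint32_t unsignedDiff (std::uint32_t timepoint, std::uint32_t bufferSize)
{
    return timepoint - bufferSize;  // e.g. 100 - 512 == 2^32 - 412
}
```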



Yeah, absolutely. I’m not necessarily really arguing for unsigned here, just saying that a signed param makes the contracts less clear-cut than unsigned.

Right. I think I’m mostly thinking about indexes and sizes rather than more scalar values. Any kind of time should certainly be signed.

Indexes and time can be the same thing in DSP code.

I get the argument for stronger types as arguments, but to be honest it’s a bit weak. If you have to deal with upper out-of-bounds problems, you might as well deal with negative out-of-bounds the same way. Having a narrow contract and easy ways of reporting violations is safer than someone with low warning levels accidentally passing a negative number to a method and having it silently converted to something very large and out of bounds. That’s far more difficult to debug.
The user is going to have to do this check at the call site anyway, or they’ll get these underflow wrap-around problems when casting to a size_t just to pass it to the methods.

And that’s the only argument in favour of unsigned types I can think of (and that has been presented here) at the moment. Is there anything else, apart from larger maximum values?


I think the case is settled here :wink: Jury won’t be long deliberating…

Yeah… I think I’ve drifted back to unsigned a bit because I’ve been writing code that could run on platforms where size_t is 32 or 64 bits, so I’ve preferred explicitly 32- or 64-bit sizes to keep them all the same. But then that means you worry about overflow more with the 32-bit ones, as a signed int runs out pretty quickly.

For the actual original question, though (i.e. should JUCE use signed): yes, for consistency it probably should.