Does MemoryBlock::getBitRange work correctly?

How is the getBitRange method supposed to work in MemoryBlock?

I added a single byte with value 1 to the memory block, then called getBitRange (0, 6) on it to retrieve the value held in the 6 most significant bits… I expected to get 0 for that, but it returned 1, as if it were reading the least significant bits instead.
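Here’s a minimal repro of what I’m doing (just a sketch, assuming the usual signatures for MemoryBlock::append and getBitRange):

#include <juce_core/juce_core.h>

void demo()
{
    juce::MemoryBlock block;
    const juce::uint8 byte = 1;
    block.append (&byte, 1);                     // block now holds the single byte 0x01

    const int value = block.getBitRange (0, 6);  // reads 6 bits starting at bit 0
    // value == 1: bit 0 turns out to be the LEAST significant bit of byte 0,
    // not the most significant one as I expected
}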

Yes, that’s right… Reading the bits in reverse would be a strange way to do it, don’t you think?

so… the bits are counted and read from the “right” end of the number towards the “left”?

Suppose I append a byte 0x01 and then a 0x02 to the memory block, thus forming the block { 0x01, 0x02 }.

What should calling getBitRange (0, 4) on that block return?

It’d be the lowest 4 bits of the first byte, so 1.
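In code, roughly (same assumed API as above):

void demo()
{
    juce::MemoryBlock block;
    const juce::uint8 bytes[] = { 0x01, 0x02 };
    block.append (bytes, 2);

    jassert (block.getBitRange (0, 4) == 1);  // low nibble of byte 0
    jassert (block.getBitRange (8, 4) == 2);  // low nibble of byte 1
}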

I’ve been struggling with this for two days, I think, so I wrote myself a map for the first 12 bytes, since I need the bits from them later on. Here is what I came up with:

/*
 *   [07] [06] [05] [04] [03] [02] [01] [00] || [15] [14] [13] [12] [11] [10] [09] [08] || [23] [22] [21] [20] [19] [18] [17] [16]
 *   [31] [30] [29] [28] [27] [26] [25] [24] || [39] [38] [37] [36] [35] [34] [33] [32] || [47] [46] [45] [44] [43] [42] [41] [40]
 *   [55] [54] [53] [52] [51] [50] [49] [48] || [63] [62] [61] [60] [59] [58] [57] [56] || [71] [70] [69] [68] [67] [66] [65] [64]
 *   [79] [78] [77] [76] [75] [74] [73] [72] || [87] [86] [85] [84] [83] [82] [81] [80] || [95] [94] [93] [92] [91] [90] [89] [88]
 *
 * Controller number LSB 16 - 22
 * Controller number MSB 40 - 46
 * Controller value  LSB 64 - 70
 * Controller value  MSB 88 - 94
 */
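For example, to pull those fields out with getBitRange (a sketch; I’m assuming the LSB/MSB pairs combine in the usual 14-bit MIDI way, (MSB << 7) | LSB):

const int controllerNumber = (block.getBitRange (40, 7) << 7) | block.getBitRange (16, 7);
const int controllerValue  = (block.getBitRange (88, 7) << 7) | block.getBitRange (64, 7);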

y’know, it’s a lot easier to just visualise them going from right to left when considering bits!

Yes, bits are usually numbered from right to left, while single bytes are usually visualized from left to right; that’s the origin of the confusion in my case.

If it’s of any help to you, think in these terms: the current implementation of getBitRange returns bits numbered as if the whole binary content of the block were a little-endian integer stored in memory, with the least significant bytes in the lowest locations (represented here by the first elements of the block).
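In other words, the behaviour matches something like this (just a sketch of the numbering scheme, not the actual JUCE source):

int getBitRangeSketch (const juce::uint8* data, size_t bitStart, size_t numBits)
{
    int result = 0;

    for (size_t i = 0; i < numBits; ++i)
    {
        const size_t bit = bitStart + i;
        const int bitValue = (data[bit / 8] >> (bit % 8)) & 1;  // bit 0 = LSB of byte 0
        result |= bitValue << i;
    }

    return result;
}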

This is of little help if you’re doing stream operations on individual bits, though. In that case, I’d like bit 0 to be the “leftmost” bit of the first byte (byte 0), bit 7 the rightmost of the same byte, bit 8 the leftmost of byte 1, bit 15 the rightmost of byte 1, and so on… what do you think about adding a second getBitRange-like method that follows this order (see the sketch below)?

Would others in this thread find it useful too?
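Something along these lines is what I have in mind (the name and everything about it is hypothetical, of course):

int getBitRangeMsbFirst (const juce::uint8* data, size_t bitStart, size_t numBits)
{
    int result = 0;

    for (size_t i = 0; i < numBits; ++i)
    {
        const size_t bit = bitStart + i;
        const int bitValue = (data[bit / 8] >> (7 - bit % 8)) & 1;  // bit 0 = MSB of byte 0
        result = (result << 1) | bitValue;
    }

    return result;
}

With this ordering, reading the first 6 bits of my 0x01 byte would return 0, as I originally expected.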

…but that’s just not how bits are numbered! Bit 0 of a byte is always (1 << 0), bit 2 is (1 << 2), bit 3 is (1 << 3), etc. Nobody talks about bit 1 as being (0x80 >> 1)!

The way it’s currently done is little-endian, and the only other non-silly way it could be arranged would be big-endian, but that wouldn’t involve changing the bit-order, it’d just mean that the bits would be numbered starting from the last byte in the block, and increasing towards the start of the block. That’s a valid approach too, but would be confusing because when you change the size of the block, it’d change the numbering of all the existing bits.
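To illustrate (hypothetical helper, just showing the arithmetic):

// With big-endian numbering, bit 0 would live in the LAST byte, so the byte
// holding a given bit depends on the current size of the block:
size_t byteIndexForBigEndianBit (size_t blockSizeInBytes, size_t bit)
{
    return blockSizeInBytes - 1 - bit / 8;
}
// For a 1-byte block, bit 0 is in byte 0; append a second byte and bit 0
// now refers to byte 1, so every existing bit index changes meaning.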

You beat me to it!

The approach you propose may seem intuitive to you, but it is quite simply wrong.

My intention was to use the MemoryBlock as a sort of bit array, with bits numbered left to right like the elements of a vector or array.

edit: and if the “quite simply wrong” remark was aimed at me, I’d like you to try implementing the standard base64 encoding of a MemoryBlock with that getBitRange, then let me know if my request still looks absurd to you.
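To make the point concrete: standard base64 consumes the input as 6-bit groups read MSB-first from the byte stream, which is exactly the ordering getBitRange doesn’t give you. A sketch in plain C++ (tail zero-padding and ‘=’ handling simplified):

#include <cstddef>
#include <cstdint>
#include <string>

std::string toStandardBase64 (const std::uint8_t* data, std::size_t numBytes)
{
    static const char table[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    std::string out;
    const std::size_t totalBits = numBytes * 8;

    for (std::size_t bit = 0; bit < totalBits; bit += 6)
    {
        int group = 0;

        for (std::size_t i = 0; i < 6; ++i)
        {
            const std::size_t b = bit + i;
            const int v = b < totalBits ? (data[b / 8] >> (7 - b % 8)) & 1 : 0;  // zero-pad the tail
            group = (group << 1) | v;
        }

        out += table[group];
    }

    while (out.size() % 4 != 0)  // standard '=' padding
        out += '=';

    return out;
}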

I’m not saying that it’s not in any way useful to be able to view data like that, but it’s not really the way memory is treated, and doesn’t really fit the remit of this particular class. It’s the sort of job you’d have an auxiliary class to handle.

… also, MemoryBlock already has a toBase64Encoding function. If that doesn’t do what you want, perhaps you might want to look at its implementation?

That’s where I started from for my own implementation of standard base64, and it’s where getBitRange is being used in the first place. I assumed the custom base64 encoding in JUCE used a similar approach to the standard one, but that’s how I discovered this bit-ordering difference between the two.