In the docs, PixelFormat::SingleChannel says "each pixel is a 1-byte alpha channel value", but if I draw the image I get a black-and-white image, as I'd expect. So a single channel can hold either alpha values or greyscale data, depending on how the image is used.

Image::convertedToFormat will always just grab the alpha channel when converting to SingleChannel. Could we get another function (or a parameter) that converts to greyscale instead?
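For context, a minimal sketch of what a luminance-based conversion could look like, written here over a plain RGBA byte buffer rather than against the JUCE API. The function name and the use of the Rec. 601 luma weights are my own illustrative choices, not anything that exists in JUCE:

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Convert interleaved 8-bit RGBA pixels to single-channel greyscale
// using the Rec. 601 luma weights (0.299 R + 0.587 G + 0.114 B),
// instead of just copying the alpha channel.
std::vector<uint8_t> rgbaToGreyscale (const uint8_t* rgba, size_t numPixels)
{
    std::vector<uint8_t> grey (numPixels);

    for (size_t i = 0; i < numPixels; ++i)
    {
        const uint8_t r = rgba[i * 4 + 0];
        const uint8_t g = rgba[i * 4 + 1];
        const uint8_t b = rgba[i * 4 + 2];

        // Integer approximation of the Rec. 601 weights
        grey[i] = static_cast<uint8_t> ((299 * r + 587 * g + 114 * b) / 1000);
    }

    return grey;
}
```

A convertedToFormat overload (or an extra parameter) could do essentially this per pixel when the target is SingleChannel.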


That sounds like a sensible addition. I’ll put that on the backlog.


Sorry to hijack this thread, but I think this is a natural follow-up to the topic:

I think the Image class with the shared PixelData is perfect for UI work, which covers the usual JUCE use cases. However, when thinking of video and image processing, a few reasons speak against using it:

  • We should have more colour encodings: not only 8, 24 and 32 bits per pixel, but also 64 bits (4 unsigned short ints per pixel, a.k.a. “Billions of Colours”).
  • I don’t think we need all permutations. Similar to how AudioFormatReader converts into one generic format (float buffers), having one internal image format would suffice. But the current 32-bit version cannot represent all formats.
  • Specific to video: its ephemeral nature makes the shared-pixel-data approach not a natural fit, although it is possible to work with (I implemented a working version of that).
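To make the templated option concrete, here is a rough sketch of how one pixel type parameterised on the per-channel sample type could cover the 32-bit, 64-bit and float encodings with a single code path. All names here are hypothetical, not proposals for actual JUCE classes:

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// A pixel templated on the per-channel sample type:
// uint8_t gives the familiar 32-bit RGBA, uint16_t the 64-bit
// "Billions of Colours" variant, float a 128-bit working format.
template <typename SampleType>
struct RGBAPixel
{
    SampleType r, g, b, a;
};

template <typename SampleType>
class BasicImage
{
public:
    BasicImage (int w, int h)
        : width (w), height (h),
          pixels (static_cast<size_t> (w) * static_cast<size_t> (h)) {}

    RGBAPixel<SampleType>& at (int x, int y)
    {
        return pixels[static_cast<size_t> (y) * width + x];
    }

    int width, height;
    std::vector<RGBAPixel<SampleType>> pixels;
};

using Image32  = BasicImage<uint8_t>;   // 32 bits per pixel
using Image64  = BasicImage<uint16_t>;  // 64 bits per pixel
using Image128 = BasicImage<float>;     // 128 bits per pixel
```

The graphics backend would then only need one fast blit path per sample type, rather than one per format permutation.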

So now my questions:

  • Should we start a parallel templated image version and add a fast painting routine to the graphics backend (Graphics and LowLevelGraphicsBackend)?
  • Or should we add the missing formats to the Image class?


@jules and @t0m, when I start designing the video/image processing engine, it would be invaluable to know your opinions: should we expand Image::PixelFormat with an RGBA64 format (4 unsigned short ints), or rather keep it separate from the Image class? Or maybe even use 4 floats (128 bits)? That feels like overkill, but the maths will be simpler to write…
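One detail worth pinning down whichever way this goes: widening 8-bit samples to 16-bit should scale rather than shift, so that full-scale white stays full-scale. Multiplying by 257 does this exactly, since 255 × 257 == 65535. A small sketch (helper names are mine):

```cpp
#include <cstdint>

// Exact 8-bit -> 16-bit widening: x * 257 == x * 65535 / 255,
// so 0 maps to 0 and 255 maps to 65535.
// (Equivalent to replicating the byte: (x << 8) | x.)
inline uint16_t widen8to16 (uint8_t x)
{
    return static_cast<uint16_t> (x * 257);
}

// 16-bit -> normalised float in [0, 1], which keeps the
// processing maths simple if a float format is chosen.
inline float toFloat01 (uint16_t x)
{
    return static_cast<float> (x) / 65535.0f;
}
```

A plain left-shift by 8 would map 255 to 65280 instead of 65535, slightly darkening every converted image.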

I am aware that this opens a can of worms, but since converting many image formats to the current Image already loses data, we should at least consider adding a fully dynamic version that we can use for processing.

Wouldn’t it be great if one day people could trade their FinalCut and AVID for a JUCE-based NLE? (I am aware of my megalomaniac tendencies :wink: )


Or Screenflow! Screenflow is cool, but it’s missing some key features on the audio end of things that are “trivial” to add in JUCE.