Ask for RGB, get back ARGB

I ask JUCE for an image of type RGB, and on the Mac I get back ARGB! Quite an unexpected result!!

Here’s my code

m_work = Image (Image::RGB, workBounds.getWidth (), workBounds.getHeight(), false);

And here’s the JUCE code it eventually calls:

juce_mac_CoreGraphicsContext.mm

ImagePixelData::Ptr NativeImageType::create (Image::PixelFormat format, int width, int height, bool clearImage) const
{
    return new CoreGraphicsImage (format == Image::RGB ? Image::ARGB : format, width, height, clearImage);
}
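To make the effect concrete, here is a minimal sketch of the behaviour (assuming the default NativeImageType, as in the constructor call above):

// Minimal repro sketch: request packed RGB, then inspect what actually came back.
Image img (Image::RGB, 64, 64, false);

// On macOS this assertion fires, because NativeImageType::create() substitutes ARGB.
jassert (img.getFormat() == Image::RGB);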

Yep, I’m afraid that’s just the way that OSX rolls. Packed RGB doesn’t work.

This shouldn’t have anything to do with OSX. If JUCE is cross platform then it needs to be possible to create an Image of type Image::RGB. When you say “packed RGB doesn’t work” I think what you really mean is that you can’t make calls to CoreGraphics image operations using packed RGB. But there’s no conceptual reason why JUCE cannot simply allocate a piece of memory in the style requested by the user.

Am I missing something? Can’t JUCE just create a 32 bits per pixel CoreGraphics image? And possibly fill the alpha channel with 255? Or perhaps just allocate a piece of memory and return it?

It's not practical to develop cross-platform code that manipulates juce::Image objects if Image::RGB isn't supported on the Mac.

Is this what ImageType is for? Can I just pass SoftwareImageType() in the constructor?

Yes, of course. It won’t be a CoreGraphics image, but it’ll be RGB.
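Concretely, that looks something like this (a sketch reusing the m_work line from the first post; the extra ImageType argument selects the backing store):

// Requesting a software-backed image keeps the packed RGB format on every platform.
m_work = Image (Image::RGB,
                workBounds.getWidth(), workBounds.getHeight(),
                false,
                SoftwareImageType());

jassert (m_work.getFormat() == Image::RGB);   // holds on macOS too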


Okay, I got my app to finally show its window on Mac OS, using SoftwareImageType() in the Image constructor call. It sets the software renderer using setCurrentRenderingEngine() and a custom LookAndFeel. Inside paint() it uses dynamic_cast to get at the low level context and retrieve the image being drawn on.
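For anyone following along, a rough outline of that setup (the component name and the engine index 0 are assumptions; the exact way to reach the target Image varies between JUCE versions):

struct SoftwareRenderedComponent : public Component
{
    void parentHierarchyChanged() override
    {
        // Ask the native window to use the software rasteriser instead of CoreGraphics.
        // Engine index 0 is assumed to be the software engine here.
        if (ComponentPeer* peer = getPeer())
            peer->setCurrentRenderingEngine (0);
    }

    void paint (Graphics& g) override
    {
        // Check which low-level context we actually received.
        if (dynamic_cast<LowLevelGraphicsSoftwareRenderer*> (&g.getInternalContext()) != nullptr)
        {
            // Software renderer: the image it draws into is a plain software image,
            // so pixel formats behave the same as on the other platforms.
        }
    }
};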

Can you please enlighten me on how images work on the Mac? You're saying that CoreGraphics (i.e. Quartz) only supports premultiplied ARGB? So in order to get to the screen you simply must have premultiplied ARGB? Will the Image in the low-level graphics context passed to paint() ALWAYS be of type ARGB on Mac OS / iOS? And what are the alpha values set to - are they always 255?

Also, I think the behavior of silently changing RGB to ARGB in the CoreGraphics image type is non-intuitive. It should jassert() instead.

I can’t honestly remember exactly what the underlying reason for this was, but I remember it caused me some headaches.

Well, it's just expressing a preference for the format. If you write code that accesses the Image data via the Image::BitmapData class, and uses PixelARGB and PixelRGB, then you shouldn't have any problems, no matter how the image is stored internally.
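For illustration, format-agnostic access along those lines might look like this (dimImage is just a made-up helper; getPixelColour()/setPixelColour() consult BitmapData::pixelFormat internally, so the same code works for RGB, ARGB and SingleChannel images):

void dimImage (Image& img)
{
    Image::BitmapData data (img, Image::BitmapData::readWrite);

    // Works regardless of how the pixels are actually laid out in memory.
    for (int y = 0; y < data.height; ++y)
        for (int x = 0; x < data.width; ++x)
            data.setPixelColour (x, y, data.getPixelColour (x, y).darker());
}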

But it is a problem, because the image type that the Image returns in Image::getType() might be used to make decisions.

For example, look at my radial convolution code. It takes an Image as a parameter and returns the blurred Image as the result. The treatment of edge pixels is different depending on whether or not the image has an associated alpha channel - a decision which is made by looking at the image type. Passing an ARGB image to the convolution code produces a different result than when passing an RGB image.
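To make the point concrete, the decision in question boils down to something like this (a simplified sketch, not the actual convolution code, which is quoted further down):

// Whether edge pixels get extended or faded out depends on the presence of an alpha
// channel - so an image that was requested as RGB but silently comes back as ARGB
// takes the wrong branch.
bool shouldTreatEdgesAsTransparent (const Image& sourceImage)
{
    return sourceImage.hasAlphaChannel();   // true for ARGB and SingleChannel, false for RGB
}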

Or even more troubling: I blur single channel images all the time, for doing various lookup table effects. But it is not possible to allocate a CoreGraphics single channel image (it generates an error at runtime). Should my convolution code always create a software image type? Then you can't use it to draw blurred RGB images (or blurred ARGB images on OS X / iOS).

This is a problem

It’s only a problem if you make assumptions about the binary format of the image data, and if you ignore the Image::BitmapData::pixelFormat type, and if you don’t use the classes that I’ve provided to manipulate the pixels. If you choose to ignore all that stuff, then yes, it’s a problem: your problem!

I’m not ignoring any of that stuff, and I’m using Image::BitmapData. You didn’t answer the question. Given the function:

Image createConvolvedImage (Image sourceImage)

if the following code executes on the Mac:

Image sourceImage (Image::RGB, ....);
...
Image resultImage = createConvolvedImage (sourceImage);

Then createConvolvedImage will see an image of type ARGB and produce the wrong result.

Here’s the other problem. Given the following code:

Image mask = Image (Image::SingleChannel, ...);
Image result = createConvolvedImage (mask);

In the implementation of createConvolvedImage(), which ImageType should it use? It can’t use the core graphics / native image type on Mac OS / iOS because it doesn’t support single channel. But if we always use the software image type then the result is less optimal when getting called with ARGB / RGB images. Should there be a special case for single channel images, just because of a platform peculiarity? I don’t think so.
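One possible (unsatisfying) workaround is to pick the backing ImageType per call, based on the requested format - a sketch, with createCompatibleImage being a hypothetical helper:

static Image createCompatibleImage (Image::PixelFormat format, int width, int height)
{
    // CoreGraphics can't back a single-channel image, so fall back to a software image there.
    if (format == Image::SingleChannel)
        return Image (format, width, height, true, SoftwareImageType());

    // Use the platform-native type otherwise - accepting that RGB may still come back as ARGB on macOS.
    return Image (format, width, height, true, NativeImageType());
}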

FYI, here's a code extract of the function in question:

Image RadialImageConvolutionKernel::createConvolvedImageFull (Image const& sourceImage) const
{
  Image result (sourceImage.getFormat(), dw, dh, false);
...
  const Image::BitmapData srcData (sourceImage, Image::BitmapData::readOnly);
...
  const Image::BitmapData destData (result, 0, 0, dw, dh, Image::BitmapData::readWrite);
...
  switch (srcData.pixelFormat)
...

In what way is this making assumptions about the binary format, or ignoring the Image::BitmapData::pixelFormat?

LOL Jules, you broke your own rules in juce_ImageConvolutionKernel.cpp!

[Attachment: screenshot of the offending code in juce_ImageConvolutionKernel.cpp]

Assuming the number of colour channels based on pixelStride? Tsk tsk!
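Roughly, the pattern being objected to looks like this (a paraphrase from memory, not a verbatim quote of the JUCE source):

int guessNumChannels (const Image& image)
{
    const Image::BitmapData data (image, Image::BitmapData::readOnly);
    return data.pixelStride;   // assumes 1 byte/pixel == 1 channel, 4 bytes/pixel == 4 channels...
}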

This breaks when passing a single channel software image, like the ones that I generate when extracting an alpha channel as a single channel Image:

/** Presents an Image channel as a separate Image.

    This creates a new @ref Image which references one channel in the
    specified @ref Image.

    @ingroup vf_gui
*/
class ChannelImageType : vf::Uncopyable
{
public:
  /** Create an Image that references a channel in the source image.

      The image type will be Image::SingleChannel, or Image::RGB if
      `channelNumber` is -1.

      @param sourceImage The image to retrieve a channel from.

      @param channelNumber 0 based channel number, -1 for all RGB channels.

      @return The resulting Image.
  */
  static Image fromImage (Image sourceImage, int channelNumber);
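For reference, a typical call site looks something like this (the channel index for alpha is an assumption about the class's ordering):

// Hypothetical usage: pull the alpha channel out of an ARGB image as a SingleChannel
// Image. Because the result references the original pixels, its pixelStride is
// presumably still 4 - which is exactly what trips up the stride-based guess above.
Image argb (Image::ARGB, 256, 256, true);
Image alphaOnly = ChannelImageType::fromImage (argb, 3);   // channel index 3 assumed to be alpha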

I don’t understand why it shouldn’t be possible to create CoreGraphics RGB images? Just make the CGImage with the following flags:

kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little

Not sure about the order of A, R, G, B though. The pixel stride would be 4 bytes - which is a good thing IMHO. I think the pixel stride should always be 4 bytes by now, on all platforms. It also makes converting between ARGB and RGB trivial: premultiplied ARGB to RGB needs no conversion at all, and going the other way would maybe just consist of OR-ing each 4-byte pixel with 0xff000000 or 0x000000ff, depending on byte order, etc.
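Spelled out in raw CoreGraphics, the suggestion amounts to something like this (untested - whether CG actually accepts this combination for a drawable bitmap context is exactly the point in dispute):

#include <CoreGraphics/CoreGraphics.h>

CGContextRef createPackedNoAlphaContext (void* pixelData, size_t width, size_t height)
{
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();

    // 32 bits per pixel, the alpha byte present in memory but ignored ("skip first").
    CGContextRef context = CGBitmapContextCreate (pixelData, width, height,
                                                  8,            // bits per component
                                                  width * 4,    // bytes per row
                                                  colourSpace,
                                                  kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
    CGColorSpaceRelease (colourSpace);
    return context;
}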

[quote="zamrate"]I don't understand why it shouldn't be possible to create CoreGraphics RGB images? Just make the CGImage with the following flags:

kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little[/quote]

Can’t remember the details now, but I’m pretty sure it wasn’t possible. I had to go to a lot of trouble to work around this, so I would certainly have tried all the obvious stuff first. (Of course it could have been a PPC or 10.4 specific problem that has since gone away).