Direct2D and juce::Image

After reading another similar thread and the blog entry about Direct2D, I’m trying to understand when I can safely manipulate juce::Image. Let me ask a few questions in this code to see if I’ve got it:

   struct Whatever: public juce::Component
   {
      juce::Image prerender;

      Whatever()
      {
         // 1. Would this be ok? do I need to specify SoftwareImageType for this?
         prerender = juce::ImageCache::getFromFile(juce::File("C:/wav.png"));

         // 2. Are there performance problems with this?
         for (int i=0; i!=10000; ++i)
         {
            prerender.setPixelAt(4, 4, prerender.getPixelAt(0, 0));
         }
      }

      void paint(juce::Graphics& g) override
      {
         juce::Image image1 (juce::Image::RGB, 40, 40, true);

         // 3. This would be ok, image would be drawn once the og is destroyed
         {
            juce::Graphics og(image1);
            for (int i=0; i!=10000; ++i)
            {
               og.setColour(juce::Colours::red);
               og.drawRect(0.0f, 0.0f, 30.0f, 30.0f);
            }
         }

         // 4. This would be ok, since it doesn't use getPixelAt, so it isn't forcing syncs. However, it isn't recommended, since CPU-GPU bandwidth is limited
         for (int i=0; i!=10000; ++i)
         {
            image1.setPixelAt(4, 4, juce::Colours::red);
         }

         // 5. Bad performance
         for (int i=0; i!=10000; ++i)
         {
            image1.setPixelAt(4, 4, image1.getPixelAt(0, 0));
         }

         g.drawImageAt(image1, 0, 0);
         g.drawImageAt(prerender, 0, 0);

      }
   };

Generally speaking, you should use a software image if you want to directly manipulate pixel data.

In case it’s not clear from the other thread, Images are now stored in GPU memory and are painted by the GPU. The CPU cannot directly access GPU memory, so if you want to edit the pixel data from your code the image data needs to be mapped from the GPU to CPU memory and then copied back again to the GPU.

Mapping the image data is handled by the JUCE Image::BitmapData class. You can see this in Image::setPixelAt:

void Image::setPixelAt (int x, int y, Colour colour)
{
    if (isPositiveAndBelow (x, getWidth()) && isPositiveAndBelow (y, getHeight()))
    {
        const BitmapData destData (*this, x, y, 1, 1, BitmapData::writeOnly);
        destData.setPixelColour (0, 0, colour);
    }
}

Note that it instantiates and destroys a BitmapData object every time you call it, so in your example code you’d be doing so 10,000 times.

You’d be better off using a software image and painting that, or creating the BitmapData object yourself and editing the pixel data using the BitmapData object. Then you’re only mapping the image once instead of once per pixel.

Matt

struct Whatever : public juce::Component
{
    juce::Image prerender;

    Whatever()
    {
        // 1. Would this be ok? do I need to specify SoftwareImageType for this?
        //
        // This will create a Direct2D GPU-stored bitmap; no need to specify SoftwareImageType
        //
        prerender = juce::ImageCache::getFromFile(juce::File("C:/temp/out.png"));

        // 2. Are there performance problems with this?
        //
        // Yes this will be quite slow
        for (int i = 0; i != 10000; ++i)
        {
            prerender.setPixelAt(4, 4, prerender.getPixelAt(0, 0));
        }
    }

    void paint(juce::Graphics& g) override
    {
        juce::Image image1(juce::Image::RGB, 40, 40, true);

        // 3. This would be ok, image would be drawn once the og is destroyed
        //
        // Yes; in this case the image data stays in the GPU and the image is
        // painted by the GPU
        {
            juce::Graphics og(image1);
            for (int i = 0; i != 10000; ++i)
            {
                og.setColour(juce::Colours::red);
                og.drawRect(0.0f, 0.0f, 30.0f, 30.0f);
            }
        }

        // 4. This would be ok, since it doesn't use getPixelAt, so it isn't forcing syncs. However, it isn't recommended, since CPU-GPU bandwidth is limited
        //
        // Actually this will also be slow; it will cause 10000 syncs
        for (int i = 0; i != 10000; ++i)
        {
            image1.setPixelAt(4, 4, juce::Colours::red);
        }

        // 5. Bad performance
        // 
        // Agreed
        //
        for (int i = 0; i != 10000; ++i)
        {
            image1.setPixelAt(4, 4, image1.getPixelAt(0, 0));
        }

        //
        // Use BitmapData instead; only sync once
        //
        {
            juce::Image::BitmapData bitmapData{ image1, juce::Image::BitmapData::writeOnly };
            for (int i = 0; i != 10000; ++i)
            {
                bitmapData.setPixelColour(4, 4, juce::Colours::red);
            }
        }

        //
        // Or, create a software image
        //
        {
            juce::Image softwareImage{ juce::Image::PixelFormat::RGB, 40, 40, true, juce::SoftwareImageType{} };
            for (int i = 0; i != 10000; ++i)
            {
                softwareImage.setPixelAt(4, 4, juce::Colours::red);
            }
        }

        g.drawImageAt(image1, 0, 0);
        g.drawImageAt(prerender, 0, 0);

    }
};

@matt thanks for the reply. Now I understand it better, and I see why repeated pixel manipulation is really bad and why it doesn’t matter where you build/manipulate the image (inside paint() or outside).

One last question: I use juce::Image::rescaled a lot. I think this isn’t as bad as pixel manipulation, right? Since it would only create a new juce::Image in GPU memory, using GPU memory as the source.

Image::rescaled is handled by the GPU; it’s just painting one image onto another. The rescaling is handled by the low-level graphics context.

Image Image::rescaled (int newWidth, int newHeight, Graphics::ResamplingQuality quality) const
{
    if (image == nullptr || (image->width == newWidth && image->height == newHeight))
        return *this;

    auto type = image->createType();
    Image newImage (type->create (image->pixelFormat, newWidth, newHeight, hasAlphaChannel()));

    Graphics g (newImage);
    g.setImageResamplingQuality (quality);
    g.drawImageTransformed (*this, AffineTransform::scale ((float) newWidth  / (float) image->width,
                                                           (float) newHeight / (float) image->height), false);
    return newImage;
}

Thanks! I’ll upgrade to JUCE 8 as soon as I tidy up my code a bit.

Is there anything worth mentioning about images being stored on the GPU when it comes to heavy use of bitmaps in GUI designs? The kind where there are potentially either many spritesheets (all states in a single image) or many single frames? Is there some limit we might reasonably hit where we would need to make one master image containing all the data?

I ask because I know OpenGL has a limited number of slots for storing textures, and it’s common there to batch things together.

Hello

If you have to compute the content of the images you draw in your component, I believe it’s better to compute that content on a dedicated thread and draw the images only once they are ready. In other words, don’t do the computation in your component’s paint() method.
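A minimal sketch of that compute-off-thread pattern, with the JUCE specifics left out (std::thread plus a mutex-guarded pixel buffer standing in for the image; the class and member names here are illustrative assumptions, and in a real component the GUI side would copy the finished buffer into a SoftwareImageType image via Image::BitmapData and call repaint()):

```cpp
#include <cstdint>
#include <mutex>
#include <thread>
#include <vector>

// Worker thread fills a pixel buffer; the GUI thread only swaps in the
// finished result, so the expensive work never happens inside paint().
// Single-shot for brevity: renderAsync is meant to be called once.
class BackgroundRenderer
{
public:
    void renderAsync (int width, int height)
    {
        worker = std::thread ([this, width, height]
        {
            // Expensive per-pixel work happens here, off the GUI thread.
            std::vector<uint32_t> pixels (size_t (width * height), 0xffff0000u); // opaque red

            std::lock_guard<std::mutex> lock (mutex);
            finished = std::move (pixels);
            ready = true;
        });
    }

    // Called from the GUI thread; cheap when the result isn't ready yet.
    bool fetchResult (std::vector<uint32_t>& out)
    {
        std::lock_guard<std::mutex> lock (mutex);
        if (! ready)
            return false;

        out = std::move (finished);
        ready = false;
        return true;
    }

    ~BackgroundRenderer()
    {
        if (worker.joinable())
            worker.join();
    }

private:
    std::thread worker;
    std::mutex mutex;
    std::vector<uint32_t> finished;
    bool ready = false;
};
```

In practice you would also notify the message thread when the result lands (e.g. with an async callback) rather than polling from paint().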


Using a single large bitmap instead of multiple small bitmaps is definitely a good idea.

Matt

There is a limit on bitmap size in Direct2D.

Instead of a strip with an insanely large width or height, put your bitmaps into an x/y grid (e.g. 16x16). That way neither the width nor the height gets very large, so the atlas is guaranteed to work with D2D.
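To make the grid idea concrete, here is the arithmetic for locating a frame’s cell in such an atlas (a plain C++ sketch; the struct and function names and the 16-column layout are illustrative assumptions — in JUCE you would then pass the resulting rectangle as the source area of Graphics::drawImage):

```cpp
struct CellRect { int x, y, w, h; };

// Locate frame 'index' inside an atlas laid out as a grid of 'gridCols'
// columns of cellW x cellH cells, filled left-to-right, top-to-bottom.
inline CellRect frameRect (int index, int gridCols, int cellW, int cellH)
{
    const int col = index % gridCols;
    const int row = index / gridCols;
    return { col * cellW, row * cellH, cellW, cellH };
}
```

For a 16x16 grid of 64x64 cells, frame 0 maps to the top-left cell and frame 17 to row 1, column 1.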


Sure, you can do that when you design your software from the start, but we have hundreds of images like that, so changing it now is a huge task.

You could spend at most an hour or two writing a little Python+Pillow script to do it all for you in one fell swoop; I’ve written a few image-manipulation scripts with those tools with zero prior knowledge [1] and got exactly the result I needed with a bit of doc reading and not much effort overall.

[1] OK, I knew my way around Python already, but it’s worthwhile learning if you have zero Python experience.