Render to image with D2D

Hi everyone,

I wanted to gain some clarity on image rendering with D2D. I’m about to implement a goniometer to show phase correlation. The art style is a lot of dots which fade over time. Historically I’ve done this by rendering all new dots into an image; then, on the next frame, I reduce the alpha of the whole image and render the new dots into it. This cycle repeats every frame, so old dots fade out while new dots appear at full alpha.

Does D2D change this in any way? I know it’s not good to do anything pixel by pixel, but is rendering into an image every frame an issue in its own right? How should this ideally be done?

Thanks in advance!


That should be doable. Let me give this some thought.

Matt

How big are the dots? How many dots are you drawing at once?

I’d be curious about this too.

So far we’ve been using a SoftwareImage type to do all the pixel work (such as fading out pixels, setting pixels to a specific colour, or drawing lines on top of an existing bitmap) and then rendering the resulting image into a Graphics context. Curious whether this is still the recommended approach when we have a lot of individual pixel operations and D2D to work with.

Here’s one approach; I can think of a few others.

https://github.com/mattgonzalez/JUCEDirect2DTest/blob/pips/PIPs/ImageOverlay.h

The overall concept:

  • Create a software image
  • Paint the dots onto the software image
  • Create two Direct2D images called previousImage and composite
  • Clear the composite image
  • Paint previousImage onto composite with partial transparency
  • Paint the software image onto the composite with full opacity
  • Paint the composite image onto the window
  • Swap the composite and previousImage variables

This avoids the cost of mapping the software image from the GPU back to the CPU.

Hope that helps-

Matt
