I wanted to gain some clarity on image rendering with D2D. I’m about to implement a goniometer to show phase correlation. The art style is a lot of dots which fade over time. Historically I’ve done this by rendering each frame’s new dots into a persistent image; on the next frame I reduce the alpha of the whole image, then render the new dots on top. This cycle repeats every frame, so old dots fade out while new dots appear at full alpha.
Does D2D change this in any way? I know it’s not good to do anything pixel by pixel, but is rendering into an image every frame an issue in its own right? How should this ideally be done?
So far we’ve been using a SoftwareImage type to do all the pixel work (fading out pixels, setting specific pixels to a colour, or drawing lines on top of an existing bitmap) and then rendering the resulting image into a Graphics context. Curious whether this is still a recommended approach when we have a lot of individual pixel operations, now that D2D is in the picture.
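To make the question concrete, this is roughly the kind of CPU-side per-pixel work I mean (a self-contained sketch, not our actual SoftwareImage code): a whole-image fade on a premultiplied ARGB buffer, touching every pixel every frame.

```cpp
#include <cstdint>
#include <vector>

// Fade a premultiplied-ARGB buffer by an integer factor (0..255,
// where 255 = no fade). Every channel is scaled, so the whole trail
// image dims uniformly. This loop runs over every pixel each frame.
void fadeImage(std::vector<uint32_t>& argb, uint8_t factor)
{
    for (auto& px : argb)
    {
        uint32_t a = ((px >> 24) & 0xffu) * factor / 255;
        uint32_t r = ((px >> 16) & 0xffu) * factor / 255;
        uint32_t g = ((px >> 8)  & 0xffu) * factor / 255;
        uint32_t b = ( px        & 0xffu) * factor / 255;
        px = (a << 24) | (r << 16) | (g << 8) | b;
    }
}
```

So the underlying question is whether this kind of loop should stay on the CPU, or whether with D2D it’s better expressed as drawing the accumulated image back at reduced opacity so the GPU does the fade.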