Manually set pixels in framebuffer + VSync

I'm building a MIDI visualiser.

Currently I'm drawing rectangles, but I'm thinking about plotting individual pixels.

How should I set about doing this efficiently?

I could preallocate a block of memory that doesn't need to be changed until my component resizes.

Then every frame I have to zero it out and redraw pixels. And then somehow display it.

Maybe I should just use an Image?
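For what it's worth, the preallocate-and-clear idea above can be sketched with a plain ARGB buffer; the `PixelBuffer` name and layout here are my own invention, not JUCE API:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical sketch: a preallocated ARGB pixel buffer that is only
// reallocated when the component resizes, zeroed once per frame, then
// drawn into pixel by pixel.
struct PixelBuffer
{
    int width = 0, height = 0;
    std::vector<uint32_t> pixels; // 0xAARRGGBB per pixel

    void resize (int w, int h)   // call only when the component resizes
    {
        width = w;
        height = h;
        pixels.assign ((size_t) w * h, 0u);
    }

    void clear()                 // call once per frame
    {
        std::fill (pixels.begin(), pixels.end(), 0u);
    }

    void setPixel (int x, int y, uint32_t argb)
    {
        if (x >= 0 && x < width && y >= 0 && y < height)
            pixels[(size_t) y * width + x] = argb;
    }
};
```

In JUCE you could keep a juce::Image of matching size next to this, copy the buffer across via Image::BitmapData, and blit it with Graphics::drawImageAt in paint() -- that wiring is left out here.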

I'm also a little concerned about my current solution using a (say) 60Hz refresh timer; it's sputtering as it scrolls by -- minuscule little speed bumps that are very noticeable, as if it suddenly skips a frame.  This behaviour appears to be independent of the complexity within paint().  And it is a fast machine!  Is there some lower-level callback, e.g. a vsync callback, I can/should attach to?

I vaguely remember some standard(?) double buffering technique where the graphics card uses 2 buffers, and periodically switches front and back -- as soon as this switch occurs there should be some callback that gives you a complete frame duration to populate the new back buffer.  On iOS several years back this was CADisplayLink. Has JUCE abstracted this concept?

I'm a little reluctant to go the whole hog and GL it, just because it's rather a lot of work and begins a slippery slope towards obsession. So I'm trying to trade off code complexity against performance.

π

Are you using the OpenGL renderer? I've found it much faster, at least on mobile.

Have a component that derives from OpenGLRenderer, attach an OpenGLContext to it, call setContinuousRepainting (true), and call setSwapInterval (1). This will make all rendering happen in sync with the native frame updates. Note that some platforms (OSX) won't give you perfect updates in non-fullscreen windows.
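Roughly, that wiring looks like this (untested sketch, assuming JUCE; note that setSwapInterval needs an active context, so it's called once the context is created):

```cpp
#include <JuceHeader.h>

// Sketch of the setup described above: a component rendered via
// OpenGLRenderer, with the context driving repaints in sync with vsync.
class GLCanvas : public juce::Component,
                 public juce::OpenGLRenderer
{
public:
    GLCanvas()
    {
        context.setRenderer (this);
        context.setContinuousRepainting (true);  // driver-paced repaints
        context.attachTo (*this);
    }

    ~GLCanvas() override { context.detach(); }

    void newOpenGLContextCreated() override
    {
        context.setSwapInterval (1);             // swap once per vblank
    }

    void renderOpenGL() override
    {
        // Called on the GL thread once per frame; do the drawing here.
        juce::OpenGLHelpers::clear (juce::Colours::black);
    }

    void openGLContextClosing() override {}

private:
    juce::OpenGLContext context;
};
```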

Is it preferable to use setContinuousRepainting (true) instead of a Timer?
My project only uses OpenGL on Android, not iOS or OSX. I use a macro to do the attaching to OpenGLRenderer for the desired platform only.

It gives the best results and a completely stable framerate, as the graphics driver is directly controlling the timing... however, a couple of implementations I've seen (nVidia drivers) will spin waiting for the vblank signal, i.e. using 100% CPU -- even though you don't do anything -- and you don't get more frames out of it. The driver thread will yield, however, so effectively it doesn't use CPU, as time will be given away to other threads while it is waiting.

Still, it's 100% power usage, and that may be problematic on mobiles/laptops.

I don't quite understand what you're saying there; it seems to be a contradiction, no?  You say the driver thread yields, i.e. no extra CPU usage, but then you say it's 100% power usage.

I'm trying to minimise my power consumption as I want to target mobile. What fraction of devices get hit by this problem?

Is there any workaround, like a 1ms timer running on a high-priority thread that checks for the vblank?

π

I run my animations using the Timer class at 30Hz. Gives quite acceptable results. Don't know if it's the best way to do it, but it certainly works for my needs.
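For reference, the Timer approach described here is just this (sketch, assuming JUCE):

```cpp
#include <JuceHeader.h>

// Sketch of a 30 Hz Timer-driven animation, as described above.
class Visualiser : public juce::Component,
                   private juce::Timer
{
public:
    Visualiser() { startTimerHz (30); }  // fires roughly every 33 ms

private:
    void timerCallback() override { repaint(); }

    void paint (juce::Graphics& g) override
    {
        g.fillAll (juce::Colours::black);
        // ... draw the current animation frame here ...
    }
};
```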