startRenderThread


#1

Hi,
I’m starting to play with an openGL component, and I’m rendering way too many objects for the default refresh rate to be applicable. For example, it might take seconds between the start of a render and a stop, and I want to be able to look things over before the next refresh–possibly even manually refresh with a button. I found a clue in the documentation:

virtual void OpenGLComponent::startRenderThread() [protected, virtual]
Kicks off a thread to start rendering. The default implementation creates and manages an internal thread that tries to render at around 50fps, but this can be overloaded to create a custom thread.
However, when I looked into the default function, I didn’t find any variable set to 50, or to the reciprocal of 50, or anything like that.
Also, the default thread that gets started is protected, so I couldn’t experiment with starting it at a different rate.
I suppose I could avoid starting it and do this manually, but is there a way to control the time between updates explicitly?
Or maybe I would need to create my own thread to replace the protected one?
Any suggestions would be appreciated.
Thanks,
Sean


#2

So you want to render in a thread still, but have total control over when you render? In the tip, it’s factored out to allow that (note that on Linux there’s a small error still to be tracked down).

You should not attempt to use the built-in thread - that’s for a trivial implementation with a steady update. But it’s set up to allow another thread to do the rendering.

Set useBackgroundThread to true in the constructor - that stops the paint method from drawing.
Override startRenderThread and stopRenderThread to use your own thread. If you don’t use a regular thread, you could leave them empty, although I’d strongly recommend you render on the same thread every time. Maybe you could make your thread wait for a marker to say it needs to render.
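That “wait for a marker” idea can be sketched in plain standard C++ (using std::condition_variable rather than the JUCE thread classes, just to show the pattern - the class and method names here are made up, and the actual GL locking/drawing would go where the comment is):

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>

// Illustrative sketch: a render thread that sleeps until it is
// explicitly told to draw a frame. In JUCE terms this logic would
// live in your startRenderThread/stopRenderThread overrides.
class OnDemandRenderer
{
public:
    OnDemandRenderer()   { worker = std::thread ([this] { run(); }); }

    ~OnDemandRenderer()
    {
        { std::lock_guard<std::mutex> lk (m); quitting = true; }
        cv.notify_one();
        worker.join();
    }

    // Called from the message thread, e.g. a "render now" button click.
    void requestRender()
    {
        { std::lock_guard<std::mutex> lk (m); renderPending = true; }
        cv.notify_one();
    }

    int framesRendered() const { return frames.load(); }

private:
    void run()
    {
        for (;;)
        {
            std::unique_lock<std::mutex> lk (m);
            cv.wait (lk, [this] { return renderPending || quitting; });

            if (quitting)
                return;

            renderPending = false;
            lk.unlock();

            // ... lock the GL context and do the long render pass here ...
            ++frames;
        }
    }

    std::thread worker;
    std::mutex m;
    std::condition_variable cv;
    bool renderPending = false, quitting = false;
    std::atomic<int> frames { 0 };
};
```

Each requestRender() wakes the thread for exactly one pass; requests arriving while a frame is already pending coalesce into a single render, which is usually what you want when frames take seconds.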

Render any time you like by calling renderAndSwapBuffers, or render yourself with something like:

[code]const ScopedLock l (getContextLock());

if (! makeCurrentContextActive())
    return false;

// ----------- Custom rendering here

swapBuffers();    // Use OpenGLComponent's swap[/code]

FYI - the 50 Hz refresh is generated by the timer in the OpenGLRenderThread - 20 millisecs per visit (1000 ms / 50), which is why you won’t find a literal 50 anywhere.

Also - what the heck are you rendering that takes many seconds per frame? I’m guessing you must be doing a fair amount of stuff that could happen in another thread, less often? It would be more usual to draw the ‘current’ stuff regularly, then update things when they change, rather than have a render pass that takes so long. I’m not saying it’s ‘wrong’, but you should understand that you’re doing it differently from the norm, so you won’t be able to leverage reams of accepted practice (such as the standard background loop).

Bruce


#3

Thanks for explaining the strategy. I get it now :)
Regarding what I’m rendering, it’s massive amounts of 3D data.
Even just the visibly moving subset of the data takes this long to render.
I’m not doing typical OpenGL work, apparently.
I’m working on some complicated molecular models as a hobby, and I’m using OpenGL as a tool to get in and use my eyes to evaluate performance and to assist with development.
I have a long way to go on my current projects, but you can see an example movie of a much less complicated system that is finished at the following link:
ftp://128.255.117.209
Use “bdmovie” for username, and “download123” for password


#4

That’s a fun movie.

I guess you’re using OpenGL as a renderer more in the style of a Pixar-type long-frame render system.

But - what happens if you use things as they are? If each frame takes multiple seconds to render, nothing bad should happen - the next frame’s rendering will just start right away. Unless you specifically need to leave a lot of dead time between frame passes, it would still work.

What I’m saying is that your rendering wouldn’t get ‘restarted’ every 1/50 of a sec, it just wouldn’t run that fast.

Bruce


#5

Yeah, you’re right about the status quo; I just end up with no time between renders for control.
Even one extra render can make the app feel unusable when each render takes a long time.
Imagine clicking a button, waiting out the long render already in progress, and then waiting through another long render before the new image appears.
It’s not going to be perfect in the end, but what I’m going to have to do is put in options to control when I want to render, when I don’t want to render at all, and when I want to render on a button click - probably just those few rendering modes.
I see how to do that now. I had thought I was missing something obvious in what was already coded, but writing my own threads makes the most sense to me now.
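For what it’s worth, those few modes could be as simple as an enum the render thread checks before each pass - a minimal sketch in plain C++ (the mode names and struct are made up, not anything from the JUCE API):

```cpp
#include <atomic>

// Hypothetical render-mode switch for a custom render thread.
// "Continuous" re-renders as soon as the last frame finishes,
// "Manual" renders only when a button press has flagged a frame,
// "Off" skips rendering entirely.
enum class RenderMode { Off, Manual, Continuous };

struct RenderControl
{
    std::atomic<RenderMode> mode { RenderMode::Manual };
    std::atomic<bool> framePending { false };

    // Called from the UI: a "render now" button press.
    void requestFrame()   { framePending = true; }

    // Called by the render thread before each pass:
    // returns true if a frame should be drawn now.
    bool shouldRender()
    {
        switch (mode.load())
        {
            case RenderMode::Continuous:  return true;
            case RenderMode::Manual:      return framePending.exchange (false);
            case RenderMode::Off:
            default:                      return false;
        }
    }
};
```

The exchange() consumes the pending flag atomically, so one button click yields exactly one render pass even if the thread polls again immediately.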
Pixar must be a fun place to work ;) I once sent them a resume. They wrote back immediately explaining that they receive applications faster than they can open them, so they don’t bother to open them…
I’m impressed with the continued utility of OpenGL. I mean, for years now, graphics cards have jumped to new levels, computers have improved dramatically, and yet OpenGL still does everything it needs to do for me =-)