I haven’t confirmed whether this is the case on Ubuntu and macOS yet. Simply put: I have a basic plugin with a couple of components in it, and its runtime peaks at 5.3 MB of RAM in a release build with the standard software renderer on Windows. If I edit the code to attach an
OpenGLContext, the editor’s RAM floats at 63 MB.
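For reference, the attachment is just the stock pattern; a minimal sketch of what I mean (the member name is mine, the calls are the standard juce::OpenGLContext API):

```cpp
// Member of the editor component:
juce::OpenGLContext openGLContext;

// In the editor's constructor: render this component and its children via GL.
openGLContext.attachTo (*this);
openGLContext.setContinuousRepainting (false);

// In the editor's destructor, before child components are torn down:
openGLContext.detach();
```

Nothing exotic: no custom shaders, no OpenGLRenderer callbacks, just the attach call.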
Seems pretty fat to me when OpenGL is typically really light.
Anybody else seeing this?
Looks like memory usage ramps up quickly when renderFrame is first called. I’ll have to look more closely at this later this week.
I suspect I’m getting crickets because there isn’t much GL knowledge on this forum as a whole.
What I’m kind of gathering while quickly debugging is that there are way too many textures. Still investigating - I might be able to work out an alternative…
Most likely texture memory. If you allocate textures, they will gobble up at least (width * height * bytesPerPixel) of memory.
Can’t say I’m the slightest bit surprised that a GL driver might allocate that much space for all the bloat around its context state. And if your card has a unified memory architecture, the pixel buffers themselves will count as main memory that gets allocated.
In a different conversation @reuk pointed me to OpenGLImageType.
It sounds to me like each graphics context might allocate at least one framebuffer of this type?
And the other way round there’s OpenGLTexture, so maybe loading a normal Image ends up in a texture automatically, as @OBO suspected above?
But I don’t know which of these gets triggered automatically without writing OpenGL code yourself.
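If that’s the mechanism, the difference would look something like this. I haven’t verified that this is what JUCE triggers internally, but both classes exist in the juce_opengl module, and a GL context must be active when the GL-backed objects are created:

```cpp
// A normal software Image lives in main memory:
juce::Image softwareImage (juce::Image::ARGB, 512, 512, true);

// An OpenGLImageType-backed Image allocates a GL framebuffer instead:
juce::Image glImage (juce::OpenGLImageType().create (juce::Image::ARGB, 512, 512, true));

// Loading an Image into an OpenGLTexture uploads a full copy to the GPU:
juce::OpenGLTexture texture;
texture.loadImage (softwareImage);  // roughly 512 * 512 * 4 bytes of texture memory
```

So if the GL paint path caches component images as textures, that memory would scale with window size, not with what you draw.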
I don’t understand exactly: 500×600 at 32 bits (4 bytes) per pixel would only be about 1.2 MB, which doesn’t come anywhere near my reported amount.
A full-HD (1920×1080, 16:9) texture at 4 bytes per pixel is about 8 MB, so roughly eight buffers of that size would account for 63 MB.
When you’re looking at memory consumption, the headline figure is always misleading because of fragmentation and the fact that chunks of it are often unused, paged out, and don’t matter at all.
That’s even more the case when you’re talking about a driver, and GL drivers are absolute beasts of complexity.
I bet there are countless types of pre-computed lookup tables, shader code compiler gizmos and a million other strange obscure things that need to be allocated, or at least reserved in advance, probably in blocks that might need special alignment or to be on page-table boundaries, etc.
50 meg of working space is peanuts for something like a game, and probably only a small percentage of what any modern app where you care about graphics performance would be using. Why does it bother you so much?
I’ve never seen this kind of bloat suddenly occur in any of the GL apps I write at work, and I think you’re pointing the blame at everything else instead of looking at JUCE first to be absolutely certain it’s not the problem.
Maybe 50 MB isn’t much to you, but it is to me. I don’t like to have mystical garbage allocated inside my apps - I want to be certain there’s nothing I’m doing wrong.
Based on my past and current experience, my gut feeling is pointing to JUCE here and I will get to the bottom of it.
I’m fully willing to accept it’s a driver thing, an OS thing, or something else outside the framework or the app/plugin, but I honestly don’t take what you’re saying at face value: I’ve never seen this come up before, though I hadn’t used JUCE for OpenGL-based app development until now.
Thinking about this more, I realised this unfair question frustrates me. You seem more preoccupied with telling me not to care than with the issue itself, and deferring any problems to everything else is the kind of lazy thinking that shows a lack of care and responsibility for your framework.
I can definitely appreciate the total level of effort that has gone into this repository, but only thinking in broad strokes strictly for audio isn’t good enough in this case.
I’m thinking that, in order to avoid unnecessary politicking here on the forums to convince you otherwise, I need to write my own Component renderer that uses GL.
Um… no, I was genuinely wondering why the memory use was bothering you and was interested to hear your answer! My gut feeling was (is) that a GL driver allocating stupid chunks of memory is just kind of expected behaviour, and should make no measurable difference to performance, so I wondered if you actually had a use-case where it was causing problems.