Can I get the pixels from Graphics?

I’m implementing some custom layer effects usable in Component::paint().

Is it possible to get the pixels underneath a component through the Graphics object? Or to get at the underlying Image to perform custom compositing / blending operations?

Nope. The thing is, the Graphics object will often be wrapping something that has no accessible pixels, e.g. when using CoreGraphics in a component paint() callback.

It would be nice if there were a way to set, at runtime, a property on the ComponentPeer indicating that all Graphics contexts should be software-based, and to expose a getImage() function that returns the image (returning a null image for non-software renderers).

Why not just use your own temporary image to render into, if you need the pixels? Even if I did what you suggested, that’s all that’d be happening internally, so you might as well just do it yourself explicitly.
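For reference, a minimal sketch of that approach, assuming an ARGB image the size of the component (the drawing calls are just illustrative placeholders):

void paint (Graphics& g)
{
  // Render into a temporary image whose pixels are accessible
  Image temp (Image::ARGB, getWidth(), getHeight(), true);

  {
    Graphics tg (temp);
    tg.setColour (Colours::cornflowerblue);
    tg.fillEllipse (getLocalBounds().toFloat());
  }

  // The pixels can now be read or modified directly
  Image::BitmapData pixels (temp, Image::BitmapData::readWrite);
  // ... custom per-pixel processing here ...

  g.drawImageAt (temp, 0, 0);
}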

Looking at GraphicsContext more closely, one could do this:

void paint (Graphics& g)
{
  LowLevelGraphicsSoftwareRenderer* const renderer
      = dynamic_cast<LowLevelGraphicsSoftwareRenderer*> (g.getInternalContext());

  if (renderer != nullptr)
  {
    // get the image from the renderer
  }

  // ...
}

The problem is that LowLevelGraphicsSoftwareRenderer doesn’t expose the image. Could we add a function like this?

Image LowLevelGraphicsSoftwareRenderer::getImage ()
{
  return savedState->image;
}


Even if you had the image, you’d have no idea exactly which part of it your Graphics object is drawing into.

Seriously, just do this yourself with a temporary image, it’s not functionality that belongs in the Graphics class.

I’m doing that, but it requires an extra drawImage() call, and it makes some compositing operations impossible.

For example, imagine component A on top of component B. In A::paint() I want to draw a gradient with 50% transparency and use the “screen” compositing operator. Currently this is not possible, because A::paint() can’t access the Image that contains the pixels that B drew. As you pointed out, for a CoreGraphics context you can’t get the pixels anyway.
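The “screen” operator itself is trivial once both sets of pixels are available. Here is a rough sketch using Image::BitmapData (the function name and parameters are illustrative, and both images are assumed to be ARGB and the same size):

static void blendScreen (Image& base, const Image& overlay, float opacity)
{
  Image::BitmapData dst (base,    Image::BitmapData::readWrite);
  Image::BitmapData src (overlay, Image::BitmapData::readOnly);

  for (int y = 0; y < dst.height; ++y)
  {
    for (int x = 0; x < dst.width; ++x)
    {
      const Colour d (dst.getPixelColour (x, y));
      const Colour s (src.getPixelColour (x, y));

      // screen: result = 1 - (1 - dst) * (1 - src), per channel
      const Colour screened ((uint8) (255 - (255 - d.getRed())   * (255 - s.getRed())   / 255),
                             (uint8) (255 - (255 - d.getGreen()) * (255 - s.getGreen()) / 255),
                             (uint8) (255 - (255 - d.getBlue())  * (255 - s.getBlue())  / 255),
                             d.getAlpha());

      // an opacity of 0.5f gives the 50% transparency mentioned above
      dst.setPixelColour (x, y, d.interpolatedWith (screened, opacity));
    }
  }
}

The problem is obtaining “base” in the first place: it has to contain the pixels that B already drew.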

However, it should be possible to support this use case with just a few small, non-intrusive adjustments to the Graphics classes: first, by marking the ComponentPeer as “software-renderer-only”, and second, by giving access to the underlying Image of the LowLevelGraphicsSoftwareRenderer (with the implicit promise that this interface won’t change in the future).

Right, and this could be done without changing Graphics.

Is this because of the origin being changed for each Component::paint()?

Well, looking at the classes involved: if we finish the changes that allow a custom renderer to be used for the ComponentPeer, another solution would be to subclass LowLevelGraphicsSoftwareRenderer to expose the necessary functions, like getImage() and some sort of getOrigin(). This would require no changes to JUCE other than what we talked about a while ago, namely going through the LookAndFeel to create the low-level context:

Old

new LowLevelGraphicsSoftwareRenderer (Image (this))

New

lookAndFeel->createLowLevelGraphicsContext (Image (this))
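With that in place, the subclass could look roughly like this. This is a sketch only; the exact virtual signatures depend on the JUCE version, and getOrigin() here is just the sort of accessor that would be needed, not an existing API:

class InspectableSoftwareRenderer : public LowLevelGraphicsSoftwareRenderer
{
public:
  explicit InspectableSoftwareRenderer (const Image& target)
    : LowLevelGraphicsSoftwareRenderer (target), image (target) {}

  Image getImage() const noexcept       { return image; }
  Point<int> getOrigin() const noexcept { return origin; }

  void setOrigin (Point<int> delta) override
  {
    // setOrigin() is relative, so accumulate the offsets as each
    // component is painted (save/restore handling omitted for brevity)
    origin += delta;
    LowLevelGraphicsSoftwareRenderer::setOrigin (delta);
  }

private:
  Image image;
  Point<int> origin;
};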

Sorry, this all just sounds like a hack to me.

Perhaps you are just dismissing it without reading it, but what I’m saying is that if we finish part of the changes to allow custom renderers, this use case can be accommodated without any “hacks” in JUCE. I have bumped the relevant posts.

Anything that involves providing more access to pixels is the wrong direction for me. In the future only GPU shader programs will care about pixels, and I want to be moving my APIs towards that aim, not adding access methods that I’ll just have to remove again one day.

Why not think laterally and suggest a feature that would allow you to do what you need without mentioning Images or pixels?

Forget everything I said about getting access to pixels. This should instead be thought of as “Can I customize the rendering pipeline to use my own subclass of LowLevelGraphicsSoftwareRenderer”.

I’m building a set of classes to replicate Photoshop “Layer Effects”. I do this by composing a separate Image and doing blend operations. You can see some of it here:

https://github.com/vinniefalco/LayerEffects/blob/master/VFLib/modules/vf_gui/graphics/vf_BlendPixels.h

and

https://github.com/vinniefalco/LayerEffects/blob/master/VFLib/modules/vf_gui/graphics/vf_LayerContext.h

The idea is that in response to paint() you construct a LayerContext, draw into it, and set some attributes. The attributes include everything you see in the Photoshop “Layer Effects” dialog. So, options for drop shadow, inner glow, outer glow, emboss, stroke, overlay, etc… including the master blend mode, opacity, and per-channel transparency lookup tables.
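Usage would look something like this (the names here are illustrative rather than the exact VFLib API; see the linked headers for the real thing):

void MyButton::paint (Graphics& g)
{
  LayerContext layer (g, getLocalBounds());

  // configure attributes, mirroring the Photoshop "Layer Effects" dialog
  layer.getOptions().general.opacity   = 0.5f;
  layer.getOptions().dropShadow.active = true;
  layer.getOptions().outerGlow.active  = true;

  Graphics& lg (layer.getContext());
  lg.setColour (Colours::orange);
  lg.fillPath (buttonShape);

  // on destruction, the layer composites itself onto g with blending
}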

Clearly, for this sort of thing to work it will be necessary to have access to the pixels underneath to perform the custom compositing. Of course it will not be possible to implement this across all renderers. Although OpenGL provides some of the compositing operators, there will always be modes which are unsupported.

Therefore, I accept that the usage of my “LayerContext” object will only be possible when using either the JUCE software renderer (if you expose enough information) or a custom renderer that I will provide (if you finish tweaking JUCE to support custom renderers).

I don’t think it’s possible to implement this sort of feature without mentioning images or pixels.

However, as I pointed out earlier, if we just finish the changes that we talked about in April (choosing a renderer at runtime instead of being locked to LowLevelGraphicsSoftwareRenderer for every ComponentPeer), this sort of thing will be possible without any “hacks” in JUCE.

A related issue is the idea of not supporting software rendering in the future. I believe this to be a mistake. JUCE should always offer software rendering, and the option to customize the renderer at run-time (as well as exposing the software renderer internals so they can be subclassed or re-used).

I am concerned that the future of JUCE is to lock applications into only “approved” models of programming. We’ve seen some of this happen in the IntroJucer (no way to turn off link-time code generation, no way to generate debug symbols in an optimized build, etc…).

Just as with IntroJucer, it is nice to OPTIONALLY use CoreGraphics or OpenGL when available, and OPTIONALLY allow JUCE to simply use whatever low-level graphics implementation is most appropriate. But forcing the use of these alternatives and dropping the software renderer would be bad. Look at what happened with CoreGraphics: turning it off and using the software renderer boosted the speed of my application by quite a bit. Using FreeType and the software renderer for fonts improved the appearance of my small text. If we had instead decided that font hinting wasn’t important, and that we should always use the operating system to render text, these benefits would never have happened.

I believe we should not be so hasty to discard flexibility in favor of the system API flavor of the month. Sure, provide these implementations as options (along with the ability to customize the renderer), but let’s not dump the girlfriend just because we see what seems to be a younger, more attractive girl on the street.

In the vein of what Jules said, your use-case could be solved using OpenGL with GLSL shaders layered on top of each other.

OpenGL is pretty well always available, as are GLSL shaders. They do work differently on embedded systems, but rewriting a shader sounds like an easier thing to do than writing a whole new mechanism to customize the pipeline.

Oh, and OpenGL isn’t a flavour of the month. More like the flavour of a few decades. :)

Wouldn’t doing your effects the way you want be entirely CPU-based anyway? If so, that would mean absolutely bottlenecking your CPU, when your GPU could be taken full advantage of!

I don’t know much about GLSL, but it is not platform-independent; it is specific to OpenGL. Implementing the solution as you describe means being locked to OpenGL.

Actually, to get JUCE to support custom software renderers shouldn’t take more than a handful of lines of code for each platform. Jules already did most of the work. We just need that last step. I bumped the relevant posts.

Yes, all of the effects I propose are CPU-based. For them to be platform-agnostic, they have to be written in C++ rather than the OpenGL shading language.

My use case for these effects is to reproduce the expressive power of Photoshop layers, for creating awesome-looking buttons and controls that are drawn using vectors instead of images hard-coded into the application.

Up until now, developers have been faced with a choice: a kick-ass looking user interface with lots of glows and interesting Photoshop artwork, but stored as images; or procedurally drawn controls that scale to any resolution but with limited effects, since JUCE only supports linear and radial gradients with transparency, and only one image compositing operator.

My implementation gives you the expressive power of Photoshop layers applied in real time to vector-based Graphics drawing: for example, applying an outer glow, colour overlay, and emboss to a Path object. Is it slow? Yes it is! You certainly can’t expect 30 frames per second. But for a button or slider’s mouse-rollover, pressed, and highlighted states it is more than sufficient. User interface controls are typically static: draw them once, and they change appearance only infrequently. So the trade-off is amazing-looking controls, but at the speed of the CPU and the software renderer. It is a good trade-off!

This type of image effect will be possible using only a Path object and drawText, and appropriate options in a LayerEffectsContext object:

[attached image: text rendered with glowing layer effects applied]

The glow is produced using the “Drop Shadow”, “Outer Glow”, and “Colour Overlay” layer effects.

My code will allow these types of effects to be applied to anything that can be drawn in a Graphics: text, paths, lines, rectangles, etc… any shape!

Also, you can still get the performance of OpenGL for Component objects that need a high refresh rate while keeping access to these blending effects: simply pre-calculate a few Image objects for the control with the various effects applied, then draw them through a standard Graphics context instead of the software one.

Of course, it won’t be possible to use anything other than the normal image compositing operator for the bottom-most layer (the “Background” layer in Photoshop, which is the other components that you are drawing on top of) if you use OpenGL. But you can still stack custom operators on top of this base layer.

If you are satisfied with only having the “normal” composite operator for your Component, then you could still calculate the layer effects dynamically and apply them in an OpenGL context. There will be some limitations on what is possible.

Here’s a concrete example: a Component which has setOpaque (false), and whose paint() composites a colour image onto the background using the “luminance” operator. This could be the glow for the rollover or pressed state of a button. In order to apply the luminance operator, the pixels underneath the component are required to perform the calculation.
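To make the dependency explicit, here is a rough sketch of such an operator, using a simple Rec. 601 luma approximation rather than a full Photoshop-style HSL conversion (the helper name is illustrative). Note that it cannot be evaluated without the background colour, which is exactly what a CoreGraphics-backed context won’t hand over:

static Colour blendLuminance (Colour background, Colour source)
{
  auto luma = [] (Colour c)
  {
    return 0.299f * c.getFloatRed() + 0.587f * c.getFloatGreen() + 0.114f * c.getFloatBlue();
  };

  // rescale the background's channels so its brightness matches the source's
  const float bg = luma (background);
  const float scale = bg > 0.0f ? luma (source) / bg : 0.0f;

  return Colour::fromFloatRGBA (jlimit (0.0f, 1.0f, background.getFloatRed()   * scale),
                                jlimit (0.0f, 1.0f, background.getFloatGreen() * scale),
                                jlimit (0.0f, 1.0f, background.getFloatBlue()  * scale),
                                background.getFloatAlpha());
}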

I’m still skeptical, although very curious to see this in action. Maybe my game-development-oriented background is making me biased; the higher the fps for everything in general, the better! (Whilst having everything procedurally drawn.)

That’s great! But what happens when you roll over a bunch of these on some shitty laptop? I would hate to see the frame rate of some JUCE user’s DAW-like host suddenly drop because someone rolled over a few buttons that were intended to look damned pretty.

  1. For Components that need a high refresh rate, it is advised to call setOpaque (true) and use only the normal composite operator.

  2. If it is a requirement that a native OS API be used (CoreGraphics, OpenGL, DirectDraw) then the interface can be restricted to the normal composite operator. You could still have custom operators for stuff drawn on top of the background layer.

  3. If performance is an issue, the button images can be cached after they are generated, as sketched below.
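Something along these lines, for example (all names here are hypothetical, and the actual effect rendering is elided):

class CachedEffectButton : public Button
{
public:
  CachedEffectButton() : Button ("cached") {}

  void paintButton (Graphics& g, bool isOver, bool isDown) override
  {
    const int state = isDown ? 2 : (isOver ? 1 : 0);

    if (cache[state].isNull())
      cache[state] = renderWithLayerEffects (state); // expensive, but done only once

    g.drawImageAt (cache[state], 0, 0);
  }

private:
  Image renderWithLayerEffects (int state)
  {
    Image img (Image::ARGB, getWidth(), getHeight(), true);
    Graphics g (img);
    // ... draw the Path and apply the layer effects for this state ...
    // (a real implementation would also invalidate the cache in resized())
    ignoreUnused (state);
    return img;
  }

  Image cache[3];
};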

On the other hand, for controls that do need to be updated frequently, this is still a workable system.