OpenGL Path Rendering Performance/Smoothness

Hey all,

I have an oscilloscope component whose parent provides an OpenGLContext and OpenGLRenderer with setComponentPaintingEnabled (true), and the oscilloscope draws its line with a simple g.strokePath (p, ...).

I’m not well-versed in what happens beneath that strokePath call, so I have a few questions about the tradeoff between smooth line drawing and fast rendering.

  1. Does the path rendering here happen on the GPU? Or does the CPU compute the set of vertices and pass that off to the GPU for rendering? (Realizing as I type that I can probably find the answer by profiling. I will report back with my findings.)
  2. Is there any benefit to providing more than one Y value per pixel? E.g. p.lineTo (0.25f, getOutputPoint (0.25f)). Obviously I don’t have quarter-pixels to draw here, but I could imagine, for example, that extra Y values might help inform the anti-aliasing in the path rendering algorithm. Or perhaps those extra Y values come into play for better scaling to HiDPI displays via the paint call’s AffineTransform under the hood. But obviously more points = more cost, so I’m trying to make sure I understand both sides of the tradeoff here and make a well-informed decision about how I draw this.
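For what it’s worth, here’s a minimal standalone sketch (plain C++, no JUCE; getOutputPoint here is a stand-in for the sampler in your question, and the sine is just dummy data) of the baseline approach of one path point per pixel column. One caveat I’d hedge on: on a HiDPI display the component is scaled by an AffineTransform, so “per pixel” arguably means per *physical* pixel, i.e. the component width times the display scale.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct PathPoint { float x, y; };

// Hypothetical stand-in for the oscilloscope's sampler: maps a
// normalised x position in [0, 1) to a Y value.
float getOutputPoint (float normX)
{
    return std::sin (normX * 6.28318f);
}

// Build one path point per pixel column. This is roughly the minimum
// number of segments the CPU rasteriser has to process for a line that
// spans the full component width; sub-pixel points add segments (and
// CPU cost) without adding visible detail.
std::vector<PathPoint> buildPath (int widthPixels)
{
    std::vector<PathPoint> points;
    points.reserve ((size_t) widthPixels);

    for (int px = 0; px < widthPixels; ++px)
    {
        auto normX = (float) px / (float) widthPixels;
        points.push_back ({ (float) px, getOutputPoint (normX) });
    }

    return points;
}
```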

I think that’s all, thank you!

The path is converted to a series of scanlines using an EdgeTable, meaning it is calculated on the CPU. That set of scanlines is then rendered into an image (a block of pixel data), which is handed to the GL context as a texture.

Thanks @jrlanglois! That makes sense. That also makes me think it’s important to add only as many points to the Path as are necessary, so as not to do unnecessary CPU work every frame.

I’ll see if I can dig into the source to find an answer to question (2) up there.