OpenGL auto-detection


#1

I’m calling OpenGLContext::attachTo, which gives some Windows users a blank component (presumably those with old drivers/computers).

What’s the best way to detect whether the OpenGL renderer is going to work? Is there one?


#2

just going to copy and paste my initialise function from PrettyScope, peruse it and see if you can cherry-pick some useful things. Halfway down there’s some code about getting the GL version. BTW Jim, y u not ask me on skype! :open_mouth:

void MainContentComponent::initialise()
{
	ScopedLock lock(openGLCriticalSection);

	typedef juce::String String;

	const String s_header("---------------------------------------------------------\n");

	// Log various info for forensic purposes
	{
		using namespace jura;
		clearLogFile();
		WriteLog(s_header);
		WriteLog("PrettyScope "+juce::String(JucePlugin_VersionString)+" (built on " __DATE__ " at " __TIME__ ")\n");
		WriteLog(s_header);

		const juce::Time currentTime = juce::Time::getCurrentTime();

		WriteLog("\n\tInitialization started at " + currentTime.toString(true, true) + "\n");

		typedef juce::SystemStats SysStats;

		//WriteLog("\tCPU: " + SysStats::getCpuModel() + "\n"); // JUCE 5
		WriteLog("\tCPU: " + SysStats::getCpuVendor()
			+ " @ " + String(1e-3f * SysStats::getCpuSpeedInMegaherz()) + "GHz - " + String(SysStats::getNumCpus()) + " cores, "
			+ String(1e-3f * SysStats::getMemorySizeInMegabytes()) + "Gb RAM\n");

		WriteLog("\tOS:  " + SysStats::getOperatingSystemName()
			+ " " + (SysStats::isOperatingSystem64Bit() ? "64" : "32") + "bit\n");

#if JUCE_DEBUG
		WriteLog("GL context: " + String(OpenGLHelpers::isContextActive() ? "current" : "not current") + "\n");

		//openGLContext
#endif // JUCE_DEBUG
	}

	// {Lorcan} BUG we sometimes get crashes here, probably due to something performing GL commands before context was initialized
#if GL_LOADER_GLAD
	if (!gladLoadGL())
	{
		WriteLog("ERROR: GL loader failed to initialize!\n");
		failedToStart = true;
		return;
	}
#else
	if (glewInit() != GLEW_OK)
	{
		WriteLog("ERROR: GLEW failed to initialize!\n");
		failedToStart = true;
		return;
	}
#endif // GL_LOADER_GLAD

	const char* version = reinterpret_cast<const char*>(glGetString(GL_VERSION));

	if ((nullptr == version) || (version[0] < '3'))
	{
		// A null version string means no context is current, so treat that as a failure too
		const char* versionx = (nullptr != version) ? version : "null";
		WriteLog("ERROR: OpenGL versions under 3.0 are not supported! (OpenGL version " + String(versionx) + " detected.)\n");
		failedToStart = true;
		return;
	}

	GLint majorVersion = 0, minorVersion = 0;
	glGetIntegerv(GL_MAJOR_VERSION, &majorVersion); WriteDebugLog(200);
	glGetIntegerv(GL_MINOR_VERSION, &minorVersion); WriteDebugLog(201);
	openGLVersion = majorVersion + 0.1f * minorVersion; WriteDebugLog(300);
	//WriteLog("OpenGL version " + String(openGLVersion) + " detected\n"); WriteDebugLog(202);
	{
		WriteLog(s_header);
		WriteLog("OPENGL adapter details\n");
		WriteLog(s_header);

		const char* sGLSL = reinterpret_cast<const char*>(glGetString(GL_SHADING_LANGUAGE_VERSION));
		const char* sVendor = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
		const char* sRenderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));

		WriteLog("\tVersion:      " + juce::String(version) + "\n");
		WriteLog("\tGLSL:         " + juce::String(sGLSL ? sGLSL : "?") + "\n");
		WriteLog("\tVendor:       " + juce::String(sVendor ? sVendor : "?") + "\n");
		WriteLog("\tRenderer:     " + juce::String(sRenderer ? sRenderer : "?") + "\n\n");

		GLint lineWidths[2] = { -1, -1 };
		GLint pointWidths[2] = { -1, -1 };

		glGetIntegerv(GL_ALIASED_LINE_WIDTH_RANGE, lineWidths);
		glGetIntegerv(GL_ALIASED_POINT_SIZE_RANGE, pointWidths);

		WriteLog("\tLine range:   " + juce::String(lineWidths[0]) + " .." + juce::String(lineWidths[1]) + "\n");
		WriteLog("\tPoint range:  " + juce::String(pointWidths[0]) + " .." + juce::String(pointWidths[1]) + "\n");
	}
	//glDebugMessageCallback(&debugCallback, nullptr);

	openGLContext.setContinuousRepainting(true); WriteDebugLog(203);
	openGLContext.setSwapInterval(openglOscilloscope.framerate); WriteDebugLog(204);
	openglOscilloscope.startup(); WriteDebugLog(205);

	basicShader = new OpenGLShaderProgram(openGLContext); WriteDebugLog(206);
	basicShader->addVertexShader(translateVertexShaderToV3(basicVertShaderString)); WriteDebugLog(207);
	basicShader->addFragmentShader(translateFragmentShaderToV3(textureFragShaderString)); WriteDebugLog(208);
	basicShader->link(); WriteDebugLog(209);
	combineShader = new OpenGLShaderProgram(openGLContext); WriteDebugLog(210);
	combineShader->addVertexShader(translateVertexShaderToV3(basicVertShaderString)); WriteDebugLog(211);
	combineShader->addFragmentShader(translateFragmentShaderToV3(combineFragShaderString)); WriteDebugLog(212);
	combineShader->link(); WriteDebugLog(213);
	colormapShader = new OpenGLShaderProgram(openGLContext); WriteDebugLog(214);
	colormapShader->addVertexShader(translateVertexShaderToV3(basicVertShaderString)); WriteDebugLog(215);
	colormapShader->addFragmentShader(translateFragmentShaderToV3(colormapFragShaderString)); WriteDebugLog(216);
	colormapShader->link(); WriteDebugLog(217);

	//glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); // debug

	float vertices[] = // position (x, y), texture coordinates(u, v)
	{
		-1, -1,  0,  0,
		-1,  1,  0,  1,
		 1, -1,  1,  0,
		 1,  1,  1,  1,
	};
	glGenBuffers(1, &fullScreenQuadBufferID); WriteDebugLog(217);
	glBindBuffer(GL_ARRAY_BUFFER, fullScreenQuadBufferID); WriteDebugLog(218);
	glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); WriteDebugLog(219);
	glBindBuffer(GL_ARRAY_BUFFER, 0); WriteDebugLog(210);

	glGenFramebuffers(1, &frameBufferID); WriteDebugLog(211);
	glGenTextures(TEXTURE_COUNT, textureIDs); glErrorCheck(); WriteDebugLog(212);
	updateTextureDimensions(); WriteDebugLog(213);
}
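
For reference, the version gate halfway down that function boils down to something like this (a minimal sketch: it assumes a GL context is already current on the calling thread, e.g. inside newOpenGLContextCreated()):

    // Returns true if the current context reports OpenGL 3.0 or later.
    // glGetString returns null when no context is current, so treat that as "no".
    bool isOpenGL3OrLater()
    {
        const char* version = reinterpret_cast<const char*> (glGetString (GL_VERSION));
        return version != nullptr && version[0] >= '3';
    }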

#3

Cheers Elan - I guess I’m asking:

  • What are the minimum OpenGL requirements for JUCE’s OpenGLContext::attachTo(…) to work?
  • How can I detect this before enabling it?

We get a massive performance boost when we enable it, so I’m keen to have it on where possible. It’s mainly a problem on Windows, it seems - Macs have rather more consistent hardware and drivers.


#4

The defaults are OpenGL 2 on desktop and OpenGL ES 2.0 on mobile. If you want to override them, you can call:

    context.setOpenGLVersionRequired (OpenGLContext::openGL3_2);

On failure, JUCE will fall back to an earlier version of OpenGL.

See https://www.khronos.org/opengl/wiki/Get_Context_Info
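
In practice that looks something like this (a rough sketch: myComponent is whatever component you render into, and checking what you actually got has to happen on the GL thread once the context is active):

    juce::OpenGLContext context;
    context.setOpenGLVersionRequired (juce::OpenGLContext::openGL3_2);
    context.attachTo (myComponent);

    // Later, on the GL thread (e.g. in OpenGLRenderer::newOpenGLContextCreated()),
    // query the version the driver actually gave you:
    const char* version = reinterpret_cast<const char*> (glGetString (GL_VERSION));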


#5

This is roughly what Elan was saying.

But don’t I have to have a context already to do this? So I’m guessing I create a context, make the version query, check the version and then open my main window somehow… arrrgh :slight_smile: I’ve temporarily made a version with a command-line parameter to disable OpenGL, but this isn’t a long-term solution. Will investigate.
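
One way to structure that dance (a rough sketch, all names hypothetical): attach a context to a tiny throwaway component first, capture the version string in the renderer callback, then detach and decide whether to enable GL on the real window. Note that JUCE only creates the context once the component is actually showing, so the probe may need to sit on the desktop briefly.

    // Hypothetical probe: attaches its own context, records the GL version
    // once the context comes up, and detaches again in its destructor.
    struct GLProbe : public juce::Component,
                     public juce::OpenGLRenderer
    {
        GLProbe()
        {
            setSize (1, 1);
            context.setRenderer (this);
            context.attachTo (*this);
        }

        ~GLProbe() override { context.detach(); }

        void newOpenGLContextCreated() override
        {
            // Runs on the GL thread; real code should synchronise access to this.
            if (auto* v = glGetString (GL_VERSION))
                versionString = reinterpret_cast<const char*> (v);
        }

        void renderOpenGL() override {}
        void openGLContextClosing() override {}

        juce::OpenGLContext context;
        juce::String versionString; // stays empty if the context never initialised

        JUCE_DECLARE_NON_COPYABLE (GLProbe)
    };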


#6

Hey @jimc did you ever find a solution to detecting OpenGL prior to enabling it?

Using the line that @Elan_Hickler had recommended:

const char* version = reinterpret_cast<const char*>(glGetString(GL_VERSION));

seems to always return NULL for me, even on machines where OpenGL is definitely available.

Any thoughts would be helpful. Thanks. :slight_smile:

EDIT: this data does appear to be available outside of my editor constructor, once the openGLContext has been attached.

EDIT #2: So back to the original question: how does one determine whether OpenGL exists on the system without first attaching an OpenGLContext? It doesn’t seem possible at the moment…


#7

Yeah, looks like you need a context. You don’t have to attach it using JUCE, though. Presumably the example here is the shortest possible implementation: https://stackoverflow.com/questions/12184506/why-does-glgetstringgl-version-return-null-zero-instead-of-the-opengl-versio

I think there are other calls that return version numbers rather than a string too…?
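
For reference, on Windows that approach boils down to something like this (a rough sketch of the stackoverflow answer, error handling mostly omitted; link against opengl32.lib):

    #include <windows.h>
    #include <GL/gl.h>
    #include <cstdio>

    // Creates a throwaway window and context, reads the version, tears it all down.
    bool probeOpenGLVersion (int& major, int& minor)
    {
        // 1x1 hidden window using the built-in STATIC class, so no
        // window class registration is needed just for this.
        HWND wnd = CreateWindowA ("STATIC", "", WS_POPUP, 0, 0, 1, 1,
                                  nullptr, nullptr, nullptr, nullptr);
        HDC dc = GetDC (wnd);

        PIXELFORMATDESCRIPTOR pfd = {};
        pfd.nSize      = sizeof (pfd);
        pfd.nVersion   = 1;
        pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        SetPixelFormat (dc, ChoosePixelFormat (dc, &pfd), &pfd);

        HGLRC rc = wglCreateContext (dc);
        bool ok = false;

        if (rc != nullptr && wglMakeCurrent (dc, rc))
        {
            // glGetString (GL_VERSION) is valid on every context version.
            if (auto* s = reinterpret_cast<const char*> (glGetString (GL_VERSION)))
                ok = (std::sscanf (s, "%d.%d", &major, &minor) == 2);

            // The integer queries glGetIntegerv (GL_MAJOR_VERSION) and
            // (GL_MINOR_VERSION) also exist, but only on GL 3.0+ contexts -
            // a 2.x context just raises GL_INVALID_ENUM - so the string is
            // the safer probe here.

            wglMakeCurrent (nullptr, nullptr);
        }

        if (rc != nullptr)
            wglDeleteContext (rc);

        ReleaseDC (wnd, dc);
        DestroyWindow (wnd);
        return ok;
    }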


#8

Cool, thanks for the reply. I’ll try a few things today. Thanks for the insight. :slight_smile: