Threadpool keeps getting stuck after first tasks finish

Can somebody tell me what I’m doing wrong?

I just bought a beast of a PC with 16 cores, and am thoroughly disappointed to find that ThreadPool is not working as it should. By default it sets itself up with 32 threads. I have 1175 files to be processed in total, so I create a bunch of jobs and send them to the pool. Sometimes I create 30 jobs, other times I try lower numbers such as 20 or 8, or even just 1 at a time. But I always get the same problem: there is a significant pause between the end of one job and the start of the next.

It instantly finishes the entire batch of jobs in less than a second, then sits for several seconds before restarting a single job. Then it waits several more seconds before starting one more. But this only continues if I add fewer than 30 jobs to the pool; if I add exactly 30, it stops completely once those 30 jobs have finished.

I have tried both commenting out code and setting breakpoints to see what is going on. The jobs definitely make it all the way to the end without throwing any exceptions, and they return a status of “jobNeedsRunningAgain” when they finish. But after this point, the jobs strangely still claim to be running (if I check with the isRunning() method) even though the runJob() method has clearly finished.

void IAlgorithm::run() {
	ThreadControl = new ThreadPool();
	Logger::outputDebugString("Created " + String(ThreadControl->getNumThreads()) + " threads");

	TotalThingsToDo = static_cast<int>(AllRecords.size());
	TotalThingsDone = 0;
	CurrentRecord = AllRecords.begin();

	// Add jobs to the pool (false = the pool does not delete them)
	for (auto i = AllThreadJobs.begin(); i != AllThreadJobs.end(); ++i)
		ThreadControl->addJob(*i, false);

	// Wait for the jobs to finish
	while (ThreadControl->getNumJobs() > 0) {
		if (threadShouldExit()) {
			ThreadControl->removeAllJobs(true, 2000, nullptr);
			return;
		}
		wait(50); // avoid burning a core while polling
	}
}
File IAlgorithm::GetNextFile() {
	Logger::outputDebugString("Trying to get next file");
	std::lock_guard<std::recursive_mutex> e{ DataMutex };
	setProgress(TotalThingsDone / (double)TotalThingsToDo);

	if (CurrentRecord == AllRecords.end()) {
		Logger::outputDebugString("Got empty file");
		return File();
	} else {
		Logger::outputDebugString("Got next file");
		return (CurrentRecord++)->second->GetFile(); // advance, so each call hands out a different file
	}
}
ThreadPoolJob::JobStatus IAlgorithmJob::runJob() { // 'override' belongs on the declaration inside the class, not here
	AudioFile = OwnerAlgorithm->GetNextFile();
	if (AudioFile.existsAsFile() && !shouldExit()) {
		Logger::outputDebugString("Processing " + AudioFile.getFileName());
		// ... process AudioFile ...
		return JobStatus::jobNeedsRunningAgain;
	}
	return JobStatus::jobHasFinished;
}

There’s not really any point in using jobs like that - what you’ve got there could more easily be done by just creating a number of threads that all ask for the next file and process it.

The way the jobs are intended to be used is that e.g. if you have a big list of files, you’d create a job object per file. The pool would then work its way through the list and you’d measure progress by the number of remaining jobs in the pool.
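To make the idea concrete, here's a rough sketch in plain standard C++ (deliberately not the JUCE API — the class and function names are illustrative): one job per file goes into a queue, a fixed set of workers drains it, and progress is simply how many jobs remain.

```cpp
#include <atomic>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Minimal stand-in for the pattern described above. Not JUCE's ThreadPool;
// just enough to show "one job per file, pool drains the list".
class SimplePool {
public:
	void addJob(std::function<void()> job) {
		std::lock_guard<std::mutex> lock(mutex_);
		jobs_.push(std::move(job));
	}

	// Progress can be read off as the number of jobs still queued.
	std::size_t numJobsRemaining() {
		std::lock_guard<std::mutex> lock(mutex_);
		return jobs_.size();
	}

	// Run all queued jobs on numThreads workers and block until done.
	void runAll(unsigned numThreads) {
		std::vector<std::thread> workers;
		for (unsigned i = 0; i < numThreads; ++i)
			workers.emplace_back([this] {
				for (;;) {
					std::function<void()> job;
					{
						std::lock_guard<std::mutex> lock(mutex_);
						if (jobs_.empty())
							return; // queue drained: worker exits
						job = std::move(jobs_.front());
						jobs_.pop();
					}
					job();
				}
			});
		for (auto& w : workers)
			w.join();
	}

private:
	std::mutex mutex_;
	std::queue<std::function<void()>> jobs_;
};

std::atomic<int> filesProcessed{0};

// One job per "file"; returns how many files were processed.
int processAllFiles(const std::vector<std::string>& files) {
	SimplePool pool;
	for (const auto& f : files)
		pool.addJob([f] { (void)f; ++filesProcessed; }); // real per-file work goes here
	pool.runAll(4);
	return filesProcessed.load();
}
```

The point is that each job owns nothing long-lived: the queue shrinks as work completes, so the remaining-job count doubles as a progress indicator.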

OK thanks, I’ll give that some thought. My biggest concern though is that the class that operates on a specific file requires several chunks of memory that are not particularly huge, but add up. If I create 1000 of these objects, I might use a lot of resources.

Incidentally, I’ve been tinkering with the code, and decided to try putting each thread into a loop that asks for a new file on each iteration instead of finishing the thread and waiting to be restarted. Additionally, I tried commenting out a bunch of lines to see if that made a difference. Now everything is running super fast. So I’ve just gotta use the process of elimination to figure out what the troublesome code was. My best guess is that it is a concurrency issue of some kind. Perhaps my database isn’t configured properly for multi-threading or something.
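For what it's worth, the loop-per-thread approach described above might be sketched like this in plain standard C++ (names and details are assumptions, not the actual code): each worker claims the next index with an atomic counter instead of finishing and waiting to be restarted.

```cpp
#include <atomic>
#include <cstddef>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

std::vector<std::string> results;
std::mutex resultsMutex;

// Each worker loops, claiming the next file index until the list runs out.
void processFilesWithWorkers(const std::vector<std::string>& files, unsigned numWorkers) {
	std::atomic<std::size_t> next{0};
	std::vector<std::thread> workers;
	for (unsigned w = 0; w < numWorkers; ++w)
		workers.emplace_back([&] {
			for (;;) {
				const std::size_t i = next.fetch_add(1); // claim the next file
				if (i >= files.size())
					return; // nothing left: this worker exits its loop
				std::string processed = files[i] + ":done"; // stand-in for real work
				std::lock_guard<std::mutex> lock(resultsMutex);
				results.push_back(std::move(processed));
			}
		});
	for (auto& t : workers)
		t.join();
}
```

There's no restart gap here because a worker goes straight from one file to the next without returning control to any scheduler.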

Sure, but any resources that a job needs to run can be allocated locally on the stack in its runJob() method, you don’t need them to be member variables.
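A tiny sketch of that point (illustrative names, not the original classes): the scratch memory only exists while the work function is actually running, so a thousand queued jobs don't each hold a buffer.

```cpp
#include <cstddef>
#include <vector>

// The buffer is allocated when the job runs and freed when it returns,
// rather than living as a member for the lifetime of the job object.
int processOne(int fileLength) {
	std::vector<float> scratch(static_cast<std::size_t>(fileLength), 0.0f);
	// ... fill and use scratch here ...
	return static_cast<int>(scratch.size());
}
```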

OK, thanks. I used to work in C# and had massive issues there with memory management. I suppose I brought that mentality with me to C++ and have still been trying to avoid the nightmares that come with the garbage collector.