ListenerList Listeners Getting Deleted

I’m working on a project using Foleys Video Engine by @daniel, which is totally awesome.
I’ve run into an issue when running two video clips at the same time.
Inside the VideoEditor demo, in the AVClip class, there is a ListenerList whose listeners are getting deleted after about half a second of the clips playing successfully.

Inside the ListenerList file there is a warning about this, which says to try calling callChecked() instead of call(). I built my BailOutChecker class with a shouldBailOut(), but every time I run it I get error C2228: left of ‘.shouldBailOut’ must have class/struct/union. But I’m calling it from a class???

Anyway, thank you @daniel for the awesome video engine. I apologize for how annoying this question is. I’ve been debugging for the last 18 hours and am not thinking clearly.

Any ideas or pointers from anyone would be super helpful.


Hi Dano,
Thanks for trying out the video engine. I must admit I don’t understand the scenario yet.

Do you mean the engine deletes the listeners, or that you are trying to delete the listeners?

The error in your BailOutChecker sounds to me like the checker itself went out of scope. I haven’t used the BailOutChecker myself; I’ve seen it in the docs, but never needed it.

BTW, I’ve been working a lot in the develop branch over the last few weeks; I think it runs much more smoothly. Maybe I should turn it into a release on the master branch at some point.

Let me know if the problem persists in the develop branch too.


1 Like

Thanks for the reply! I will check out the develop branch and report back.

Awesome! Thank you! Just switching to the develop branch solved all my problems. The updates are spectacular!
I’m currently in the process of porting some of this over to work on Android and IOS. I’ll let you know what I come up with.


Alright! The video engine is working on Android! Getting FFMPEG to work on Android is always a bit of a hair pull but it’s working. I could make a video walking through all the steps if that would be beneficial to you. Only two problems now:

  1. The color of the video is weird. It looks like it’s missing the red content. It’s consistently this way though so it shouldn’t be too hard to fix.

  2. The video playback is a bit choppy. It’s this way for me on Windows and Mac most of the time as well.

Any pointers you can give me on fixing these two issues would be spectacular.


I didn’t test it, but I think adding the proper internal pixel format for each platform should do the trick.
Let me know if that fixes the colours on Android.

For the playback, unfortunately the drawing is happening on the message thread. That means if the message thread is busy, the drawing of the video suffers as well.

I am constantly trying to improve performance. If you spot some bottleneck let me know.

A video would be great! I want to invite you to post it here as well :slight_smile:

1 Like

Awesome! I’ll give that a shot and report back!

Yes sir, you got it! Once I get these small hiccups cleaned up I’ll put a nice little post together :slight_smile:

Thank you! I’m not quite sure what you wanted me to do with the juceInternalFormat variable. Maybe I’m just being a total noob. To fix the colors on Android I just changed the output format of the (FFmpegVideoScaler) scaler.setupScaler (in foleys_ffmpegreader) from AV_PIX_FMT_BGR0 to AV_PIX_FMT_RGB0. That seemed to do the trick. So that problem is solved :+1:

To get buttery smooth playback, do you think video compression would help? I wasn’t able to find any compression-related classes within the library, but I just want to make sure I’m not missing it.
Also, if there are any classes within the library you could point me to that can lower the resolution of the video, that would be awesome!


If you pull the latest version you get the same fix.

git pull origin develop

Instead of changing the variable in the code, I added a static const variable that is initialised depending on the platform you are building on.
Your change fixed it for Android but broke it for every other platform.
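For anyone reading along, a sketch of what such a platform-dependent constant might look like (hypothetical: the actual definition in the engine may differ — juceInternalFormat is the variable name mentioned above, JUCE_ANDROID is JUCE’s platform macro, and the AV_PIX_FMT_* constants come from FFmpeg’s libavutil/pixfmt.h):

```cpp
// Sketch only: choose the scaler output format per platform, since
// Android GL textures expect RGBA byte order while JUCE's desktop
// image format uses BGRA.
#if JUCE_ANDROID
 static const AVPixelFormat juceInternalFormat = AV_PIX_FMT_RGB0;
#else
 static const AVPixelFormat juceInternalFormat = AV_PIX_FMT_BGR0;
#endif
```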

The video frames are read into a foleys::VideoFifo. You can set a resolution there, but I am not sure if I respect that at that point. That would be a feature request.

This would be the point where you could scale down the image already when reading:
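A hypothetical sketch of that idea — the exact setupScaler signature isn’t shown in this thread, but sws_scale-based scalers take source and destination dimensions, so halving the output size while reading would look roughly like this:

```cpp
// Sketch: ask the scaler for a half-resolution destination when reading.
// sourceWidth / sourceHeight / sourcePixelFormat are whatever the decoder
// reports for the stream; all names here are illustrative.
scaler.setupScaler (sourceWidth,     sourceHeight,     sourcePixelFormat,
                    sourceWidth / 2, sourceHeight / 2, AV_PIX_FMT_BGR0);
```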

Good luck

1 Like

Ah, Got it! Thank you!

I apologize if this is a stupid question but if I may; is there a reason the playback drawing has to happen on the message thread?

I wish there wasn’t. But the JUCE drawing system (just like many others) allows drawing only from the message thread.

If there was an alternative I would be eager to learn.

There is an experimental setting that will render on a bespoke OpenGLRenderer instead. Here the rendering happens on a background thread, but it renders into a frame buffer that will then have to be swapped on the message thread.
My OpenGL drawing code is “a bit” buggy though :wink:

This is the setting:
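As confirmed further down the thread, the flag is FOLEYS_USE_OPENGL; it can be set in the Projucer’s preprocessor definitions or before the module headers are included:

```cpp
// Enable the experimental OpenGL renderer for the video view.
#define FOLEYS_USE_OPENGL 1
```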

1 Like

Awesome! Thank you! I’ll do some extensive experimentation and research and report back :slight_smile:

Hey! One interesting thing I found: when I enable OpenGL in the Projucer and set FOLEYS_USE_OPENGL to 1, the playback is smooth but the scaling is funny. But then, if I uncheck the enable OpenGL option in the view menu, the scaling is perfect AND the playback is still smooth.
Just throwing that out there :slight_smile:


Alright! I reworked a bit of the OpenGL drawing code and videos now play back perfectly and super smoothly. I can play about 3 720p videos on an old Android phone at the same time and all is well.
For my application I need to be able to play about 16 videos at the same time. To do this I assume I’ll need to render each clip on its own OpenGLRenderer. I’m trying to figure out the best way to go about this. As it is right now the OpenGLRenderer is attached to the single OpenGLContext (which is attached to the single OpenGLView) which is rendering all of the videos. Would it be the correct approach to try and attach an OpenGLContext to each AVClip? Or would it be better to try and split it off to other threads somewhere down the line closer to the actual draw call?


You can further optimize OpenGL rendering by doing the YUV to RGB conversion in a fragment shader - YUV420P is usually what the FFmpeg video decoder outputs.

Here is an example: Decoding H264 and YUV420P playback

1 Like

That is a nice idea. Thanks!

I wouldn’t create multiple SoftwareView components. Instead I would create a foleys::ComposedClip and add the individual clips there.
In my additions module I use this to test multiple cameras streaming at once. I set up the scene like this:

    // in the MainComponent constructor:
    addAndMakeVisible (view);

    editClip = std::make_shared<foleys::ComposedClip> (videoEngine);
    view.setClip (editClip);
    transport.setSource (editClip.get());
    transport.addChangeListener (this);

    int numCameras = cameraManager.getCameraNames().size();

    if (numCameras > 0) addCamera (0, 49.0, -0.25, -0.25);
    if (numCameras > 1) addCamera (1, 49.0,  0.25, -0.25);
    if (numCameras > 2) addCamera (2, 49.0,  0.25,  0.25);
    if (numCameras > 3) addCamera (3, 49.0, -0.25,  0.25);

    setAudioChannels (0, 2);
    setSize (600, 400);

    // in the destructor:
    transport.removeChangeListener (this);
    transport.setSource (nullptr);


    void MainComponent::addCamera (int index, float zoom, float posX, float posY)
    {
        auto cameraClip = cameraManager.createCameraClip (index);
        foleys::ComposedClip::ClipPosition position;
        position.length = 3600.0;
        auto descriptor = editClip->addClip (cameraClip, position);

        descriptor->getVideoParameterController().getParameters()["zoom"]->setRealValue (zoom);
        descriptor->getVideoParameterController().getParameters()["translateX"]->setRealValue (posX);
        descriptor->getVideoParameterController().getParameters()["translateY"]->setRealValue (posY);
    }

Ignore the CameraManager and just use your MovieClips or whatever you want to add…

Also worth mentioning that SoftwareView will also attach an OpenGLContext to the view component. The difference is that it will call

void render (juce::GraphicsContext& g, ...)

instead of

void render (juce::OpenGLContext& g, ...);

The placement in the experimental OpenGLContext version does not work properly; some axis is wrong. I haven’t managed to fix that yet, since my OpenGL knowledge is rusty.

I think creating multiple OpenGLContexts might be problematic especially on mobile devices.

1 Like

Thanks guys! I’ll give this a whirl :slight_smile:

Hey! Thanks for the reply! I’m trying to implement this, but there seems to be nothing built into JUCE for dealing with the YUV image format. For example, my current setup decodes and converts straight into the juce::Image and juce::OpenGLTexture classes. Unfortunately these classes can really only deal with RGB and RGBA. Is there anything you know of to help deal with YUV?
I just want to make sure there’s no better way to do this before I start doing it all manually.

Also, just curious how much more performance you think I could get doing this?


Yes, you will need to patch Foleys Video Engine, as it doesn’t provide access to the decoded video AVFrame.

Sorry I don’t have any examples.

1 Like