ListenerList Listeners Getting Deleted

All good, just making sure! Thanks!

Alright! I’ve successfully added the feature to skip the CPU conversion and convert the YUV420P in the fragment shader, as @perob suggested. I can now easily turn hardwareAcceleration on and off for CPU usage comparison. The only issue I’m having now, I believe, is in VideoFifo::findFramePosition.
The video looks super schizophrenic, for lack of a better word.
Each individual frame looks perfect; it’s just not playing them in the right order.
It looks like it’s constantly showing a frame late then a frame early etc.
For example it should play frame: 1, 2, 3, 4, 5, 6, 7, 8, 9 etc.
Instead it’s playing frame: 1, 4, 2, 3, 6, 8, 5, 7, 9 etc.
I’m currently debugging it, but it’s a tricky thing to debug because every time I hit a breakpoint it stops the video and asks for the current frame again instead of the next one.
One thing I noticed: if I just return -1 after the first condition fails in VideoFifo::findFramePosition, it actually plays a lot smoother, but still a little funky.
Sorry for such an obnoxious question just wondering if there’s anything you can point me to.

Thanks guys!

From your description, the timestamp calculations have been messed up. Look at the video decoding and VideoFifo methods for clues.

I haven’t actually tested this, but the timestamp calculations in the engine don’t use the av_rescale_q() function, which they should.
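For reference, av_rescale_q() converts a timestamp from one time base to another. The arithmetic it performs boils down to the following (a minimal stand-alone sketch with stand-in types, so it doesn’t need libavutil; real code should include <libavutil/mathematics.h> and call av_rescale_q() itself, which also handles rounding and overflow):

```cpp
#include <cstdint>

// Stand-in for AVRational: a time base expressed as num/den seconds per tick.
struct Rational { int num, den; };

// Rescale timestamp `a` from time base `bq` to time base `cq`:
// result = a * (bq.num * cq.den) / (bq.den * cq.num)
int64_t rescale_q (int64_t a, Rational bq, Rational cq)
{
    int64_t num = int64_t (bq.num) * cq.den;
    int64_t den = int64_t (bq.den) * cq.num;
    return a * num / den;
}
```

For example, a pts of 90000 in the common 1/90000 stream time base rescales to 1000 in a 1/1000 (milliseconds) time base. Doing this with plain integer or float math per-stream is exactly where timestamps tend to go wrong.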

The VideoFifo is actually an annoying construct. I wanted to allow scrubbing later; that’s why findFrame has these forward and reverse search modes. I think I tried to be too clever here. It could be that in your case it screws up; it did for me in the past as well.

The FOLEYS_DEBUG_LOG macro might be of help (Projucer module options).

Alright! It turned out I was just making a total noob mistake when copying the frames to my buffer. I thought I was making a deep copy with memcpy, but it turns out I was wrong. *facepalm*
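For anyone hitting the same thing, the pitfall can be sketched like this (a toy struct, not the real AVFrame): like AVFrame, the struct holds a *pointer* to its pixel data, so memcpy of the struct copies the pointer, not the pixels, and both “copies” end up sharing one buffer. With real AVFrames, av_frame_clone() is the way to get an independent copy.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Toy stand-in for AVFrame: owns nothing, just points at pixel data.
struct Picture { uint8_t* data; size_t size; };

// Shallow copy: duplicates the struct, and therefore only the pointer.
Picture shallowCopy (const Picture& src)
{
    Picture dst;
    std::memcpy (&dst, &src, sizeof (Picture));
    return dst;
}

// Deep copy: duplicates the pixel bytes into storage the caller owns.
Picture deepCopy (const Picture& src, std::vector<uint8_t>& storage)
{
    storage.assign (src.data, src.data + src.size);
    return { storage.data(), storage.size() };
}
```

Once the decoder overwrites the original buffer, the shallow copy changes with it while the deep copy keeps the old pixels, which is exactly the “wrong frame” symptom from earlier in the thread.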

Anyway! This thing is totally cranking with the hardware acceleration. I have the ffmpegReader decoding straight into my AVFrame buffer, which then goes into my OpenGL drawing code and is converted in the fragment shader. Memory usage is about half what it was per video, and performance is a bit better across the board. If you’re running on a real computer this may not be a big deal, but in the mobile world it makes quite a difference. Thanks so much @daniel and @perob for all your suggestions! If you’d like me to share any of this code, just let me know :slight_smile:
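For the curious, the per-pixel math such a YUV420P-to-RGB fragment shader performs can be sketched in C++ like this. The coefficients below are BT.601 “video range” (Y in 16..235, chroma in 16..240), which is an assumption; the correct matrix depends on the source’s colorimetry and range.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct RGB { uint8_t r, g, b; };

// Clamp a float result into the displayable 0..255 range.
static uint8_t clamp255 (float v)
{
    return uint8_t (std::min (255.0f, std::max (0.0f, std::round (v))));
}

// BT.601 video-range YCbCr -> RGB (assumed colorimetry, see note above).
RGB yuvToRgb (uint8_t y, uint8_t u, uint8_t v)
{
    float Y = 1.164f * (float (y) - 16.0f);
    float U = float (u) - 128.0f;
    float V = float (v) - 128.0f;
    return { clamp255 (Y + 1.596f * V),
             clamp255 (Y - 0.392f * U - 0.813f * V),
             clamp255 (Y + 2.017f * U) };
}
```

In a shader the same multiply-adds run per fragment on the GPU, sampling the Y, U and V planes as three textures, which is why skipping the CPU-side sws_scale conversion saves so much.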
