Mirroring Video with openFrameworks

Posted by Jan Vantomme on 19 January 2010.

When you create an installation that uses a webcam to analyse the behaviour of people, you often need to mirror the video before you can use it. In this article I’m going to explain how to do this in openFrameworks.

The first thing you need to do is declare some variables in testApp.h. You’ll need an ofVideoGrabber to capture video from the webcam, an ofTexture to render the mirrored video to the screen, an unsigned char array to store temporary pixel values, and two integers for the width and height of the video to capture. Add this code to testApp.h, right after the standard methods.

ofVideoGrabber vidGrabber;
ofTexture mirrorTexture;
unsigned char * videoMirror;
int camWidth;
int camHeight;

Next up is setting the values of these variables in the setup() method of testApp.cpp. Set the width and height of the video to capture first, set up the grabber, allocate a new unsigned char array for the temporary pixel values, and allocate a texture for the mirrored video. Note that the length of the temporary pixel array is camWidth x camHeight x 3: each pixel takes up three places in the array, one for each of the R, G and B components. Add the code below to setup().

camWidth  = 320;
camHeight = 240;
vidGrabber.initGrabber(camWidth, camHeight);
videoMirror = new unsigned char[camWidth*camHeight*3];
mirrorTexture.allocate(camWidth, camHeight, GL_RGB);

Once the variables are all set up, you’ll need the algorithm to swap pixels. This is a bit tricky because you need to swap them in blocks of three values. You can’t just reverse the pixel array, as the R, G and B components of each pixel need to stay in order. If you don’t do this, the colors of the mirrored video will look weird. This is the code to add to the update() method.

vidGrabber.grabFrame();
if (vidGrabber.isFrameNew()) {
    unsigned char * pixels = vidGrabber.getPixels();
    for (int i = 0; i < camHeight; i++) {
        for (int j = 0; j < camWidth*3; j += 3) {
            // pixel number
            int pix1 = (i*camWidth*3) + j;
            int pix2 = (i*camWidth*3) + (j+1);
            int pix3 = (i*camWidth*3) + (j+2);
            // mirror pixel number
            int mir1 = (i*camWidth*3) + (camWidth*3 - j - 3);
            int mir2 = (i*camWidth*3) + (camWidth*3 - j - 2);
            int mir3 = (i*camWidth*3) + (camWidth*3 - j - 1);
            // swap pixels
            videoMirror[pix1] = pixels[mir1];
            videoMirror[pix2] = pixels[mir2];
            videoMirror[pix3] = pixels[mir3];
        }
    }
    mirrorTexture.loadData(videoMirror, camWidth, camHeight, GL_RGB);
}

Drawing the video to the screen is easy. You do need to set the drawing color to solid white before drawing, otherwise the texture will be tinted in the color that was last set. Add this code to draw() in testApp.cpp.

ofSetColor(255, 255, 255);
mirrorTexture.draw(0, 0);
Now build that app and it should look somewhat like the picture below.

A screengrab of the mirrored video project.

So now you know how to mirror an RGB video. This is useful for tracking colors. Next article will be about mirroring grayscale video, which is a lot faster and good if you only need to track motion.



Oldskool Comments (7)


From: Theodore Watson
Date: 19.01.2010

Nice tutorial! Another, easier way to do it is to use the ofxOpenCv addon. Both ofxCvColorImage and ofxCvGrayscaleImage have a mirror(bool flipHorizontal, bool flipVertical) method, so you can stick the pixels in a cv image, flip them and get them back. It’s nice to know how to do it by hand though; I made my students learn it by hand before I told them about the mirror function :) Hardest yet is a 90/270 degree rotate, especially with an RGB array.


From: Jan Vantomme
Date: 19.01.2010

Didn’t know about the mirror function in ofxOpenCv. Wrote this on the train and didn’t have any documentation with me. Was fun to do. Wouldn’t ask my students to write algorithms like this. Most of them think it’s hard enough to draw simple shapes with code.


From: Bart
Date: 16.02.2010

This works as well: glPushMatrix(); ofTranslate(camWidth, camHeight); glScalef(-1, -1, 1); vidGrabber.draw(0, 0); glPopMatrix(); You can also rotate it by exchanging the glScalef with an ofRotate. Problem is that they both work with a pivot situated at (0, 0).


From: xenomuta
Date: 03.03.2010

Both of your examples have enlightened me anyways, but I was getting this done with better performance by drawing images with negative width: image.setFromPixels(vidGrabber.getPixels(), camWidth, camHeight); image.draw(xPos + camWidth, yPos, -camWidth, camHeight);


From: Jan Vantomme
Date: 03.03.2010

I wouldn’t recommend using the algorithm in this article if you only use it to display images. I wrote it just to show how the pixel arrays in openFrameworks work. Algorithms like this can be used to track objects without using OpenCV or as a base for Augmented Reality applications.


From: Chris Hodapp
Date: 12.06.2011

Thank you for this page! I'm new to OpenFrameworks and was trying to complete the simple task of writing some pixels to a texture, then displaying that texture on the screen. Your example pointed out that I just needed to precede my texture.draw(...) call with ofSetColor(255,255,255).


From: Keith
Date: 06.07.2011

Very helpful post (and comments), thanks!
