Mirroring Video with openFrameworks
Posted on 2010-01-19 by Jan Vantomme
Tags: openframeworks, tutorial
When you create an installation that uses a webcam to analyse the behavior of people, you often need to mirror the video before you can use it. In this article I'm going to explain how to do this in openFrameworks.
The first thing you need to do is declare some variables in testApp.h. You'll need an ofVideoGrabber to capture video from the webcam, an ofTexture to render the mirrored video to the screen, an unsigned char array to store temporary pixel values, and two integers for the width and height of the video to capture. Add this code to testApp.h, right after the standard methods.
ofVideoGrabber vidGrabber;
ofTexture mirrorTexture;
unsigned char * videoMirror;
int camWidth;
int camHeight;
Next up is setting the values of these variables in the setup() method of testApp.cpp. Set the width and height of the video to capture first, set up the grabber, allocate a new unsigned char array for the temporary pixel values, and allocate a texture for the mirrored video. Note that the length of the temporary pixel array is camWidth × camHeight × 3: each pixel takes up three places in the array, one for each RGB component. Add the code below to setup().
camWidth = 320;
camHeight = 240;
vidGrabber.setVerbose(true);
vidGrabber.initGrabber(camWidth, camHeight);
videoMirror = new unsigned char[camWidth*camHeight*3];
mirrorTexture.allocate(camWidth, camHeight, GL_RGB);
Once the variables are all set up, you'll need the algorithm to swap pixels. This is a bit tricky because you need to swap them in blocks of three values. You can't simply reverse the whole array, because the R, G and B components within each pixel need to stay in order. If you don't keep them together, the colors of the mirrored video will look wrong.
This is the code to add to the update() method.
ofBackground(0, 0, 0);
vidGrabber.grabFrame();
if (vidGrabber.isFrameNew()) {
    unsigned char * pixels = vidGrabber.getPixels();
    for (int i = 0; i < camHeight; i++) {
        for (int j = 0; j < camWidth*3; j += 3) {
            // pixel number
            int pix1 = (i*camWidth*3) + j;
            int pix2 = (i*camWidth*3) + (j+1);
            int pix3 = (i*camWidth*3) + (j+2);
            // mirror pixel number (same row, counting from the right edge)
            int mir1 = (i*camWidth*3) + (camWidth*3 - j - 3);
            int mir2 = (i*camWidth*3) + (camWidth*3 - j - 2);
            int mir3 = (i*camWidth*3) + (camWidth*3 - j - 1);
            // swap pixels
            videoMirror[pix1] = pixels[mir1];
            videoMirror[pix2] = pixels[mir2];
            videoMirror[pix3] = pixels[mir3];
        }
    }
    mirrorTexture.loadData(videoMirror, camWidth, camHeight, GL_RGB);
}
Drawing the video to the screen is easy. You do need to set the drawing color to solid white before drawing, otherwise the video will be tinted in the color that was last set. Add this code to draw() in testApp.cpp.
ofSetColor(255, 255, 255);
vidGrabber.draw(0, 0);
mirrorTexture.draw(camWidth, 0, camWidth, camHeight);
Now build the app and it should look somewhat like the picture below.
So now you know how to mirror an RGB video. This is useful for tracking colors. The next article will be about mirroring grayscale video, which is a lot faster and a good fit if you only need to track motion.