Qualcomm Augmented Reality - Getting Started

Alex Egg,

Target Recognition

The QCAR libraries can detect a rectangular target and then supply your calling code with a corresponding model view matrix. This in turn allows your app code to render arbitrary graphics positioned relative to the target. QCAR gives you the camera's position and orientation relative to a 2D target, regardless of how that target is oriented. See the example video below.

From the video you can see a teapot floating on the paper surface. The wood-chip pattern on the paper is the target that QCAR detects. QCAR detects the target and passes the orientation of the target (the paper) to the application code, so the app code knows how to position and orient the 3D teapot model. This is the value of QCAR: it gives you the orientation of a target relative to the camera, expressed as the model view matrix.
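
For context, here is a minimal sketch of the per-frame flow in the samples' native rendering code, assuming the QCAR 1.x C++ API (header names and calls follow the ImageTargets sample, so treat them as illustrative):

#include <QCAR/Renderer.h>
#include <QCAR/State.h>
#include <QCAR/Tool.h>
#include <QCAR/Trackable.h>

// Called once per camera frame from the sample's JNI render callback.
void renderFrame()
{
    QCAR::State state = QCAR::Renderer::getInstance().begin();

    for (int i = 0; i < state.getNumActiveTrackables(); i++) {
        const QCAR::Trackable* trackable = state.getActiveTrackable(i);

        // getPose() is the target's position/orientation relative to the
        // camera; convertPose2GLMatrix turns it into an OpenGL-style 4x4
        // modelview matrix.
        QCAR::Matrix44F modelViewMatrix =
            QCAR::Tool::convertPose2GLMatrix(trackable->getPose());

        // ... render your model using modelViewMatrix ...
    }

    QCAR::Renderer::getInstance().end();
}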

Run Qualcomm sample code

This assumes you have an Android Eclipse/SDK/NDK environment all set up.

Download and install the Qualcomm Augmented Reality (QCAR) SDK here: https://ar.qualcomm.com/qdevnet/sdk

We will now build and install the ImageTargets sample project.

First we must build the C++ source. Change to the QCAR SDK directory, go to samples/ImageTargets/jni, and run ndk-build. You will see something like this:

$ ndk-build 

Install        : libImageTargets.so => libs/armeabi/libImageTargets.so
Install        : libQCAR.so => libs/armeabi/libQCAR.so
Install        : libImageTargets.so => libs/armeabi-v7a/libImageTargets.so
Install        : libQCAR.so => libs/armeabi-v7a/libQCAR.so

Now we must build the Android source. This can be done in Eclipse.

Create a new Eclipse project, choose “from existing source”, and select the ImageTargets directory. Now simply build the project in Eclipse. You should be able to test it with your phone; see the example video:

Now, to experiment with the SDK, let’s change the target…

NOTES:

Here are a few notes I’ve collected to help understand the sample code from Qualcomm (it’s mostly OpenGL lingo).

The modelview matrix is used for positioning the target in relation to the camera. Note that it is not an array of vertices for the corners of the target; if the corner vertices are what you want, you have to compute them yourself.

Using that modelview matrix, you can assume that the center of the target is at the world origin (0, 0, 0). So you just need to know the X/Y offsets of the four corners. For these, use the half-width and half-height of the target, which you can get from the ImageTarget::getSize method (you’ll need to cast your Trackable to an ImageTarget first), as sketched below.
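
A minimal sketch, assuming getSize() returns a QCAR::Vec2F of (width, height) as in the 1.x SDK (later versions may differ), and that trackable came from the current State as above:

#include <QCAR/ImageTarget.h>

// Cast the generic Trackable to an ImageTarget to get at its size.
const QCAR::ImageTarget* imageTarget =
    static_cast<const QCAR::ImageTarget*>(trackable);

QCAR::Vec2F targetSize = imageTarget->getSize();
float halfWidth  = targetSize.data[0] / 2.0f;
float halfHeight = targetSize.data[1] / 2.0f;

// Corner vertices in target-local coordinates: the target center is the
// origin and the target lies in the z = 0 plane.
float cornerVertices[] = {
    -halfWidth, -halfHeight, 0.0f,   // bottom-left
     halfWidth, -halfHeight, 0.0f,   // bottom-right
     halfWidth,  halfHeight, 0.0f,   // top-right
    -halfWidth,  halfHeight, 0.0f,   // top-left
};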

Take a look at the vbVertices array in the VirtualButtons sample to understand how to set up the corner vertices for line rendering.

The tracker returns a pose matrix, which defines the orientation of the target in relation to the camera (viewer). We use this as the starting point for our modelview matrix and apply transforms such as translations, rotations, and scales on top of it. The modelview matrix then represents the final position and orientation of the object we want to render in the world, assuming the camera is at the origin of our coordinate system.
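
In the ImageTargets sample this looks roughly like the following. Note that SampleUtils is a helper shipped with the samples (not part of the SDK proper), and kObjectScale is the sample's scale constant:

// Start from the pose the tracker gives us...
QCAR::Matrix44F modelViewMatrix =
    QCAR::Tool::convertPose2GLMatrix(trackable->getPose());

// ...then stack our own transforms on top: lift the teapot off the
// target plane and scale it to a sensible size.
SampleUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScale,
                                 &modelViewMatrix.data[0]);
SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,
                             &modelViewMatrix.data[0]);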

The projection matrix defines the view frustum, which basically takes a 3D volume and maps it to our 2D screen. When we multiply the modelview and projection matrices together we have all the information we need to position our 3D object and map it to the screen in one handy matrix. See the OpenGL transform documentation for more details (section 9.011 in particular): http://www.opengl.org/resources/faq/…formations.htm
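
In the samples, the projection matrix is computed from the camera calibration, and the two matrices are combined with another SampleUtils helper. A sketch (the 2.0/2000.0 near/far planes are the sample's values):

// Computed once, after the camera has been initialized.
const QCAR::CameraCalibration& cameraCalibration =
    QCAR::CameraDevice::getInstance().getCameraCalibration();
QCAR::Matrix44F projectionMatrix =
    QCAR::Tool::getProjectionGL(cameraCalibration, 2.0f, 2000.0f);

// Per frame: modelViewProjection = projection * modelview.
QCAR::Matrix44F modelViewProjection;
SampleUtils::multiplyMatrix(&projectionMatrix.data[0],
                            &modelViewMatrix.data[0],
                            &modelViewProjection.data[0]);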

The vertex shader multiplies each vertex in our model by the modelviewProjection matrix. The rendering pipeline then takes each transformed triangle in our model (assuming we’re rendering triangles) and determines which screen pixels to paint. The fragment shader determines the color. These two shaders together make a shader program. Open CubeShaders.h in the samples to see what these shaders look like.
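
A minimal pair in that style (illustrative, not the exact CubeShaders.h source): the vertex shader applies the combined matrix to each vertex, and the fragment shader paints every pixel a flat color.

static const char* vertexShaderSrc =
    "attribute vec4 vertexPosition;              \n"
    "uniform mat4 modelViewProjectionMatrix;     \n"
    "void main()                                 \n"
    "{                                           \n"
    "    gl_Position = modelViewProjectionMatrix \n"
    "                  * vertexPosition;         \n"
    "}                                           \n";

static const char* fragmentShaderSrc =
    "precision mediump float;                    \n"
    "void main()                                 \n"
    "{                                           \n"
    "    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);\n"  // flat orange
    "}                                           \n";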

glVertexAttribPointer() assigns an array of model data to a variable in our vertex shader. The location of this variable was discovered using the glGetAttribLocation() method.

glEnableVertexAttribArray() simply enables this array, ensuring it will be sent to the shader when glDrawElements() is called.

glUniformMatrix4fv() sends our modelviewProjection matrix to the vertex shader. All of these functions are just ways of mapping data we have in C++ to variables in the shader program.
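
Putting those calls together, the draw step looks roughly like this sketch (shaderProgramID, teapotVertices, teapotIndices, NUM_TEAPOT_OBJECT_INDEX, and modelViewProjection are assumed to be set up as in the sample):

glUseProgram(shaderProgramID);

// Look up the shader variables by name (the samples cache these at init).
GLint vertexHandle =
    glGetAttribLocation(shaderProgramID, "vertexPosition");
GLint mvpMatrixHandle =
    glGetUniformLocation(shaderProgramID, "modelViewProjectionMatrix");

// Point the vertexPosition attribute at our model data and enable it.
glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0,
                      teapotVertices);
glEnableVertexAttribArray(vertexHandle);

// Send the combined modelview-projection matrix to the vertex shader.
glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE,
                   &modelViewProjection.data[0]);

// Draw the model as indexed triangles; the pipeline rasterizes each
// transformed triangle and runs the fragment shader per pixel.
glDrawElements(GL_TRIANGLES, NUM_TEAPOT_OBJECT_INDEX, GL_UNSIGNED_SHORT,
               teapotIndices);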

The appeal of the Qualcomm Augmented Reality libraries is twofold: the advanced target recognition, and the model view matrix it can supply to your program.

Permalink: qcar-getting-started


Last edited by Alex Egg, 2016-10-05 19:11:47