Saturday, August 30, 2014

Getting started

This is the introductory step in the tutorial Collecting 3D shape data using StereoMorph

1. If you do not already have R installed on your computer, begin by installing R. R can be installed on Windows, Linux and Mac OS X.

2. Open R.

3. In R, go to Packages & Data > Package Installer.


4. Find the StereoMorph package binary by typing “StereoMorph” into the Package Search box and clicking Get List.


5. Check the box next to Install Dependencies. This ensures that all the packages that StereoMorph requires to run will be installed as well. Then click Install Selected to install StereoMorph.


6. Download and unzip the StereoMorph Tutorial folder from StereoMorph Tutorial files (~63 MB). This folder contains all the files needed to perform the steps in this tutorial.

7. Change the working directory in R to the StereoMorph Tutorial folder so that we can easily access the files in R. Go to Misc > Change Working Directory....


8. Locate and select the unzipped StereoMorph Tutorial folder and click Open.


9. Load the StereoMorph package into the current R session using the library command.

> library(StereoMorph)

For the rest of this tutorial, text in Courier preceded by a “>” in the style above will be used to indicate commands to be entered into the R console. All of the R commands executed in this tutorial can be found in the ‘Tutorial code.R’ file in the Tutorial folder.

You are now ready to run all of the steps in the tutorial!

Go to the first tutorial step: Creating a checkerboard pattern

Saturday, August 23, 2014

Arranging the Cameras

This is step 4 of 10 in the tutorial Collecting 3D shape data using StereoMorph

This section covers key aspects of arranging cameras in a stereo setup.

Materials needed for this section:
  • 2 cameras (preferably DSLR cameras with minimal distortion lenses)
  • 2 camera remotes
  • 2 sturdy tripods
  • Masking or colored tape

General considerations:

Whether you’re collecting landmarks or curves, the number of cameras you include, the lenses you use and how you position the cameras all depend on the objects from which you want to collect landmark and curve data.

Here are some general principles:
  1. The views among the cameras must overlap. Since 3D reconstruction requires the pixel coordinates of a point in at least two cameras, the point must be visible in at least two camera views.
  2. Theoretically, there is a trade-off between the ease of digitizing and reconstruction accuracy. For instance, if the angle between two camera views in a stereo setup is 90 degrees,
    Two-camera stereo setup with the cameras at 90 degrees relative to one another.
    you will have high reconstruction accuracy (together, the two views give you full information on a point’s position along all three axes); however, the views will be so divergent that it will be difficult to identify the same point in both views. A point visible in one view may not even be visible in the other. If the angle between two cameras is reduced to around 20 degrees,
    Two-camera stereo setup with the cameras at about 20 degrees relative to one another. I've found this works great for landmarks but not well for curves.
    it’s much easier to find the same point in both views (the views are nearly the same); however, the slight differences in position between the views are now the only information available on the point’s position along the depth axis (orthogonal to the image planes). In practice, I’ve found that cameras positioned at a small angle relative to one another still provide high reconstruction accuracy for landmarks but don't work well for curves. It’s best to start with the cameras as close together as possible (more similar views), test the accuracy and make the views more divergent if the accuracy is worse than what you’re willing to accept.
  3. The volume of space visible in both cameras should be large enough to contain the object(s) and landmarks or curves you’ll be digitizing. You might have to flip the object around a couple of times (to get the opposite side, for instance). If not all of the object is visible, you’ll have to digitize sections of the object separately and then assemble the point sets based on overlapping points, which requires more time digitizing.
  4. The cameras must not move at any point during data collection and calibration. The cameras can be calibrated before or after data collection, but throughout and between these steps they must remain in exactly the same position. Because the cameras are often positioned half a meter or more away from the object, a sub-millimeter shift of a camera can translate into a large shift in the image frame, causing rather large inaccuracies.
  5. The focal length (zoom) and focus of the lens must not change at any point during data collection and calibration. The calibration is specific to a particular focal length and focus; thus, the cameras will have to be re-calibrated if either of these changes.
Given all of these caveats, the best course of action is to try out several different camera setups and test the accuracy fully before collecting any data (testing the calibration accuracy is detailed in the “Testing the Calibration Accuracy” section). Although your own setup might differ from the camera setup used in this tutorial, it will at least provide an example of one possible setup and how to accommodate the considerations above.
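The accuracy side of the trade-off in principle 2 can be sketched numerically. The short R example below is purely illustrative (the camera distance and noise level are made up, and this is not StereoMorph code): it triangulates a point from two viewing directions, one of which carries a small angular error, and prints how the reconstruction error changes with the angle between the cameras.

```r
# Intersect two 2D viewing rays: each ray is camera position C plus s * direction d
triangulate <- function(C1, d1, C2, d2) {
  st <- solve(cbind(d1, -d2), C2 - C1)
  C1 + st[1] * d1
}

d <- 0.5       # camera-to-object distance in meters (made up for illustration)
eps <- 0.001   # angular noise in one viewing direction, in radians
rot <- function(a) matrix(c(cos(a), sin(a), -sin(a), cos(a)), 2, 2)

for (theta_deg in c(10, 20, 45, 90)) {
  a <- theta_deg * pi / 180
  # Two cameras on a circle around the true point, which sits at the origin
  C1 <- d * c(cos(pi/2 - a/2), sin(pi/2 - a/2))
  C2 <- d * c(cos(pi/2 + a/2), sin(pi/2 + a/2))
  d1 <- rot(eps) %*% (-C1 / d)   # noisy direction toward the point
  d2 <- -C2 / d                  # exact direction toward the point
  est <- triangulate(C1, d1, C2, d2)
  cat(sprintf("angle %2d deg: error %.4f mm\n", theta_deg, 1000 * sqrt(sum(est^2))))
}
```

The printed error shrinks as the angle increases, which is the accuracy half of the trade-off; the digitizing half (finding the same point in both views) pulls in the opposite direction.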

For this tutorial, the cameras were arranged as shown below on the left, using a tabletop to position the calibration grid and objects being photographed. The tabletop can also be used to position lights around the object, if desired.

In this tutorial, all stereo camera images were taken with Nikon cameras fitted with AF-S DX Nikkor 18-55 mm lenses at a focal length of 55 mm. It is essential that the lenses have minimal distortion; at a focal length of 55 mm, distortion is nearly undetectable. About 50 cm away from the tip of the lens, the checkerboard pattern nearly fills the image frame. This means that during the calibration step, we can fully sample the calibration volume with fewer calibration images.
A sample calibration image from one camera in a stereo camera setup.
If you position a tripod on a smooth surface, such as a table top, put small rubber squares under each tripod foot to keep the tripod from slipping.
A small piece of rubber under a tripod leg can keep it from slipping on a smooth surface.
1. Before calibrating or starting to collect data, attach small pieces of tape to a surface in the calibration space.

Add tape to a fixed surface in the camera view to both box the calibration space and test whether the cameras move during data collection or calibration.
This serves both to remind you where the calibrated volume is when positioning objects and to test whether the cameras have shifted during data collection.

2. Take photos of the tape frame before beginning and after having taken all of the photos. If the images are identical then the cameras have not shifted significantly.

3. Make sure that all connections/screws in the tripod and between the tripod and the camera are tight. This reduces the possibility of any motion of the cameras during data collection.
Ensure tight connections in the tripod and between the tripod and camera.
It’s best to use a remote (wireless or corded, depending on the Nikon model) to release the shutter so that you touch the shutter button on the cameras as little as possible.
Shutter remotes lessen the chances of the cameras moving during data collection.
I’ve found that lightly pressing buttons on the camera (such as for reviewing photos) doesn’t cause significant movement, but the shutter button requires more force, and pressing it repeatedly causes the cameras to move significantly over a series of photographs.

4. If your lens has vibration reduction (VR) or automatic focus, be sure to turn both of these off. Vibration reduction uses a small gyroscope in the lens to compensate for camera motion and thus reduce blur. The spinning and stopping of the gyroscope can cause the image frame to shift randomly while taking photos.

Turn off auto-focus and vibration reduction, if applicable.
5. Set the cameras to the smallest aperture (the largest f-number).
A smaller aperture is ideal because it increases the depth of field; note that unless you add lighting, a smaller aperture also requires a longer exposure time.
The smaller the aperture, the greater the depth of field (i.e. the more things are in focus both close to and far away from the camera). This is essential in a stereo camera setup because points must be in focus throughout the calibration volume to be digitized accurately.

In the next step, we’ll photograph the checkerboard we made in step 1 in different positions and orientations within the stereo camera setup and use these images to calibrate the cameras.

Arranging cameras for curve reconstruction

Collecting curve data using a stereo camera setup requires an extra consideration when arranging the cameras. Landmarks digitized in two different camera views are reconstructed in StereoMorph under the assumption that the same point in 3D space is digitized in both views. In principle, curves can be reconstructed in the same way, by breaking the curve down into a series of landmarks.

There is one complication to this, however. A point halfway along the curve in one camera view is not necessarily the same point as a point halfway along the curve in another camera view. This is due to the perspective effect of lenses. The depth of a 3D curve in one view dictates how it is projected into that image plane. Since the depth of a curve will differ depending on the perspective from which it is viewed, the same curve will be projected differently into different views.
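This perspective effect is easy to demonstrate with a toy calculation (the coordinates below are made up for illustration): project the 3D midpoint of a segment that recedes in depth, and compare it with the midpoint of the projected endpoints.

```r
# Pinhole projection: image coordinates are (X/Z, Y/Z)
proj <- function(P) P[1:2] / P[3]

P0 <- c(-1, 0, 2)   # near end of a segment (x, y, depth)
P1 <- c( 1, 0, 6)   # far end, deeper in the scene

proj((P0 + P1) / 2)          # projection of the true 3D midpoint: c(0, 0)
(proj(P0) + proj(P1)) / 2    # midpoint of the projected endpoints: roughly c(-0.167, 0)
```

The two image points differ, so “halfway along the curve” in an image is not in general halfway along the curve in 3D.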

The same curve digitized in two different camera views. Although not immediately obvious, a point halfway along the curve in one view is not necessarily the same point in 3D space as a point halfway along the curve in another.

When the two cameras are calibrated, however, it’s possible to use the calibration to identify corresponding points on the same curve in two camera views and reconstruct these corresponding points just as with landmarks. This is done using epipolar geometry.

The basis of epipolar geometry is that any point in one camera view must fall along a line in another camera view. This line is the epipolar line. The intersection of the epipolar line and the curve in the second view can be used to find the corresponding point on the second curve.
Demonstration of epipolar geometry. The point, in camera view 1 (indicated by a yellow arrow), must fall along a line in camera view 2. This line is the point’s epipolar line and can be used to identify corresponding points along two curves.
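The geometry can be sketched with a toy example in R. The rotation, translation and 3D point below are made up for illustration, and I use the calibrated-camera case, where the essential matrix E = [t]x R plays the role of the fundamental matrix: E %*% x1 gives the coefficients (a, b, c) of the epipolar line a*u + b*v + c = 0 in the second view.

```r
# Cross-product (skew-symmetric) matrix of a 3-vector
skew <- function(v) matrix(c(0, -v[3], v[2],
                             v[3], 0, -v[1],
                            -v[2], v[1], 0), 3, 3, byrow = TRUE)

theta <- 20 * pi / 180  # 20-degree convergence, as in the setup above
R2 <- matrix(c(cos(theta), 0, sin(theta),
               0, 1, 0,
              -sin(theta), 0, cos(theta)), 3, 3, byrow = TRUE)
t2 <- c(-0.5, 0, 0.1)   # second camera's offset (made up)
E  <- skew(t2) %*% R2   # essential matrix

X  <- c(0.2, 0.1, 1)    # a 3D point in camera 1 coordinates
x1 <- X / X[3]          # its homogeneous image point in view 1
Xc2 <- c(R2 %*% X + t2)
x2  <- Xc2 / Xc2[3]     # the same point seen in view 2

l2 <- E %*% x1          # epipolar line of x1 in view 2
sum(x2 * l2)            # ~ 0: x2 lies exactly on the epipolar line
```

Any 3D point projecting to x1 in the first view lands somewhere on the line l2 in the second view, which is what lets the curve intersection trick below work.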

But what if the epipolar line is parallel to the curve in the second view?
    
The epipolar line (yellow dashed line in right image) of the point in the first camera view (yellow point in left image) is parallel to the curve in the second view along a considerable portion of its extent. Absent other information, this makes it impossible to identify the corresponding point.
Without any additional information, this makes it impossible to find the corresponding curve point in the second view. At some point, StereoMorph might include the ability to compare features along the curve in the image itself to find the corresponding point. For now, it’s best to arrange the specimen or cameras so as to minimize the chance of large sections of the curve being parallel to an epipolar line. This can be done by keeping the following steps in mind:
  1. For the curve of interest, identify a 2D plane in which most of the curve lies. Of course the curve will have some three-dimensional aspect to it (or else you would not be using 3D reconstruction). But find a plane that encompasses most of the curve.
  2. Position the cameras so that a line extending out of each lens to the curve is orthogonal to this plane.
As an example, let’s say we’re photographing a curve that resembles a circle (shown in the diagram below).
If we position the cameras so that they are viewing the curve, shown as a circle here, from the side (top left), the curve will appear in the two camera images as a nearly straight line (top middle and right). There is no way we could adequately measure the true shape of the curve because we’re hardly seeing any of it in the images. Additionally, the curve points in the second view will be nearly parallel to the epipolar line of the points in the first view. In fact, these two properties are geometrically related such that if the camera views chosen do not adequately capture the major three-dimensional aspects of the curve, epipolar geometry will also be unable to accurately find the corresponding curve points.

In contrast, if the cameras are positioned so they are viewing the curve from the top (bottom left), the major aspects of the curve’s shape are represented in the images and the curve in the second image will seldom be parallel to the epipolar lines of points along the curve in the first image. As a consequence, StereoMorph currently works best for reconstructing curves that do not have an excessively 3D shape (i.e. 3D spirals) because no matter how the cameras are positioned, these shapes will always have large portions that cannot be adequately captured in both camera views.

Digitizing Photographs

This is step 8 of 10 in the tutorial Collecting 3D shape data using StereoMorph

The StereoMorph package provides a new, easy-to-use digitizing application for collecting landmarks and Bézier curves from photographs. Even if you don’t use StereoMorph for collecting 3D landmarks and curves, you might find the digitizing app useful for collecting 2D data from photographs. The app runs in a user's default web browser. Safari, Chrome and Opera all provide full compatibility with the app (Firefox does not allow some features and the app is not tested for Internet Explorer). Although the app runs in a browser, you do not have to be connected to the internet to use it. The app runs on a local server, with the R package shiny handling communication between the browser and the R console. In this step, I’ll provide an overview of how to get started using the app.

StereoMorph Digitizing Application.
1. Load the StereoMorph library into the current R session, if not already loaded and ensure that the StereoMorph Tutorial folder is your current working directory (see Getting Started).

> library(StereoMorph)

2. Call the function digitizeImage() with input parameters specifying the image(s) to be digitized, where the landmarks and curves should be saved and the names of the landmarks and curves to be collected. Using the files in the StereoMorph Tutorial folder, the function call would look like this:

> digitizeImage(
   image.file = 'Object Images',
   landmarks.file = 'Landmarks 2D',
   control.points.file = 'Control points 2D',
   curve.points.file = 'Curve points 2D',
   landmarks.ref = 'landmarks_ref.txt',
   curves.ref = 'curves_ref.txt')

The first argument, image.file, is the only required argument. It specifies the file path of the image or images to be digitized. Here it’s a folder that contains all the object images in the tutorial folder. You can also input a vector of file paths if you only want to digitize a particular set of images. If you input several photographs into digitizeImage() you can switch from one photo to the next within the app.

The argument landmarks.file is where the landmarks will be saved to (or loaded from if some landmarks have already been digitized). If you input a folder, the landmark files will be saved with the same names as the images, only with the extension “.txt”. If you want different names for the landmark files, you can input the file names as a vector.

There are two curve file inputs. The control.points.file specifies where to save the control points. These are the points (the red points in the image below) that you can drag around to define and adjust the curve shape. The curve.points.file specifies where to save the points that actually make up the curve. These are the several hundred points at single pixel spacing (the blue points in the image below) that describe the actual curve and can be used in subsequent analyses.


A 3-point Bézier curve with control points in red and curve points in blue.
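To see how a handful of control points generate a dense set of curve points, here is a minimal sketch using the standard quadratic (3-point) Bézier formula. This is not StereoMorph's internal code, and the control point coordinates are made up:

```r
# Quadratic Bezier: B(t) = (1-t)^2 P0 + 2 t (1-t) P1 + t^2 P2
p0 <- c(100, 400)   # start control point (also a landmark)
p1 <- c(300, 100)   # middle control point, dragged to adjust curve shape
p2 <- c(500, 450)   # end control point (also a landmark)

t <- seq(0, 1, length.out = 200)
curve_pts <- cbind((1 - t)^2 * p0[1] + 2 * t * (1 - t) * p1[1] + t^2 * p2[1],
                   (1 - t)^2 * p0[2] + 2 * t * (1 - t) * p1[2] + t^2 * p2[2])

head(curve_pts)   # densely spaced pixel coordinates along the curve
```

StereoMorph saves the curve points at single-pixel spacing rather than at a fixed number of t values, but the idea is the same: a few draggable control points define many curve points.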
The landmarks to be digitized are input via landmarks.ref. In the example above, we’ve input a file path (landmarks_ref.txt) with the landmarks listed in a single column, each separated by a new line.

     cranium_occipital
     basioccipital_proc_ant
     basipterygoid_proc_ant_L
     basipterygoid_proc_post_L
     ...

landmarks.ref can also be a vector of landmark names, rather than a file path.

The curves to be digitized are also input as a file path (curves_ref.txt). In curves_ref.txt, the curves are in the form of a three-column matrix. The StereoMorph Digitizing App assumes that all curve start and end points are also landmarks. So the curves to be digitized are defined using a matrix where the first column is the name of the curve, the second column is the start point/landmark and the last column is the end point/landmark.

tomium_L     upperbeak_tip         upperbeak_tomium_prox_L
tomium_R     upperbeak_tip         upperbeak_tomium_prox_R
orbit_L      preorbital_proc_L     postorbital_proc_L


For instance, in the above curve reference matrix the first curve is tomium_L, which starts at the landmark upperbeak_tip and ends at the landmark upperbeak_tomium_prox_L. curves.ref can also be a matrix of reference curves, rather than a path to the file containing the matrix.
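If you'd like to inspect the curve reference matrix in R yourself, it can be read like any whitespace-delimited table (assuming the Tutorial folder is your working directory):

```r
# Read the three-column curve reference matrix: name, start landmark, end landmark
curves_ref <- as.matrix(read.table('curves_ref.txt'))
curves_ref[1, ]   # e.g. "tomium_L", "upperbeak_tip", "upperbeak_tomium_prox_L"
```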

Once you call digitizeImage() the digitizing app will open in your default web browser, displaying the first image in the Object images folder.
StereoMorph Digitizing Application after loading first image.
The left two-thirds of the window is the image frame. This is where you can navigate around the image and add landmarks and curves using your mouse or trackpad. The right third is the control panel, with which you can view and save landmark/curve lists and navigate between different images.
StereoMorph Digitizing App control panel.
The ‘Info’ tab contains some basic information about the image, the ‘Landmarks’ tab contains the list of landmarks and the ‘Curves’ tab contains the list of curves.

3. You can navigate around the image in the image frame using the same basic mouse actions as in Google Maps. Position your cursor somewhere over the image and scroll just as you would scroll up and down a web page (using either the scroll wheel of a mouse or the scroll feature on a trackpad). This will zoom in and out of the image. To make navigation quicker, the zoom tracks the position of your cursor, zooming in and out of the region of the image under the cursor. For instance, if you wanted to zoom in to the bottom-left corner of the image, you would position your cursor in the bottom-left of the image and scroll in the appropriate direction. This will enlarge the image while simultaneously moving the bottom-left corner into the center of the image frame.

Zoom into the object for precise landmark positioning.
4. Click and drag somewhere on the image. This will cause the image to move with your cursor. Just be sure that your cursor is not over a selected landmark or control point - this will cause the marker to move rather than the image.

5. Click on the ‘Landmarks’ tab in the control panel. This lists all of the landmarks input via landmarks.ref and their corresponding pixel coordinates.

6. To add, move or delete a landmark, you must first select that landmark. Click on the corresponding row in the landmarks table or, if the marker is already digitized, double-click on the landmark itself with the mouse (pressing the letter ‘x’ while the cursor is over the landmark in the image frame is a keyboard shortcut).

Selecting a landmark with the cursor. Once selected, the corresponding row will become bold.
7. Once the landmark is selected, set its location by double-clicking anywhere over the image (or positioning the cursor and pressing ‘x’). You can then move the landmark by clicking-and-dragging it or using the arrows on the keyboard.

Move a landmark by clicking-and-dragging with the mouse cursor.
8. To delete a landmark, just type ‘d’ once the landmark is selected. Landmarks with ‘-’ values in the landmarks table will be ignored when saving.

9. Click on the ‘Curves’ tab in the control panel to access the digitized Bézier curves. This lists all of the curves input via curves.ref and their corresponding pixel coordinates, including the starting and ending control points (both are also treated as landmarks) and all the control points in between that define the Bézier curve. The app treats curves with more than three control points as Bézier splines (a string of Bézier curves, joined at the ends).

The Curves tab lists the start, end and middle control points for every Bézier curve.
The control points can be added and moved in the same way as the landmarks: by selecting the control point, double-clicking somewhere in the image frame and then clicking-and-dragging or using the arrow keys to reposition. Control points can also be deleted by pressing ‘d’ while the point is selected.

Bézier control points can be moved by clicking-and-dragging with the cursor or using the arrow keys.
10. Once you have collected landmarks or curves, press one of the three submit buttons in the control panel to save them. ‘Submit Landmarks’ only saves the landmarks and ‘Submit Curves’ only saves the curves (both control points and curve points). ‘Submit All’ saves both the landmarks and curves. This saves them to the location you specified in the input to digitizeImage().

Use the 'Submit' Buttons in the bottom of the control panel to save the landmarks and curves.
If a curve.points.file was included, R will also save a file with all of the curve points. They will look something like this,

pterygoid_crest_R0001 1140 1672
pterygoid_crest_R0002 1140 1671
pterygoid_crest_R0003 1140 1670
pterygoid_crest_R0004 1140 1669
pterygoid_crest_R0005 1140 1668
...

at single pixel spacing and with numbers added after the curve name to indicate the point order. These are the points we’ll use in the next step to reconstruct the curves into 3D.
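Because the saved curve point files are plain whitespace-delimited text, they can be read straight back into R for a quick check. A sketch (the file name here is a hypothetical placeholder for whatever names you set via curve.points.file):

```r
# Read curve points saved by the digitizing app: point name, then x and y pixel coordinates
# 'example_image.txt' is a placeholder file name, not one from the Tutorial folder
curve_pts <- as.matrix(read.table('Curve points 2D/example_image.txt', row.names = 1))
plot(curve_pts, asp = 1, cex = 0.1)   # quick look at the digitized curves
```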

11. Click ‘Next Image’. This will move to the next image if more than one image or a folder of images is specified in image.file.

Move to the previous or next image using the buttons at the bottom of the control panel.
Although this tutorial uses the full features of the app, you can also use the app to digitize only landmarks or only curves. To digitize only landmarks, call digitizeImage() with only a landmark file and landmark reference list.

> digitizeImage(
   image.file = 'Object Images',
   landmarks.file = 'Landmarks 2D',
   landmarks.ref = 'landmarks_ref.txt')

You can also call digitizeImage() with just image.file if you’d simply like to check the pixel coordinates of a few features in a photograph.

> digitizeImage(image.file = 'Object Images')

For more detailed instructions on how to use the StereoMorph Digitizing App, check out the digitizeImage() function in the StereoMorph package manual or click on the ‘Help’ link at the top right corner of the application window. The help file will open in a new browser window and has information on all the features available in the digitizing app, including keyboard shortcuts.

Opening the digitizing app help file.
The next step will cover how to reconstruct the digitized landmarks and curve points in 3D.

Go back to the previous step: Photographing an object

Sunday, April 13, 2014

Unifying, reflecting and aligning

This is step 10 of 10 in the tutorial Collecting 3D shape data using StereoMorph

This section will demonstrate how to unify 3D landmarks and curve points from several aspects into a single point set, reflect landmarks missing on one side and align the whole set to the midline plane.

Unifying landmarks


If the same object has been photographed in more than one position (aspects), the landmarks and curve points collected from each aspect will be in different coordinate systems.

Aspect 1
Aspect 2
Aspect 3
In order to combine these three aspects, they must be unified based on shared points. This can be done in StereoMorph using the unifyLandmarks() function.

1. Load the StereoMorph package, if not already loaded, and the ‘rgl’ R package for viewing 3D points. Also ensure that the StereoMorph Tutorial folder is your current working directory.

> library(rgl)
> library(StereoMorph)

2. Specify the file paths of each aspect of 3D landmarks and curves. For demonstration, the landmarks and curve points reconstructed in the previous section will be used.

> landmarks_3d <- paste0("Landmarks and curves 3D/obj_a", 1:3, ".txt")

3. Read each of these into an array using the readLandmarksToArray() function.

> lm.array <- readLandmarksToArray(landmarks_3d, row.names=1)

4. Call unifyLandmarks() to unify all the aspects into a single point set.

> unify_lm <- unifyLandmarks(lm.array, min.common=5)

The unifyLandmarks() function begins by choosing two point sets and aligning them with each other based on three or more shared points. Then, any additional point sets are unified with this combined point set, one-by-one, saving each unified point set at each step as the new combined point set.

unifyLandmarks() finds an ideal sequence based on the error of unification (how well the common points line up with each other). By setting min.common to 5, unifyLandmarks() will only align two aspects if they share at least five points. This does not mean that all point sets have to share the same five points; once two point sets are unified, any points shared between those two point sets and the next point set will be used in the alignment. Thus, the number of shared points used for each unification will usually depend on the order in which the points are unified. While it’s possible to unify points based on only three points this is likely to cause poor alignments, especially if the shared points happen to be nearly collinear.
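The core of each unification step is a standard rigid alignment of the shared points (often called the Kabsch or Procrustes solution). The sketch below is not StereoMorph's internal code, just an illustration of the idea with made-up coordinates:

```r
# Rigidly align point set B onto point set A using their shared points (rows correspond)
align_rigid <- function(A, B) {
  cA <- colMeans(A); cB <- colMeans(B)
  Ac <- sweep(A, 2, cA); Bc <- sweep(B, 2, cB)
  s <- svd(t(Bc) %*% Ac)           # cross-covariance of the centered sets
  R <- s$v %*% t(s$u)
  if (det(R) < 0) {                # avoid a reflection
    s$v[, 3] <- -s$v[, 3]
    R <- s$v %*% t(s$u)
  }
  sweep(Bc %*% t(R), 2, cA, '+')   # rotated B, translated onto A
}

# Made-up shared points: B is A rotated 30 degrees about z and shifted
A <- matrix(c(0,0,0, 1,0,0, 0,1,0, 1,1,1, 2,0,1), ncol = 3, byrow = TRUE)
a <- 30 * pi / 180
Rz <- matrix(c(cos(a), -sin(a), 0, sin(a), cos(a), 0, 0, 0, 1), 3, 3, byrow = TRUE)
B <- A %*% t(Rz) + matrix(c(5, -2, 3), nrow(A), 3, byrow = TRUE)

max(abs(align_rigid(A, B) - A))   # ~ 0: B is recovered exactly onto A
```

With real digitized points the shared landmarks never line up exactly, and the leftover distances after this alignment are what unifyLandmarks() reports as the unification error.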

5. Use the summary() function to see the unification sequence and errors.

The first part shows the sequence in which the point sets were unified followed by the root-mean-square error (here, in millimeters) for each unification step.

> summary(unify_lm)

unifyLandmarks Summary
Unification sequence: 1, 2, 3
Unification RMSE:
   0.7412698
   0.8801677

The second part shows the unification errors by landmark for each unification step (indicated by the number in double brackets). Both indicate that the unification errors are fairly low. The major source of error here is the difficulty of identifying exactly the same points from different views of an object.

Unification landmark errors by sequence:
    [[1]]
        basipterygoid_proc_post_R 0.160657
        jugal_upperbeak_R 1.046571
        opisthotic_process_R 0.8580522
        ...

    [[2]]
        foramen_magnum_sup 0.7183007
        mand_condyle_quadrate_lat_L 0.6221617
        mand_condyle_quadrate_lat_R 0.2217136
        ...

6. Plot the points using plot3d() from the ‘rgl’ package.

> pts <- na.omit(unify_lm$lm.matrix)
> r <- apply(pts, 2, 'max') - apply(pts, 2, 'min')
> plot3d(pts, aspect=c(r/r[3]), size=3)

All landmarks and curve points unified into a single point set.
7. Save the unified landmarks to a text file.

> write.table(unify_lm$lm.matrix, file="Landmarks 3D unified/obj.txt", quote=F, sep="\t", col.names=F, row.names=T)

Reflecting missing landmarks


When collecting landmarks from objects with left/right (bilateral) symmetry, it’s often not possible to collect every landmark on both sides of the object. In this case, the plane of bilateral symmetry can be used to reflect landmarks only present on one side across the midline, creating a set of landmarks complete on both sides (Klingenberg, Barluenga & Meyer 2002). This can be done using the reflectMissingLandmarks() function in StereoMorph.

1. Load the StereoMorph package, if not already loaded, and the ‘rgl’ R package for viewing 3D points. Also ensure that the StereoMorph Tutorial folder is your current working directory.

> library(StereoMorph)
> library(rgl)


2. Specify the file paths of a 3D point set of landmarks and/or curve points.

> lm_unified <- "Landmarks 3D unified/obj.txt"

3. Import the landmarks as a matrix.

> lm.matrix <- readLandmarksToMatrix(lm_unified, row.names=1)

4. Call reflectMissingLandmarks().

> reflect <- reflectMissingLandmarks(lm.matrix, average=TRUE)

Or, if you have just completed the unification step, you can use unify_lm$lm.matrix.

> reflect <- reflectMissingLandmarks(unify_lm$lm.matrix, average=TRUE)

Setting average to TRUE will reflect the missing landmarks across the midline and then average the left and right sides (so that they are mirror images).

5. Print a summary of the bilateral errors.

> summary(reflect)

The bilateral errors are calculated by measuring the distance between a point present on both sides already and its contralateral point after reflection (and before averaging).
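As a sketch of what this error means (with made-up coordinates, a simplified midline and not StereoMorph's internal code): for a landmark digitized on both sides, compare the right-side point with the reflection of its left-side pair across the midline plane.

```r
# Simplifying assumption for illustration: the midline plane is z = 0,
# so reflection across it just negates the z coordinate
lm_L <- c(-45.43, -6.01, 16.37)    # left-side landmark (made-up values)
lm_R <- c(-45.10, -5.95, -16.20)   # independently digitized right-side landmark

reflected_L <- lm_L * c(1, 1, -1)   # left landmark reflected across the midline
sqrt(sum((reflected_L - lm_R)^2))   # bilateral error: distance between the two estimates
```

A small distance here means the object is close to bilaterally symmetric and the digitizing was consistent on both sides.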

6. Plot the points using plot3d() from the ‘rgl’ package.

> pts <- na.omit(reflect$lm.matrix)
> r <- apply(pts, 2, 'max') - apply(pts, 2, 'min')
> plot3d(pts, aspect=c(r/r[3]), size=3)


Unified landmarks after reflecting landmarks missing on one side across the midline.
7. Save the reflected landmarks to a text file.

> write.table(reflect$lm.matrix, file="Landmarks 3D reflected/obj.txt", quote=F, sep="\t", col.names=F, row.names=T)

Aligning landmarks to the midline


The 3D landmark and curve points resulting from the previous steps of reconstruction and unification will be arbitrarily positioned and oriented in 3D space. For visualization purposes it’s often desirable to align a set of landmarks to the midline plane so that multiple objects can be viewed in a consistent orientation.

This can be done with the StereoMorph function alignLandmarksToMidline(). This function translates and rotates the points so that the midline points are aligned with the xy-plane. If bilateral landmarks were averaged in the reflection step, the midline points will lie exactly in the midline plane (i.e. they will have z-values of zero). Aligning landmarks to the midline plane does not change the scaling of the landmarks or the position of any landmarks relative to one another.

1. Load the StereoMorph package, if not already loaded, and the ‘rgl’ R package for viewing 3D points. Also ensure that the StereoMorph Tutorial folder is your current working directory.

> library(StereoMorph)
> library(rgl)


2. Specify the file paths of a 3D point set of landmarks and/or curve points.

> lm_reflected <- "Landmarks 3D reflected/obj.txt"

3. Import the landmarks as a matrix.

> lm.matrix <- readLandmarksToMatrix(lm_reflected, row.names=1)

4. Call alignLandmarksToMidline().

> align <- alignLandmarksToMidline(lm.matrix)

Or, if you have just completed the reflection step, you can use reflect$lm.matrix.

> align <- alignLandmarksToMidline(reflect$lm.matrix)

Calling summary() will print the midline alignment errors (the distances between the midline points and the estimated midline plane). However, since the bilateral landmarks were averaged in the reflection step, all of the midline points were already aligned to the midline plane. Thus, all of the midline alignment errors are zero.

5. Plot the points using plot3d() from the ‘rgl’ package.

> pts <- na.omit(align$lm.matrix)
> r <- apply(pts, 2, 'max') - apply(pts, 2, 'min')
> plot3d(pts, aspect=c(r/r[3]), size=3)


The landmarks are now aligned along the midline, and landmarks on opposite sides have the same coordinates except for equal and opposite values along the z-axis.

> align$lm.matrix['quadrate_jugal_L', ]
[1] -45.431501 -6.010435 16.372618
> align$lm.matrix['quadrate_jugal_R', ]
[1] -45.431501 -6.010435 -16.372618


6. Save the aligned landmarks to a text file.

> write.table(align$lm.matrix, file="Landmarks 3D aligned/obj.txt", quote=F, sep="\t", col.names=F, row.names=T)

In ten steps, we've constructed a stereo camera setup, photographed and digitized landmarks and curves on the specimen and created a 3D reconstruction of these landmarks and curves. I hope that you've found this tutorial helpful and easy-to-follow. Please feel free to leave questions, comments and suggestions.


Unified landmarks after aligning the landmarks to the midline.
Return to the main tutorial page: Collecting 3D shape data using StereoMorph
Go back to the previous step: Reconstructing 2D points and curves into 3D

Saturday, April 12, 2014

Unifying, Reflecting and Aligning

This is step 10 of 10 in the tutorial Collecting 3D shape data using StereoMorph

This section will demonstrate how to unify 3D landmarks and curve points from several aspects into a single point set, reflect landmarks missing on one side and align the whole set to the midline plane.

Unifying landmarks


If the same object has been photographed in more than one position (aspects), the landmarks and curve points collected from each aspect will be in different coordinate systems.

Aspect 1
Aspect 2
Aspect 3
In order to combine these three aspects, they must be unified based on shared points. This can be done in StereoMorph using the unifyLandmarks() function.

1. Load the StereoMorph package, if not already loaded, and the ‘rgl’ R package for viewing 3D points. Also ensure that the StereoMorph Tutorial folder is your current working directory.

> library(rgl)
> library(StereoMorph)

2. Specify the file paths of each aspect of 3D landmarks and curves. For demonstration, the landmarks and curve points reconstructed in the previous section will be used.

> landmarks_3d <- paste0("Landmarks and curves 3D/obj_a", 1:3, ".txt")

3. Read each of these into an array using the readLandmarksToArray() function.

> lm.array <- readLandmarksToArray(landmarks_3d, row.names=1)

4. Call unifyLandmarks() to unify all the aspects into a single point set.

> unify_lm <- unifyLandmarks(lm.array, min.common=5)

The unifyLandmarks() function begins by choosing two point sets and aligning them with each other based on three or more shared points. Then, any additional point sets are unified with this combined point set, one-by-one, saving each unified point set at each step as the new combined point set.

unifyLandmarks() finds an ideal sequence based on the error of unification (how well the common points line up with each other). By setting min.common to 5, unifyLandmarks() will only align two aspects if they share at least five points. This does not mean that all point sets have to share the same five points; once two point sets are unified, any points shared between those two point sets and the next point set will be used in the alignment. Thus, the number of shared points used for each unification will usually depend on the order in which the points are unified. While it’s possible to unify point sets based on only three shared points, this is likely to produce poor alignments, especially if the shared points happen to be nearly collinear.
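To build intuition for the RMSE values that unifyLandmarks() reports, here is a minimal sketch (not the package internals) of how a unification error can be computed: the root-mean-square distance between shared points after alignment. The matrices A and B are hypothetical already-aligned point sets.

```r
# Illustration only: RMSE between two hypothetical aligned point sets,
# one row per shared landmark, columns x, y, z.
A <- rbind(c(0, 0, 0), c(1, 0, 0), c(0, 1, 0))
B <- rbind(c(0, 0, 0.3), c(1, 0, 0), c(0, 1, -0.3))

# Squared distance per landmark, averaged, then square-rooted
rmse <- sqrt(mean(rowSums((A - B)^2)))
round(rmse, 3)  # 0.245
```

With real data the residual distances come from digitizing error, so values well under a millimeter (as in the summary below) indicate a good unification.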

5. Use the summary() function to see the unification sequence and errors.

The first part shows the sequence in which the point sets were unified followed by the root-mean-square error (here, in millimeters) for each unification step.

> summary(unify_lm)

unifyLandmarks Summary
Unification sequence: 1, 2, 3
Unification RMSE:
   0.7412698
   0.8801677

The second part shows the unification errors by landmark for each sequence (indicated by the number in double brackets). Both indicate that the unification errors are fairly low. The major source of error here is the difficulty of finding the exact same points from different views of an object.

Unification landmark errors by sequence:
    [[1]]
        basipterygoid_proc_post_R 0.160657
        jugal_upperbeak_R 1.046571
        opisthotic_process_R 0.8580522
        ...

    [[2]]
        foramen_magnum_sup 0.7183007
        mand_condyle_quadrate_lat_L 0.6221617
        mand_condyle_quadrate_lat_R 0.2217136
        ...

6. Plot the points using plot3d() from the ‘rgl’ package.

> pts <- na.omit(unify_lm$lm.matrix)
> r <- apply(pts, 2, 'max') - apply(pts, 2, 'min')
> plot3d(pts, aspect=c(r/r[3]), size=3)

All landmarks and curve points unified into a single point set.
7. Save the unified landmarks to a text file.

> write.table(unify_lm$lm.matrix, file="Landmarks 3D unified/obj.txt", quote=F, sep="\t", col.names=F, row.names=T)

Reflecting missing landmarks


When collecting landmarks from objects with left/right (bilateral) symmetry, it’s often not possible to collect every landmark on both sides of the object. In this case, the plane of bilateral symmetry can be used to reflect landmarks only present on one side across the midline, creating a set of landmarks complete on both sides (Klingenberg, Barluenga & Meyer 2002). This can be done using the reflectMissingLandmarks() function in StereoMorph.
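The geometric operation involved is easy to sketch in a few lines of base R (this is an illustration of the idea, not the package internals): a point p is reflected across a plane through the origin with unit normal n as p − 2(p·n)n.

```r
# Reflect a 3D point across a plane through the origin with normal 'n'
# (illustration only; reflectMissingLandmarks() also estimates the
# midline plane itself from the bilateral landmarks).
reflect_across_plane <- function(p, n) {
  n <- n / sqrt(sum(n^2))    # normalize, in case n is not unit length
  p - 2 * sum(p * n) * n
}

reflect_across_plane(c(2, 3, 5), c(0, 0, 1))  # 2 3 -5
```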

1. Load the StereoMorph package, if not already loaded, and the ‘rgl’ R package for viewing 3D points. Also ensure that the StereoMorph Tutorial folder is your current working directory.

> library(StereoMorph)
> library(rgl)


2. Specify the file paths of a 3D point set of landmarks and/or curve points.

> lm_unified <- paste0("Landmarks 3D unified/obj.txt")

3. Import the landmarks as a matrix.

> lm.matrix <- readLandmarksToMatrix(lm_unified, row.names=1)

4. Call reflectMissingLandmarks().

> reflect <- reflectMissingLandmarks(lm.matrix, average=TRUE)

Or, if you have just completed the unification step, you can use unify_lm$lm.matrix.

> reflect <- reflectMissingLandmarks(unify_lm$lm.matrix, average=TRUE)

Setting average to TRUE will reflect the missing landmarks across the midline and then average the left and right sides (so that they are mirror images).
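Here is the averaging step sketched by hand (a conceptual illustration, not the package internals; lm_L and lm_R are hypothetical digitized coordinates): mirror the right-side point across the midline, here taken as the xy-plane, and average it with its left-side partner, so the final pair are exact mirror images.

```r
# Hypothetical left and right digitized coordinates of one bilateral landmark
lm_L <- c(-45.0, -6.0, 16.5)
lm_R <- c(-45.8, -6.1, -16.2)

# Average the left point with the mirrored right point
lm_L_avg <- (lm_L + lm_R * c(1, 1, -1)) / 2   # -45.40 -6.05 16.35
lm_R_avg <- lm_L_avg * c(1, 1, -1)            # exact mirror of the averaged left
```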

5. Print a summary of the bilateral errors.

> summary(reflect)

The bilateral errors are calculated by measuring the distance between a point already present on both sides and its contralateral point after reflection (and before averaging).

6. Plot the points using plot3d() from the ‘rgl’ package.

> pts <- na.omit(reflect$lm.matrix)
> r <- apply(pts, 2, 'max') - apply(pts, 2, 'min')
> plot3d(pts, aspect=c(r/r[3]), size=3)


Unified landmarks after reflecting landmarks missing on one side across the midline.
7. Save the reflected landmarks to a text file.

> write.table(reflect$lm.matrix, file="Landmarks 3D reflected/obj.txt", quote=F, sep="\t", col.names=F, row.names=T)

Aligning landmarks to the midline


The 3D landmark and curve points resulting from the previous steps of reconstruction and unification will be arbitrarily positioned and oriented in 3D space. For visualization purposes it’s often desirable to align a set of landmarks to the midline plane so that multiple objects can be viewed in a consistent orientation.

This can be done with the StereoMorph function alignLandmarksToMidline(). This function translates and rotates the points so that the midline points are aligned with the xy-plane. If bilateral landmarks were averaged in the reflection step, the midline points will lie exactly in the midline plane (i.e. they will have z-values of zero). Aligning landmarks to the midline plane does not change the scaling of the landmarks or the position of any landmarks relative to one another.
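Because only translation and rotation are applied, every inter-landmark distance is preserved. Here is a quick way to check that property, sketched on made-up points (the 'before' matrix is hypothetical):

```r
# Three hypothetical 3D landmarks, one per row
before <- rbind(c(0, 0, 0), c(3, 0, 0), c(0, 4, 0))

# An arbitrary rigid transformation: rotate about the z-axis, then translate
theta <- pi / 6
R <- rbind(c(cos(theta), -sin(theta), 0),
           c(sin(theta),  cos(theta), 0),
           c(0,           0,          1))
after <- before %*% t(R) + 10   # translation by (10, 10, 10)

# All pairwise distances are unchanged (up to numerical precision)
max(abs(dist(before) - dist(after)))
```

Running the same comparison on your landmark matrix before and after alignLandmarksToMidline() should likewise return a value that is effectively zero.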

1. Load the StereoMorph package, if not already loaded, and the ‘rgl’ R package for viewing 3D points. Also ensure that the StereoMorph Tutorial folder is your current working directory.

> library(StereoMorph)
> library(rgl)


2. Specify the file paths of a 3D point set of landmarks and/or curve points.

> lm_reflected <- paste0("Landmarks 3D reflected/obj.txt")

3. Import the landmarks as a matrix.

> lm.matrix <- readLandmarksToMatrix(lm_reflected, row.names=1)

4. Call alignLandmarksToMidline().

> align <- alignLandmarksToMidline(lm.matrix)

Or, if you have just completed the reflection step, you can use reflect$lm.matrix.

> align <- alignLandmarksToMidline(reflect$lm.matrix)

Calling summary() will print the midline alignment errors (the distances between the midline points and the estimated midline plane). However, since the bilateral landmarks were averaged in the reflection step, all of the midline points were already aligned to the midline plane. Thus, all of the midline alignment errors are zero.

5. Plot the points using plot3d() from the ‘rgl’ package.

> pts <- na.omit(align$lm.matrix)
> r <- apply(pts, 2, 'max') - apply(pts, 2, 'min')
> plot3d(pts, aspect=c(r/r[3]), size=3)


The landmarks are now aligned to the midline plane, and landmarks on opposite sides have the same x- and y-coordinates with equal and opposite z-values.

> align$lm.matrix['quadrate_jugal_L', ]
[1] -45.431501 -6.010435 16.372618
> align$lm.matrix['quadrate_jugal_R', ]
[1] -45.431501 -6.010435 -16.372618
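You can confirm the mirror relationship directly from the values printed above: x and y agree and z differs only in sign.

```r
# The two rows printed above for the left and right quadrate_jugal landmarks
lm_L <- c(-45.431501, -6.010435, 16.372618)
lm_R <- c(-45.431501, -6.010435, -16.372618)

# Flipping the sign of z on the left point recovers the right point
isTRUE(all.equal(lm_L * c(1, 1, -1), lm_R))  # TRUE
```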


6. Save the aligned landmarks to a text file.

> write.table(align$lm.matrix, file="Landmarks 3D aligned/obj.txt", quote=F, sep="\t", col.names=F, row.names=T)

Unified landmarks after aligning the landmarks to the midline.
In ten steps, we've constructed a stereo camera setup, photographed and digitized landmarks and curves on the specimen and created a 3D reconstruction of these landmarks and curves. I hope that you've found this tutorial helpful and easy-to-follow. Please feel free to leave questions, comments and suggestions.


Return to the main tutorial page: Collecting 3D shape data using StereoMorph
Go back to the previous step: Reconstructing 2D points and curves into 3D

Friday, April 11, 2014

Photographing an object

This is step 7 of 10 in the tutorial Collecting 3D shape data using StereoMorph

This section provides tips on how to photograph an object for shape data collection.

Recommended materials for this section:
  • A yard or less of black velvet

This is the step where the DLT method has a major advantage over other 3D morphometric methods. Once the cameras are calibrated, the number of objects that can be photographed is only limited by the time it takes to position and photograph each object.

It is best to have a uniform background that provides good contrast to your specimen. First, this can decrease the photo file size by as much as half (encoding a large black space takes less space than a multi-colored, noisy background). If you’re taking several hundred photographs, this is advantageous for data storage. Second, it’s easier to discern points on the edge of the specimen when the edge is clearly distinguishable from the background. For light-colored specimens, black velvet works well. The cheapest stuff available at fabric stores works great and only costs about $10 a yard.

A shell on black velvet. Black velvet works great as a solid, black background.
If you need to collect landmarks from several different places on an object that are not visible in a single camera view, reposition the object a few times, taking a photograph from both camera views each time. For the StereoMorph functions, these different orientations of the object are referred to as “aspects”.

The tutorial data set contains landmarks and curves from these three different aspects of a Canada Goose skull. The first aspect provides views of the ventral aspect (underside) of the skull.

First aspect for digitizing landmarks on the ventral side of the skull. The left and right images are the first and second camera views. The specimen is in the same position in both images, just viewed from different perspectives.
From the second aspect, landmarks on the lateral or side aspect of the skull can be digitized.

Second aspect for digitizing landmarks on the lateral side of the skull.
And the third aspect offers views of the back of the skull.

Third aspect for digitizing landmarks on the back of the skull.
Depending on the data you want to collect you might be able to get away with a single image of each specimen, in which case landmarks and curves only have to be digitized in two images. If you have multiple aspects, you will need some overlap in landmarks among the images (at least three, preferably five to six) in order to combine all of the points into a single 3D point set (detailed in the section “Unifying, Reflecting and Aligning”). However, you don't have to digitize the same landmarks in every aspect.
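Before digitizing, it can help to plan which landmark names will overlap between aspects. A sketch with hypothetical landmark names (with real data you would use the rownames of each imported landmark matrix):

```r
# Hypothetical landmark names digitized in two different aspects
a1_names <- c("foramen_magnum_sup", "quadrate_jugal_L", "quadrate_jugal_R",
              "jugal_upperbeak_R", "opisthotic_process_R")
a2_names <- c("foramen_magnum_sup", "quadrate_jugal_L", "quadrate_jugal_R",
              "mand_condyle_quadrate_lat_L", "mand_condyle_quadrate_lat_R")

# How many landmarks the two aspects share
length(intersect(a1_names, a2_names))  # 3 shared landmarks
```

Three shared landmarks is the bare minimum for unification; aim for five or six as recommended above.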

Lastly, sometimes it's necessary to use a reference marker if it's difficult to find the exact same point between two views. I do this when digitizing beaks with a broad, flat tip. If using museum specimens, one should of course use tape that does not leave a residue.

No-residue tape can be used for points that are difficult to identify exactly in two different views.
Go to the next step: Digitizing photographs
Go back to the previous step: Testing the calibration accuracy

Testing the calibration accuracy

This is step 6 of 10 in the tutorial Collecting 3D shape data using StereoMorph

The previous section demonstrated how to use a checkerboard pattern of known square size and the dltCalibrateCameras() function to calibrate two cameras in stereo. While dltCalibrateCameras() returns calibration and reconstruction RMS errors, these are measures of how well the DLT camera model fits the calibration points and not the reconstruction accuracy per se. Moreover, the RMS errors are scale-independent. If we have not measured the calibration checkerboard square size correctly, the RMS errors will be unaffected but our reconstructions will be improperly scaled. This section will demonstrate how to use a checkerboard (ideally having a square size different from that used in the calibration step) to test the accuracy of a calibrated stereo camera setup.

1. Repeat the steps in “Creating a Checkerboard Pattern” to create another checkerboard of a different square size than that used in the calibration. This will allow you to test whether you have the proper scaling for the calibration checkerboard (if they had the same square size, we would not be able to test this). For this tutorial, I used a checkerboard printed at 9% scaling for the calibration and a checkerboard printed at 6% scaling to test the calibration accuracy.

Two checkerboards, printed at 9% and 6% scaling. In this tutorial, the cameras are calibrated with the left pattern and the calibration is tested with the right.
2. Repeat the steps in Measuring Checkerboard Square Size to measure the square size of the test checkerboard pattern. The 6% scaled test checkerboard in this tutorial has a square size of 4.233 mm when measured using a precision rule.

Measuring the square size of the test checkerboard pattern (21 x 14, printed at 6% scaling).
3. Take photographs of the new checkerboard pattern as in Calibrating Stereo Cameras, ensuring a sampling of points throughout the calibrated volume (anywhere the object you're digitizing could conceivably be).

Eight photos of the test checkerboard (a different size than the calibration checkerboard) per camera view used in the test calibration step. The tutorial includes a test checkerboard in a total of 11 different positions.
4. Import the photographs from different views into two different folders. For demonstration, we’ll use the test calibration photographs in the folder ‘Test images’ (in the StereoMorph Tutorial folder). Photographs from camera one are in the folder ‘v1’ and photographs from camera two are in the folder ‘v2’.

5. Load the StereoMorph library into the current R session and ensure that the StereoMorph Tutorial folder is your current working directory.

> library(StereoMorph)

Just as in the calibration step, we’ll use the findCheckerboardCorners() function to find the internal corners in each calibration image.

6. Specify the number of internal corners in the checkerboard.

> nx <- 21
> ny <- 14


7. Specify the file locations of the test calibration images, where to save the checkerboard corners and, if desired, where to save the verification images.

> test_image_file <- paste0('Test images/v', c(1, 2))
> test_corner_file <- paste0('Test corners/v', c(1, 2))
> test_verify_file <- paste0('Test images verify/v', c(1, 2))


8. Call findCheckerboardCorners().

> corners <- findCheckerboardCorners(image.file=test_image_file, nx=nx, ny=ny, corner.file=test_corner_file, verify.file=test_verify_file)

The function will find the corners successfully for all but three of the images in the tutorial set. Again, don’t worry if the function fails for a few images; these will be ignored in subsequent steps.

9. Once findCheckerboardCorners() has finished running, check all of the verification images to be sure that the order is consistent within each camera view and between the two views. For the tutorial images the corners in the first view are in the reverse order relative to the order in the second view (as was also seen with the calibration images). This will be corrected when the corners are read in from the corner files.

10. Import the corners just saved to individual files by constructing a two-column matrix of file paths, where the columns correspond to each view.

> corners_by_file <- cbind(
paste0(test_corner_file[1], '/', paste0('DSC_00', 11:20, '.txt')),
paste0(test_corner_file[2], '/', paste0('DSC_00', 11:20, '.txt')))


11. Call readCheckerboardsToArray() to read all of the corner matrices into an array; include all files – empty files will read in as NAs.

> test_corners <- readCheckerboardsToArray(file=corners_by_file, nx=nx, ny=ny, col.reverse=c(F, T), row.reverse=c(F, T))

As in the calibration step, the vector c(F, T) is passed to col.reverse and row.reverse to reverse the order of the corners from the second view relative to the first.
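What row.reverse and col.reverse accomplish can be sketched on a small grid of corner indices (illustration only; readCheckerboardsToArray() handles this for the actual corner files):

```r
# A 2 x 3 grid of corner indices
m <- matrix(1:6, nrow=2, byrow=TRUE)

m[nrow(m):1, ]   # row order reversed (what row.reverse=TRUE does)
m[, ncol(m):1]   # column order reversed (what col.reverse=TRUE does)
```

Applying both reversals to the second view makes its corner ordering match the first view's, so that corresponding corners line up across the two cameras.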

Since we are importing corners from 10 images, the third dimension of the test_corners array is 10.

> dim(test_corners)
[1] 294 2 10 2


12. Set grid_size_test to the square size of the test calibration checkerboard. The tutorial test calibration checkerboard, measured using a precision ruler (see Measuring Checkerboard Square Size), is approximately 4.233 mm.

> grid_size_test <- 4.2327

13. Load in the calibration coefficients.

> cal.coeff <- as.matrix(read.table(file="cal_coeffs.txt"))

14. Call dltTestCalibration(). Image pairs for which corners were found in neither or only one image will be ignored.

> dlt_test <- dltTestCalibration(cal.coeff=cal.coeff, coor.2d=test_corners, nx=nx, grid.size=grid_size_test)

15. Use the summary() function to print a summary of the accuracy test.

> summary(dlt_test)

dltTestCalibration Summary
    Number of grids: 7
    Number of points: 1029
    Aligned ideal to reconstructed (AITR) point position errors:
        AITR RMS Errors (X, Y, Z): 0.02343, 0.01799, 0.02063
        Mean AITR Distance Error: 0.03297
        AITR Distance RMS Error: 0.03605
    Inter-point distance (IPD) errors:
        IPD RMS Error: 0.03090397
        IPD Mean Absolute Error: 0.02392322
        Mean IPD error: -0.002694208
    Adjacent-pair distance errors:
        Mean adjacent-pair distance error: 0.0004335479
        Mean adjacent-pair absolute distance error: 0.01592422
        SD of adjacent-pair distance error: 0.01840659
    Epipolar errors:
        Epipolar RMS Error: 1.594549 px
        Epipolar Mean Error: 1.555131 px
        SD of Epipolar Error: 0.352441 px


Now to unpack this summary. One of the challenges to assessing the accuracy of a DLT calibration is that any reconstructed points will be in the coordinate system of the points used to calibrate the cameras. This means that even if we had an object with points of known 3D position, reconstructing the object using the DLT coefficients would yield 3D points arbitrarily translated and rotated to somewhere in 3D space. We'd have to perform some alignment step in order to compare the reference 3D points to the reconstructed points. The alignment will cause underestimation of larger errors and overestimation of smaller errors. The best solution is to use a variety of different accuracy metrics that, when taken together, provide a complete assessment of accuracy.

dltTestCalibration() provides four assessments of calibration accuracy:
  1. Aligned ideal to reconstructed (AITR) error. For every checkerboard that is reconstructed, the function takes an ideal checkerboard of the same dimensions (uniform square sizes and planar) and aligns the ideal corner points to the reconstructed corner points using least squares alignment. Then, the distance is measured between each ideal point and its corresponding reconstructed points. If the points were perfectly reconstructed, the ideal and reconstructed points would overlap perfectly. The “AITR RMS (root mean square) errors” are first measured along each axis (x, y and z in the coordinate system of the calibration points). This is one way to quantify how accuracy differs along different dimensions. The “mean AITR distance error” is the mean 3D distance between ideal and reconstructed points. This will usually be larger than any of the single axis errors since it incorporates error along all axes. This is also returned as RMS error. One disadvantage of this measure is that the ideal grid will be pulled toward areas of high error to minimize the total alignment error. This can cause underestimation of larger errors and overestimation of smaller errors.
  2. Inter-point distance (IPD) error. This summarizes distance rather than positional errors. For every reconstructed checkerboard random pairs of points (without re-sampling) are chosen and the distance between them is compared to the actual distance in an ideal grid (again, uniform square sizes and planar). This measure avoids the problems with the alignment step in AITR error but doesn't readily provide any information of error along a particular dimension (although this could perhaps be assessed by taking into account the positions of the reconstructed points). The distance errors are returned as “IPD RMS Error” and “IPD Mean Absolute Error”. The reconstructed distances can be either shorter or longer than the actual distance. The "Mean IPD error" takes the mean of these errors. If there is no bias toward over- or underestimation of distance this should be nearly zero. The results will differ slightly at each run because the point pairs are chosen randomly.
  3. Adjacent-pair distance errors. This is identical to IPD error except that randomly chosen points are adjacent on the grid. This means the ideal distances are uniform and the minimum possible distance for IPD error assessment. This is a common stereo camera error assessment used in the literature (e.g. Tashman & Anderst 2003; Brainerd et al. 2010). Since the points in each pair are uniformly close together, their mean position (the mid-point) can be used to look at how IPD error varies as a function of position in the calibration volume. These errors will usually be slightly less than the general IPD errors since error is likely to be greater for points at a greater distance from one another.
  4. Epipolar errors. In a stereo camera setup, a point in one camera view must fall along a line in a second camera view. This line is that point's epipolar line. The distance between a point's epipolar line and its corresponding point in that second camera view is the epipolar error. Since the input to dltTestCalibration() includes the same point in two or more camera views, we can use epipolar error to assess calibration accuracy. Epipolar error must be low to identify corresponding points along curves in different views. The mean and standard deviation of epipolar error are returned in pixels and should be less than a couple of pixels.
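The epipolar error itself is just a point-to-line distance in the image plane, which can be sketched in a couple of lines (the point and line coefficients below are hypothetical):

```r
# Distance (in pixels) from a point (x0, y0) to a line a*x + b*y + c = 0,
# which is how the distance to an epipolar line is measured
point_line_dist <- function(p, abc) {
  abs(abc[1] * p[1] + abc[2] * p[2] + abc[3]) / sqrt(abc[1]^2 + abc[2]^2)
}

point_line_dist(c(3, 4), c(1, 0, -1))  # the line x = 1; distance is 2
```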
So how accurate is the calibration? Millimeters were used as units in measuring the checkerboard square size so all values not in pixels are in millimeters. The positional errors along all three dimensions are around 20 microns on average and the different distance errors range from 16-36 microns. The "mean IPD error" shows a slight bias towards underestimating inter-point distances but only by 3 microns on average. And for points closer together, this bias is less than a micron (0.0004 mm). Lastly, the epipolar error is less than two pixels on average, with a low standard deviation.

It's important to compare these errors to the total calibrated volume. Since the calibration checkerboard is 21 x 14 and the square size is approximately 6.36 mm, each dimension of the calibrated volume is at least 89 mm (14*6.36). 20 microns represents 0.02% positional error (0.020 mm/89 mm), an extremely low error.
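The back-of-the-envelope arithmetic above, spelled out in R:

```r
square_size <- 6.36              # calibration checkerboard square size (mm)
volume_dim  <- 14 * square_size  # shortest grid dimension: about 89 mm
pos_error   <- 0.020             # ~20 micron positional error, in mm

100 * pos_error / volume_dim     # about 0.02 percent
```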

dltTestCalibration() returns all of the values used to calculate the stats in the summary output. So plot() and hist() can be used to look at the full error values. For instance, we can look at a histogram of all the IPD errors.

16. Create a histogram of all the inter-point distance errors, using hist().

> hist(dlt_test$ipd.error, breaks=20)
A histogram of inter-point distance errors (in mm); N = 1029.
The histogram shows that nearly all of the distances measured between random pairs of points across each checkerboard are within 0.100 mm of their actual distance.
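You can quantify that claim on your own run by computing the fraction of errors within a tolerance. In this sketch, ipd_error stands in for dlt_test$ipd.error from step 14 (the values here are hypothetical):

```r
# Hypothetical inter-point distance errors, in mm
ipd_error <- c(-0.021, 0.013, 0.054, -0.082, 0.031, 0.118)

# Proportion of point pairs whose distance error is within 0.1 mm
mean(abs(ipd_error) < 0.1)  # 5 of 6 pairs here
```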

17. Create a histogram of all the inter-point distance errors, considering only adjacent internal corners (in this case, points within about 4 mm of each other).

> hist(dlt_test$adj.pair.ipd.error, breaks=20)
A histogram of adjacent inter-point distance errors (in mm); N = 980.
When considering only random pairs of adjacent points on each checkerboard, nearly all of the inter-point distances are within 0.040 mm of their actual distance.

18. To test how reconstruction error varies as a function of the distance from the center of the calibrated volume (i.e. do reconstructed points in the periphery of the calibrated space have a higher error than points in the middle?), plot the adjacent inter-point distance as a function of the distance of each adjacent pair from the centroid of all adjacent pairs (an approximate center of the calibrated volume).

> plot(dlt_test$adj.pair.centroid.dist, dlt_test$adj.pair.ipd.error)
IPD error for adjacent points versus the distance of the adjacent pairs from the centroid.
There is no obvious trend in the plot, indicating that reconstruction error is uniform throughout the calibrated volume.

19. To test how reconstruction error varies as a function of the position along a particular axis, plot the adjacent inter-point distance as a function of the mean position of each adjacent pair along an axis.

> plot(dlt_test$adj.pair.mean.pos[, 3], dlt_test$adj.pair.ipd.error)
IPD error for adjacent points versus the mean position of adjacent pairs along the z-axis.
There is also no obvious trend in this plot, providing a second confirmation that reconstruction error is uniform throughout the calibrated volume. The axes here are defined in the same coordinate system as the 3D calibration coordinates estimated in the calibration step. Thus, the orientation of these axes relative to the cameras is arbitrary and will depend on the orientations of the checkerboard patterns used in the calibration.

Now that the cameras are accurately calibrated, the next section will provide instructions on photographing an object for the collection of shape data.

Go to the next step: Photographing an object
Go back to the previous step: Calibrating stereo cameras