Sunday, April 13, 2014

Unifying, reflecting and aligning

This is step 10 of 10 in the tutorial Collecting 3D shape data using StereoMorph

This section will demonstrate how to unify 3D landmarks and curve points from several aspects into a single point set, reflect landmarks missing on one side and align the whole set to the midline plane.

Unifying landmarks


If the same object has been photographed in more than one position (aspects), the landmarks and curve points collected from each aspect will be in different coordinate systems.

The same object photographed in three different positions (Aspect 1, Aspect 2 and Aspect 3), each in its own coordinate system.
In order to combine these three aspects, they must be unified based on shared points. This can be done in StereoMorph using the unifyLandmarks() function.

1. Load the StereoMorph package, if not already loaded, and the ‘rgl’ R package for viewing 3D points. Also ensure that the StereoMorph Tutorial folder is your current working directory.

> library(rgl)
> library(StereoMorph)

2. Specify the file paths of each aspect of 3D landmarks and curves. For demonstration, the landmarks and curve points reconstructed in the previous section will be used.

> landmarks_3d <- paste0("Landmarks and curves 3D/obj_a", 1:3, ".txt")

3. Read each of these into an array using the readLandmarksToArray() function.

> lm.array <- readLandmarksToArray(landmarks_3d, row.names=1)

4. Call unifyLandmarks() to unify all the aspects into a single point set.

> unify_lm <- unifyLandmarks(lm.array, min.common=5)

The unifyLandmarks() function begins by choosing two point sets and aligning them with each other based on three or more shared points. Then, any additional point sets are unified with this combined point set, one-by-one, saving each unified point set at each step as the new combined point set.

unifyLandmarks() finds an ideal sequence based on the error of unification (how well the common points line up with each other). By setting min.common to 5, unifyLandmarks() will only align two aspects if they share at least five points. This does not mean that all point sets have to share the same five points; once two point sets are unified, any points shared between those two point sets and the next point set will be used in the alignment. Thus, the number of shared points used for each unification will usually depend on the order in which the points are unified. While it’s possible to unify point sets based on only three shared points, this is likely to produce poor alignments, especially if the shared points happen to be nearly collinear.
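
Before unifying, you can check how many landmarks any two aspects share by counting the rows that are complete (non-NA) in both. This is just a quick sketch; it assumes readLandmarksToArray() has matched landmarks by name across aspects, filling missing ones with NA.

> # Number of landmarks present (non-NA) in both aspect 1 and aspect 2
> sum(complete.cases(lm.array[, , 1]) & complete.cases(lm.array[, , 2]))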

5. Use the summary() function to see the unification sequence and errors.

The first part shows the sequence in which the point sets were unified, followed by the root-mean-square error (here, in millimeters) for each unification step.

> summary(unify_lm)

unifyLandmarks Summary
Unification sequence: 1, 2, 3
Unification RMSE:
   0.7412698
   0.8801677

The second part shows the unification errors by landmark for each sequence (indicated by the number in double brackets). Both indicate that the unification errors are fairly low. The major source of error here is the difficulty of finding the exact same points from different views of an object.

Unification landmark errors by sequence:
    [[1]]
        basipterygoid_proc_post_R 0.160657
        jugal_upperbeak_R 1.046571
        opisthotic_process_R 0.8580522
        ...

    [[2]]
        foramen_magnum_sup 0.7183007
        mand_condyle_quadrate_lat_L 0.6221617
        mand_condyle_quadrate_lat_R 0.2217136
        ...

6. Plot the points using plot3d() from the ‘rgl’ package.

> pts <- na.omit(unify_lm$lm.matrix)
> r <- apply(pts, 2, 'max') - apply(pts, 2, 'min')
> plot3d(pts, aspect=c(r/r[3]), size=3)

All landmarks and curve points unified into a single point set.
7. Save the unified landmarks to a text file.

> write.table(unify_lm$lm.matrix, file="Landmarks 3D unified/obj.txt", quote=F, sep="\t", col.names=F, row.names=T)
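
Note that write.table() requires the output folder to exist; if ‘Landmarks 3D unified’ is not already present in your working directory, create it first (a one-line sketch):

> # Create the output folder if it isn't already present
> dir.create("Landmarks 3D unified", showWarnings=FALSE)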

Reflecting missing landmarks


When collecting landmarks from objects with left/right (bilateral) symmetry, it’s often not possible to collect every landmark on both sides of the object. In this case, the plane of bilateral symmetry can be used to reflect landmarks only present on one side across the midline, creating a set of landmarks complete on both sides (Klingenberg, Barluenga & Meyer 2002). This can be done using the reflectMissingLandmarks() function in StereoMorph.

1. Load the StereoMorph package, if not already loaded, and the ‘rgl’ R package for viewing 3D points. Also ensure that the StereoMorph Tutorial folder is your current working directory.

> library(StereoMorph)
> library(rgl)


2. Specify the file paths of a 3D point set of landmarks and/or curve points.

> lm_unified <- "Landmarks 3D unified/obj.txt"

3. Import the landmarks as a matrix.

> lm.matrix <- readLandmarksToMatrix(lm_unified, row.names=1)

4. Call reflectMissingLandmarks().

> reflect <- reflectMissingLandmarks(lm.matrix, average=TRUE)

Or, if you have just completed the unification step, you can use unify_lm$lm.matrix.

> reflect <- reflectMissingLandmarks(unify_lm$lm.matrix, average=TRUE)

Setting average to TRUE will reflect the missing landmarks across the midline and then average the left and right sides (so that they are mirror images).
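
If you would rather keep the originally digitized positions of landmarks that are present on both sides, averaging can be turned off; only the missing landmarks are then filled in by reflection (a sketch using the same documented average argument):

> reflect_noavg <- reflectMissingLandmarks(lm.matrix, average=FALSE)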

5. Print a summary of the bilateral errors.

> summary(reflect)

The bilateral errors are calculated by measuring the distance between a point present on both sides already and its contralateral point after reflection (and before averaging).

6. Plot the points using plot3d() from the ‘rgl’ package.

> pts <- na.omit(reflect$lm.matrix)
> r <- apply(pts, 2, 'max') - apply(pts, 2, 'min')
> plot3d(pts, aspect=c(r/r[3]), size=3)


Unified landmarks after reflecting landmarks missing on one side across the midline.
7. Save the reflected landmarks to a text file.

> write.table(reflect$lm.matrix, file="Landmarks 3D reflected/obj.txt", quote=F, sep="\t", col.names=F, row.names=T)

Aligning landmarks to the midline


The 3D landmark and curve points resulting from the previous steps of reconstruction and unification will be arbitrarily positioned and oriented in 3D space. For visualization purposes it’s often desirable to align a set of landmarks to the midline plane so that multiple objects can be viewed in a consistent orientation.

This can be done with the StereoMorph function alignLandmarksToMidline(). This function translates and rotates the points so that the midline points are aligned with the xy-plane. If bilateral landmarks were averaged in the reflection step, the midline points will lie exactly in the midline plane (i.e. they will have z-values of zero). Aligning landmarks to the midline plane does not change the scaling of the landmarks or the position of any landmarks relative to one another.

1. Load the StereoMorph package, if not already loaded, and the ‘rgl’ R package for viewing 3D points. Also ensure that the StereoMorph Tutorial folder is your current working directory.

> library(StereoMorph)
> library(rgl)


2. Specify the file paths of a 3D point set of landmarks and/or curve points.

> lm_reflected <- "Landmarks 3D reflected/obj.txt"

3. Import the landmarks as a matrix.

> lm.matrix <- readLandmarksToMatrix(lm_reflected, row.names=1)

4. Call alignLandmarksToMidline().

> align <- alignLandmarksToMidline(lm.matrix)

Or, if you have just completed the reflection step, you can use reflect$lm.matrix.

> align <- alignLandmarksToMidline(reflect$lm.matrix)

Calling summary() will print the midline alignment errors (the distances between the midline points and the estimated midline plane). However, since the bilateral landmarks were averaged in the reflection step, all of the midline points were already aligned to the midline plane. Thus, all of the midline alignment errors are zero.
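
You can confirm this directly: after averaging and alignment, every midline point should have a z-value of essentially zero (a quick sketch):

> # Count points lying in the midline plane (z of approximately zero) after alignment
> sum(abs(align$lm.matrix[, 3]) < 1e-10, na.rm=TRUE)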

5. Plot the points using plot3d() from the ‘rgl’ package.

> pts <- na.omit(align$lm.matrix)
> r <- apply(pts, 2, 'max') - apply(pts, 2, 'min')
> plot3d(pts, aspect=c(r/r[3]), size=3)


The landmarks are now aligned to the midline plane, and landmarks on opposite sides have the same x- and y-coordinates and equal but opposite z-values.

> align$lm.matrix['quadrate_jugal_L', ]
[1] -45.431501 -6.010435 16.372618
> align$lm.matrix['quadrate_jugal_R', ]
[1] -45.431501 -6.010435 -16.372618
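
And because the alignment is rigid, inter-landmark distances are unchanged by it, which you can verify with a quick check (a sketch; it assumes no landmarks were added or dropped between the two matrices, so the result should be essentially zero):

> # Alignment only translates and rotates, so pairwise distances are preserved
> max(abs(dist(na.omit(reflect$lm.matrix)) - dist(na.omit(align$lm.matrix))))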


6. Save the aligned landmarks to a text file.

> write.table(align$lm.matrix, file="Landmarks 3D aligned/obj.txt", quote=F, sep="\t", col.names=F, row.names=T)

Unified landmarks after aligning the landmarks to the midline.

In ten steps, we've constructed a stereo camera setup, photographed and digitized landmarks and curves on the specimen and created a 3D reconstruction of these landmarks and curves. I hope that you've found this tutorial helpful and easy to follow. Please feel free to leave questions, comments and suggestions.
Return to the main tutorial page: Collecting 3D shape data using StereoMorph
Go back to the previous step: Reconstructing 2D points and curves into 3D


Friday, April 11, 2014

Photographing an object

This is step 7 of 10 in the tutorial Collecting 3D shape data using StereoMorph

This section provides tips on how to photograph an object for shape data collection.

Recommended materials for this section:
  • A yard or less of black velvet

This is the step where the DLT method has a major advantage over other 3D morphometric methods. Once the cameras are calibrated, the number of objects that can be photographed is only limited by the time it takes to position and photograph each object.

It is best to have a uniform background that provides good contrast with your specimen. First, this can decrease the photo file size by as much as half (a large, uniform black area compresses to much less data than a multi-colored, noisy background). If you’re taking several hundred photographs, this is advantageous for data storage. Second, it’s easier to discern points on the edge of the specimen when the edge is clearly distinguishable from the background. For light-colored specimens, black velvet works well. The cheapest stuff available at fabric stores works great and only costs about $10 a yard.

A shell on black velvet. Black velvet works great as a solid, black background.
If you need to collect landmarks from several different places on an object that are not visible in a single camera view, reposition the object a few times, taking a photograph from both camera views each time. For the StereoMorph functions, these different orientations of the object are referred to as “aspects”.

The tutorial data set contains landmarks and curves from these three different aspects of a Canada Goose skull. The first aspect provides views of the ventral aspect (underside) of the skull.

First aspect for digitizing landmarks on the ventral side of the skull. The left and right images are the first and second camera views. The specimen is in the same position in both images, just viewed from different perspectives.
From the second aspect, landmarks on the lateral or side aspect of the skull can be digitized.

Second aspect for digitizing landmarks on the lateral aspect of the skull.
And the third aspect offers views of the back of the skull.

Third aspect for digitizing landmarks on the back of the skull.
Depending on the data you want to collect you might be able to get away with a single image of each specimen, in which case landmarks and curves only have to be digitized in two images. If you have multiple aspects, you will need some overlap in landmarks among the images (at least three, preferably five to six) in order to combine all of the points into a single 3D point set (detailed in the section “Unifying, Reflecting and Aligning”). However, you don't have to digitize the same landmarks in every aspect.

Lastly, sometimes it's necessary to use a reference marker if it's difficult to find the exact same point between two views. I do this when digitizing beaks with a broad, flat tip. If using museum specimens, one should of course use tape that does not leave a residue.

No-residue tape can be used for points that are difficult to identify exactly in two different views.
Go to the next step: Digitizing photographs
Go back to the previous step: Testing the calibration accuracy

Testing the calibration accuracy

This is step 6 of 10 in the tutorial Collecting 3D shape data using StereoMorph

The previous section demonstrated how to use a checkerboard pattern of known square size and the dltCalibrateCameras() function to calibrate two cameras in stereo. While dltCalibrateCameras() returns calibration and reconstruction RMS errors, these are measures of how well the DLT camera model fits the calibration points and not the reconstruction accuracy per se. Moreover, the RMS errors are scale-independent. If we have not measured the calibration checkerboard square size correctly, the RMS errors will be unaffected but our reconstructions will be improperly scaled. This section will demonstrate how to use a checkerboard (ideally having a square size different from that used in the calibration step) to test the accuracy of a calibrated stereo camera setup.

1. Repeat the steps in “Creating a Checkerboard Pattern” to create another checkerboard of a different square size than that used in the calibration. This will allow you to test whether you have the proper scaling for the calibration checkerboard (if they had the same square size, we would not be able to test this). For this tutorial, I used a checkerboard printed at 9% scaling for the calibration and a checkerboard printed at 6% scaling to test the calibration accuracy.

Two checkerboards, printed at 9% and 6% scaling. In this tutorial, the cameras are calibrated with the left pattern and the calibration is tested with the right.
2. Repeat the steps in Measuring Checkerboard Square Size to measure the square size of the test checkerboard pattern. The 6% scaled test checkerboard in this tutorial has a square size of 4.233 mm when measured using a precision ruler.

Measuring the square size of the test checkerboard pattern (21 x 14, printed at 6% scaling).
3. Take photographs of the new checkerboard pattern as in Calibrating Stereo Cameras, ensuring a sampling of points throughout the calibrated volume (anywhere the object you're digitizing could conceivably be).

Eight photos of the test checkerboard (a different size than the calibration checkerboard) per camera view used in the test calibration step. The tutorial includes the test checkerboard in a total of 10 different positions.
4. Import the photographs from different views into two different folders. For demonstration, we’ll use the test calibration photographs in the folder ‘Test images’ (in the StereoMorph Tutorial folder). Photographs from camera one are in the folder ‘v1’ and photographs from camera two are in the folder ‘v2’.

5. Load the StereoMorph library into the current R session and ensure that the StereoMorph Tutorial folder is your current working directory.

> library(StereoMorph)

Just as in the calibration step, we’ll use the findCheckerboardCorners() function to find the internal corners in each calibration image.

6. Specify the number of internal corners in the checkerboard.

> nx <- 21
> ny <- 14


7. Specify the file locations of the test calibration images, where to save the checkerboard corners and, if desired, where to save the verification images.

> test_image_file <- paste0('Test images/v', c(1, 2))
> test_corner_file <- paste0('Test corners/v', c(1, 2))
> test_verify_file <- paste0('Test images verify/v', c(1, 2))


8. Call findCheckerboardCorners().

> corners <- findCheckerboardCorners(image.file=test_image_file, nx=nx, ny=ny, corner.file=test_corner_file, verify.file=test_verify_file)

The function will find the corners successfully for all but three of the images in the tutorial set. Again, don’t worry if the function fails for a couple of images; these will be ignored in subsequent steps.

9. Once findCheckerboardCorners() has finished running, check all of the verification images to be sure that the order is consistent within each camera view and between the two views. For the tutorial images the corners in the first view are in the reverse order relative to the order in the second view (as was also seen with the calibration images). This will be corrected when the corners are read in from the corner files.

10. Import the corners just saved to individual files by constructing a two-column matrix of file paths, where the columns correspond to each view.

> corners_by_file <- cbind(
paste0(test_corner_file[1], '/', paste0('DSC_00', 11:20, '.txt')),
paste0(test_corner_file[2], '/', paste0('DSC_00', 11:20, '.txt')))


11. Call readCheckerboardsToArray() to read all of the corner matrices into an array; include all files – empty files will read in as NAs.

> test_corners <- readCheckerboardsToArray(file=corners_by_file, nx=nx, ny=ny, col.reverse=c(F, T), row.reverse=c(F, T))

As in the calibration step, the vector c(F, T) is passed to col.reverse and row.reverse to reverse the order of the corners from the second view relative to the first.

Since we are importing corners from 10 images, the third dimension of the test_corners array is 10.

> dim(test_corners)
[1] 294 2 10 2


12. Set grid_size_test to the square size of the test calibration checkerboard. The tutorial test calibration checkerboard, measured using a precision ruler (see Measuring Checkerboard Square Size), is approximately 4.233 mm.

> grid_size_test <- 4.2327

13. Load in the calibration coefficients.

> cal.coeff <- as.matrix(read.table(file="cal_coeffs.txt"))

14. Call dltTestCalibration(). Image pairs for which corners were found in neither or only one image will be ignored.

> dlt_test <- dltTestCalibration(cal.coeff=cal.coeff, coor.2d=test_corners, nx=nx, grid.size=grid_size_test)

15. Use the summary() function to print a summary of the accuracy test.

> summary(dlt_test)

dltTestCalibration Summary
    Number of grids: 7
    Number of points: 1029
    Aligned ideal to reconstructed (AITR) point position errors:
        AITR RMS Errors (X, Y, Z): 0.02343, 0.01799, 0.02063
        Mean AITR Distance Error: 0.03297
        AITR Distance RMS Error: 0.03605
    Inter-point distance (IPD) errors:
        IPD RMS Error: 0.03090397
        IPD Mean Absolute Error: 0.02392322
        Mean IPD error: -0.002694208
    Adjacent-pair distance errors:
        Mean adjacent-pair distance error: 0.0004335479
        Mean adjacent-pair absolute distance error: 0.01592422
        SD of adjacent-pair distance error: 0.01840659
    Epipolar errors:
        Epipolar RMS Error: 1.594549 px
        Epipolar Mean Error: 1.555131 px
        SD of Epipolar Error: 0.352441 px


Now to unpack this summary. One of the challenges in assessing the accuracy of a DLT calibration is that any reconstructed points will be in the coordinate system of the points used to calibrate the cameras. This means that even if we had an object with points of known 3D position, reconstructing the object using the DLT coefficients would yield 3D points arbitrarily translated and rotated to somewhere in 3D space. We'd have to perform some alignment step in order to compare the reference 3D points to the reconstructed points, and the alignment will cause underestimation of larger errors and overestimation of smaller errors. The best solution is to use a variety of different accuracy metrics that, when taken together, provide a complete assessment of accuracy.

dltTestCalibration() provides four assessments of calibration accuracy:
  1. Aligned ideal to reconstructed (AITR) error. For every checkerboard that is reconstructed, the function takes an ideal checkerboard of the same dimensions (uniform square sizes and planar) and aligns the ideal corner points to the reconstructed corner points using least squares alignment. Then, the distance is measured between each ideal point and its corresponding reconstructed points. If the points were perfectly reconstructed, the ideal and reconstructed points would overlap perfectly. The “AITR RMS (root mean square) errors” are first measured along each axis (x, y and z in the coordinate system of the calibration points). This is one way to quantify how accuracy differs along different dimensions. The “mean AITR distance error” is the mean 3D distance between ideal and reconstructed points. This will usually be larger than any of the single axis errors since it incorporates error along all axes. This is also returned as RMS error. One disadvantage of this measure is that the ideal grid will be pulled toward areas of high error to minimize the total alignment error. This can cause underestimation of larger errors and overestimation of smaller errors.
  2. Inter-point distance (IPD) error. This summarizes distance rather than positional errors. For every reconstructed checkerboard random pairs of points (without re-sampling) are chosen and the distance between them is compared to the actual distance in an ideal grid (again, uniform square sizes and planar). This measure avoids the problems with the alignment step in AITR error but doesn't readily provide any information of error along a particular dimension (although this could perhaps be assessed by taking into account the positions of the reconstructed points). The distance errors are returned as “IPD RMS Error” and “IPD Mean Absolute Error”. The reconstructed distances can be either shorter or longer than the actual distance. The "Mean IPD error" takes the mean of these errors. If there is no bias toward over- or underestimation of distance this should be nearly zero. The results will differ slightly at each run because the point pairs are chosen randomly.
  3. Adjacent-pair distance errors. This is identical to IPD error except that randomly chosen points are adjacent on the grid. This means the ideal distances are uniform and the minimum possible distance for IPD error assessment. This is a common stereo camera error assessment used in the literature (e.g. Tashman & Anderst 2003; Brainerd et al. 2010). Since the points in each pair are uniformly close together, their mean position (the mid-point) can be used to look at how IPD error varies as a function of position in the calibration volume. These errors will usually be slightly less than the general IPD errors since error is likely to be greater for points at a greater distance from one another.
  4. Epipolar errors. In a stereo camera setup, a point in one camera view must fall along a line in a second camera view. This line is that point's epipolar line. The distance between a point's epipolar line and its corresponding point in that second camera view is the epipolar error. Since the input to dltTestCalibration() includes the same point in two or more camera views, we can use epipolar error to assess calibration accuracy. Epipolar error must be low to identify corresponding points along curves in different views. The mean and standard deviation of epipolar error are returned in pixels and should be less than a couple of pixels (a short sketch after this list shows how to compute the epipolar distance for a single point pair).
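
As an illustration, the epipolar distance for any single point pair can be computed directly from the calibration coefficients using StereoMorph's dltEpipolarDistance() function. A minimal sketch using the first test corner in the first image pair (it assumes corners were found in both views of that pair):

> # Pixel coordinates of the same corner in the two views (corner 1, image 1)
> p1 <- test_corners[1, , 1, 1]
> p2 <- test_corners[1, , 1, 2]
> # Distance (in pixels) between p2 and the epipolar line of p1 in view 2
> dltEpipolarDistance(p1=p1, p2=p2, cal.coeff=cal.coeff)
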
So how accurate is the calibration? Millimeters were used as units in measuring the checkerboard square size, so all values not in pixels are in millimeters. The positional errors along all three dimensions are around 20 microns on average and the different distance errors range from 16 to 36 microns. The "mean IPD error" shows a slight bias towards underestimating inter-point distances, but only by about 3 microns on average. And for points closer together, this bias is less than a micron (0.0004 mm). Lastly, the epipolar error is less than two pixels on average, with a low standard deviation.

It's important to compare these errors to the total calibrated volume. Since the calibration checkerboard is 21 x 14 and the square size is approximately 6.36 mm, each dimension of the calibrated volume is at least 89 mm (14*6.36). 20 microns represents 0.02% positional error (0.020 mm/89 mm), an extremely low error.
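
The same back-of-the-envelope arithmetic can be run in R (a sketch):

> # Smallest checkerboard dimension in mm, and the 20-micron error as a percentage of it
> 14 * 6.3646
> 100 * 0.020 / (14 * 6.3646)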

dltTestCalibration() returns all of the values used to calculate the stats in the summary output. So plot() and hist() can be used to look at the full error values. For instance, we can look at a histogram of all the IPD errors.

16. Create a histogram of all the inter-point distance errors, using hist().

> hist(dlt_test$ipd.error, breaks=20)
A histogram of inter-point distance errors (in mm); N = 1029.
The histogram shows that nearly all of the distances measured between random pairs of points across each checkerboard are within 0.100 mm of their actual distance.

17. Create a histogram of all the inter-point distance errors, considering only adjacent internal corners (in this case, points within about 4 mm of each other).

> hist(dlt_test$adj.pair.ipd.error, breaks=20)
A histogram of adjacent inter-point distance errors (in mm); N = 980.
When considering only random pairs of adjacent points on each checkerboard, nearly all of the inter-point distances are within 0.040 mm of their actual distance.

18. To test how reconstruction error varies as a function of the distance from the center of the calibrated volume (i.e. do reconstructed points in the periphery of the calibrated space have a higher error than points in the middle?), plot the adjacent inter-point distance as a function of the distance of each adjacent pair from the centroid of all adjacent pairs (an approximate center of the calibrated volume).

> plot(dlt_test$adj.pair.centroid.dist, dlt_test$adj.pair.ipd.error)
IPD error for adjacent points versus the distance of the adjacent pairs from the centroid.
There is no obvious trend in the plot, indicating that reconstruction error is uniform throughout the calibrated volume.

19. To test how reconstruction error varies as a function of the position along a particular axis, plot the adjacent inter-point distance as a function of the mean position of each adjacent pair along an axis.

> plot(dlt_test$adj.pair.mean.pos[, 3], dlt_test$adj.pair.ipd.error)
IPD error for adjacent points versus the mean position of adjacent pairs along the z-axis.
There is also no obvious trend in this plot, providing a second confirmation that reconstruction error is uniform throughout the calibrated volume. The axes here are defined in the same coordinate system as the 3D calibration coordinates estimated in the calibration step. Thus, the orientation of these axes relative to the cameras is arbitrary and will depend on the orientations of the checkerboard patterns used in the calibration.

Now that the cameras are accurately calibrated, the next section will provide instructions on photographing an object for the collection of shape data.

Go to the next step: Photographing an object
Go back to the previous step: Calibrating stereo cameras

Tuesday, April 8, 2014

Calibrating stereo cameras

This is step 5 of 10 in the tutorial Collecting 3D shape data using StereoMorph

This section demonstrates how to photograph the checkerboard created in Creating a Checkerboard Pattern in different positions and orientations and use these photos to calibrate two cameras arranged in stereo.

Example of checkerboard photographed in different positions for stereo camera calibration.
Camera calibration using the direct linear transformation (DLT) method requires two sets of points: (1) a set of 3D coordinates and (2) their corresponding 2D pixel coordinates in each camera view. These two point sets are used to calculate calibration coefficients. These coefficients can then be used to reconstruct any point in 3D given its 2D pixel coordinates in at least two camera views. DLT calibration has traditionally been done using a "calibration object", typically a 3D box-shaped structure filled with markers at known 3D positions.

Example calibration object. From XROMM Wiki at wiki.brown.edu.

This traditional approach has several drawbacks, however. First, designing and building a 3D calibration object is not trivial: the object must have several points of known 3D position relative to one another, and a sufficient number of these points, filling the 3D volume, should be visible from a single camera view. Second, to achieve high accuracy, these objects must be made using high-precision machining. Third, the calibration points must be digitized manually every time the cameras are calibrated.

Rather than use a 3D calibration object photographed in a single position, StereoMorph performs a DLT calibration using a 2D checkerboard pattern photographed in several positions and orientations. Photographs from two camera views of a planar checkerboard in a single orientation are insufficient to calibrate cameras in stereo because all of the checkerboard points lie in a single plane. Photographing a checkerboard in several different positions throughout the volume solves this problem by providing a sample of points throughout the volume. However, this poses a new problem. Each plane of points is in a different coordinate system; the 3D position and orientation of one plane relative to another is unknown.

StereoMorph solves this by using the nlminb() function (in the R package ‘stats’) to estimate the six transformation parameters (three translation, three rotation) required to transform the first checkerboard into each subsequent checkerboard in 3D space. The transformation parameters that minimize the calibration error are chosen as the optimal parameters. These transformation parameters are then used to generate the 3D coordinates needed to compute the DLT calibration coefficients.
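
To make these six parameters concrete, here is a toy version of such a rigid transformation in R (a sketch only, not StereoMorph's internal code): three angles build a rotation matrix and three translations shift the rotated points.

> # Apply a rigid transform to an n x 3 matrix of points: rotate about the
> # x, y and z axes by the angles in r (radians), then translate by tvec
> rigid_transform <- function(pts, r=c(0, 0, 0), tvec=c(0, 0, 0)) {
      Rx <- matrix(c(1, 0, 0, 0, cos(r[1]), -sin(r[1]), 0, sin(r[1]), cos(r[1])), 3, 3, byrow=TRUE)
      Ry <- matrix(c(cos(r[2]), 0, sin(r[2]), 0, 1, 0, -sin(r[2]), 0, cos(r[2])), 3, 3, byrow=TRUE)
      Rz <- matrix(c(cos(r[3]), -sin(r[3]), 0, sin(r[3]), cos(r[3]), 0, 0, 0, 1), 3, 3, byrow=TRUE)
      pts %*% t(Rz %*% Ry %*% Rx) + matrix(tvec, nrow(pts), 3, byrow=TRUE)
  }
> # e.g., tilt a hypothetical checkerboard plane and shift it 50 units along z:
> # grid_t <- rigid_transform(grid, r=c(0, 0.2, 0), tvec=c(0, 0, 50))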

1. Once you have arranged two cameras in a stereo setup (see the previous section), photograph the checkerboard pattern in several different positions and orientations within the volume to be calibrated (the volume in which the camera views overlap).

Eight photos of the checkerboard per camera view used in this tutorial for camera calibration.

In the image above the first camera view corresponds to the camera positioned on the floor (in the sketch to the right) and the second camera view corresponds to the camera sitting on the top of the table. Note in particular that the second camera is viewing the checkerboard pattern upside-down relative to the first camera.

2. Import the photographs from different views into two different folders. For demonstration, we’ll use the calibration photographs in the folder ‘Calibration images’ (in the StereoMorph Tutorial folder). Photographs from camera one are in the folder ‘v1’ (view 1) and photographs from camera two are in ‘v2’. For the rest of this tutorial, views 1 and 2 will be used to refer to photographs taken from camera 1 and 2, respectively.

The camera arrangement used in this tutorial.

3. Load the StereoMorph library into the current R session and ensure that the StereoMorph Tutorial folder is your current working directory.

> library(StereoMorph)

We’ll use the findCheckerboardCorners() function, introduced in the ‘Auto-detecting Checkerboard Corners’ section, to find the internal corners in each calibration image.

4. First, specify the number of internal corners in the checkerboard.

> nx <- 21
> ny <- 14


5. Then, specify the file locations of the calibration images, where to save the checkerboard corners and, if desired, where to save the verification images showing the automatically detected corners.

> image_file <- paste0('Calibration images/v', c(1, 2))
> corner_file <- paste0('Calibration corners/v', c(1, 2))
> verify_file <- paste0('Calibration images verify/v', c(1, 2))


Since the images are in two different folders (‘v1’ and ‘v2’), the paste0() function is used to create a vector (example below) for the two different folders within ‘Calibration images’.

> paste0('Calibration images/v', c(1, 2))
[1] "Calibration images/v1" "Calibration images/v2"


The findCheckerboardCorners() function will automatically assign save-as filenames.

6. Call findCheckerboardCorners(). Note that this function can take several seconds per photo to run.

> corners <- findCheckerboardCorners(image.file=image_file, nx=nx, ny=ny, corner.file=corner_file, verify.file=verify_file)

By default, the findCheckerboardCorners() function will print its progress to the R console as each image is loaded and whether or not the specified number of internal corners were found (note that if the number of internal corners found does not match the number specified, no corners are saved).

Loading image 1 (DSC_0002.JPG)...
    294 corners found successfully.
Loading image 2 (DSC_0003.JPG)...
    294 corners found successfully.
...


The function will find the corners successfully for all of the images in the tutorial set. When using your own images don’t worry if the function fails for a couple of images; these will be ignored in subsequent steps. Four to five pairs of calibration images are usually sufficient for an accurate calibration.

7. Once findCheckerboardCorners() has finished running, check all of the verification images to be sure that the order is consistent within each camera view and between the two views.

The checkerboard in two different positions from the same camera view (view 1)

Within the same view, the corners will be returned in the same order as long as the orientation of the checkerboard does not change radically within the same set. However, in the case of this calibration the order is not the same between the two views.

An image pair from two different camera views

Although it looks like the corners in view 1 and 2 have been returned in the same order, they are actually in the reverse order. Camera 1 is viewing the checkerboard head-on while camera 2 is viewing the checkerboard from the top. The point circled in red in view 2 and closest to the red mat on the table top is the same as the point circled in blue in view 1. We can easily fix this when we import the corners using the readCheckerboardsToArray() function.

8. Import the corners just saved to individual files by constructing a two-column matrix of file paths, where the columns correspond to each view. The file paths must be specified as a matrix because the readCheckerboardsToArray() function will use the matrix structure to separate the corners by view.

> corners_by_file <- cbind(
paste0(corner_file[1], '/', paste0('DSC_000', 2:9, '.txt')),
paste0(corner_file[2], '/', paste0('DSC_000', 2:9, '.txt')))


9. Call readCheckerboardsToArray() to read all of the corner matrices into an array; include all files – empty files will read in as NAs. The array will have four dimensions: the first two dimensions correspond to the rows and columns of each corner matrix (294 x 2), the third corresponds to the number of positions of each checkerboard (8) and the fourth corresponds to the number of camera views (2).

> corners <- readCheckerboardsToArray(file=corners_by_file, nx=nx, ny=ny, col.reverse=c(F, T), row.reverse=c(F, T))

In passing the vector c(F, T) to the col.reverse parameter (‘F’ for FALSE and ‘T’ for TRUE), the function will maintain the current order of the corners in the first view and reverse the column order of the corners in second view. The same is done for the row order via the row.reverse parameter. This reverses the order of all the corners for each image from the second camera view. This is essential – the same row in each corner matrix must correspond to the same point across all of the images (among and between views).
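
As a quick check (a sketch), the dimensions of the imported array should match this description, and the effect of reversing both rows and columns is easiest to see on a toy grid of corner indices (not StereoMorph code):

> dim(corners)
[1] 294   2   8   2
> # Toy 2 x 3 "grid" of corner indices, ordered first along one dimension, then the other
> g <- matrix(1:6, nrow=2, ncol=3, byrow=TRUE)
> g[nrow(g):1, ncol(g):1]   # rows and columns reversed: the corner order is fully reversed
     [,1] [,2] [,3]
[1,]    6    5    4
[2,]    3    2    1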

10. Set grid_size_cal to the square size of the calibration checkerboard. The tutorial calibration checkerboard, measured using a precision ruler (see Measuring Checkerboard Square Size), is approximately 6.365 mm.

> grid_size_cal <- 6.3646

11. Call dltCalibrateCameras(). This can take from 30 seconds to five minutes depending on the number of images and the speed of your computer. We’ll just use the first six image pairs of the checkerboard. Image pairs for which corners were found in neither or only one image will be ignored.

> dlt_cal_cam <- dltCalibrateCameras(coor.2d=corners[, , 1:6, ], nx=nx, grid.size=grid_size_cal, print.progress=TRUE)

The function begins by reducing the grid point density.

Reduce grid point density (12 total)
    1) Aspect 1, View 1; Mean fit error: 0.45 px; Max: 1.78 px
    2) Aspect 1, View 2; Mean fit error: 0.65 px; Max: 2.85 px
...


This fits a camera perspective model to the checkerboard corner points and reduces this corner set to nine points (3x3). These nine points contain the same information as the original number of points (here, 294) but allow the optimization function to proceed much more quickly. The mean fit error should be less than a pixel.

The function then searches for the optimal parameters to transform each checkerboard, yielding a set of 3D coordinates that results in the lowest calibration error.

Full Transform RMSE Minimization
Number of parameters: 30
Number of points: 54


The first checkerboard (or grid) is fixed in 3D space at (0, 0, 0). dltCalibrateCameras() then searches for a set of six parameters per grid (three for translation and three for rotation) to transform each grid relative to the first. Here, we have six image pairs corresponding to a grid in six different orientations. Because the first grid is fixed, the six transformation parameters are only needed for five grids. This means a total of 30 (6 times 5) parameters must be estimated. Since the point density was reduced to nine per checkerboard, there are 54 total points (9 times 6).

The minimization begins with three grids. Once the minimization converges to an appropriate solution, the minimization adds another grid, using the previously optimized parameters as starting parameters. This proceeds sequentially until all of the input checkerboards have been included (the sequential addition of grids helps in proper convergence).

12. Use the summary() function to print a summary of the calibration run.

> summary(dlt_cal_cam)

dltCalibrateCameras Summary
    Minimize RMS Error by transformation
        Total number of parameters estimated: 30
        Final Minimum Mean RMS Error: 0.885
        Total Run-time: 35.97 sec

    Mean Reconstruct RMSE: 0.414
    Calibration Coefficient RMS Error:
        1: 0.666
        2: 0.66
    3D Coordinate Matrix dimensions: 1764 x 3


The reconstruction and calibration errors are decent proxies for the accuracy of the calibration; an RMS error of less than one pixel is good. If the RMS error is greater than one pixel, there might be a problem somewhere in the calibration. Testing the accuracy (covered in the next section) can show this more definitively.

The function returns the calibration coefficients in cal.coeff:

> dlt_cal_cam$cal.coeff
               [,1]          [,2]
 [1,]  2.772121e+01 -2.557153e+01
 [2,] -4.918245e-02 -5.183902e+00
 [3,]  6.226293e+00  1.498422e+00
 [4,]  2.329910e+03  2.187563e+03
 [5,]  1.128266e+00 -9.899814e-01
 [6,] -2.795001e+01  7.701885e+00
 [7,] -1.246494e+00  2.450539e+01
 [8,]  1.608486e+03  6.728743e+02
 [9,] -3.854642e-06 -4.570336e-05
[10,] -5.200086e-04 -2.396548e-03
[11,]  2.593912e-03  1.183541e-03


Note that there are 11 rows and two columns. The 11 rows correspond to the 11 DLT coefficients and the two columns correspond to the two camera views. This 11x2 matrix can be used to reconstruct any point, given its 2D pixel coordinates in both camera views. That is, the calibration coefficients are the only values we need from this section to perform the rest of the steps in this tutorial.
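
For instance, the corners from any calibration image pair can be reconstructed immediately (a sketch; dltReconstruct() is the reconstruction function used later in this tutorial, and the element name coor.3d assumed here for the returned 3D points may differ):

> # Reconstruct the corners of the first image pair with the new coefficients;
> # corners[, , 1, ] is a 294 x 2 x 2 array (corners x pixel coordinates x view)
> rec <- dltReconstruct(cal.coeff=dlt_cal_cam$cal.coeff, coor.2d=corners[, , 1, ])
> head(rec$coor.3d)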

13. Save the calibration coefficients to a text file.

> write.table(x=dlt_cal_cam$cal.coeff, file="cal_coeffs.txt", row.names=F, col.names=F, sep="\t")

The next section will detail how to determine the accuracy of the calibration.

Go to the next step: Testing the calibration accuracy
Go back to the previous step: Arranging the cameras

Saturday, April 5, 2014

Auto-detecting checkerboard corners

This is step 2 of 10 in the tutorial Collecting 3D shape data using StereoMorph

Given a photograph of a checkerboard pattern,


the findCheckerboardCorners() function in the StereoMorph package (v1.2 and higher) can automatically detect the internal corners of the checkerboard


and return these as a series of pixel coordinates to sub-pixel resolution (currently, this function only works with JPEG images).

              [,1]     [,2]
    [1,]  575.1566 387.9233
    [2,]  755.2105 395.1164
    [3,]  935.8325 402.5703
    [4,] 1115.8718 411.0474
    ...

Automated detection of the corners of a checkerboard pattern in a photograph will be used multiple times in this tutorial: to measure the size of checkerboard squares in real-world coordinates, to calibrate a set of cameras in stereo and to test the calibration accuracy. Automated corner detection can also be used to scale photographs for tasks such as 2D morphometrics, eliminating the need to manually digitize a series of points along a ruler.

1. Start by specifying the number of internal corners in the checkerboard. Note that the number of internal corners is not the number of squares but rather the number of corners where black squares adjoin one another. All the checkerboards used in this tutorial have 294 internal corners arranged in a 21 x 14 grid.

> nx <- 21
> ny <- 14

Which number you assign to nx versus ny is arbitrary, so long as you are consistent throughout.

2. Specify the location of the image to be read, the location where the corners should be saved and where to save a “verify image”. The verify image is a copy of the input image with the corners drawn in so that you can check that the correct corners were found and determine in what order they were read (the corner.file and verify.file arguments are optional).

> image.file <- 'Calibration images/v1/DSC_0002.JPG'
> corner.file <- 'Calibration corners/v1/DSC_0002.txt'
> verify.file <- 'Calibration images verify/v1/DSC_0002.JPG'

3. Call findCheckerboardCorners().

> corners <- findCheckerboardCorners(image.file=image.file, nx=nx, ny=ny, corner.file=corner.file, verify.file=verify.file)

Since the print.progress argument to findCheckerboardCorners() is TRUE by default, the progress of the function will be printed to the R console. Since corner detection can take several seconds per image, this allows users to track the progress of the function. Whether the expected number of internal corners was found will also be reported to the R console.

You might have images for which findCheckerboardCorners() was unable to find the corners either because of lighting issues or because the checkerboard was at a very oblique orientation. If these are the calibration images, don’t worry – the corners do not have to be found in every image pair for an accurate calibration (at least five images should provide a good calibration). The unsuccessful images will be ignored in subsequent steps. Note that the function assumes, by default, that the checkerboard squares in the image have a perimeter of at least 140 pixels (35 x 35 pixels).

The corners are returned and saved into the variable corners as a matrix with 294 rows and 2 columns. The corners are found to sub-pixel resolution by using information from a 23 x 23 pixel region surrounding each corner to find a more exact corner position.

> corners
          [,1]     [,2]
[1,]  575.1566 387.9233
[2,]  755.2105 395.1164
[3,]  935.8325 402.5703
[4,] 1115.8718 411.0474
...


The order of the returned corners is important for calibrating cameras in stereo and for testing their accuracy. Between the two views and from photo to photo, the corners must be found in the same order. In this way, a corner in a particular row of the matrix will always correspond to the same corner on the checkerboard.

findCheckerboardCorners() will nearly always return the corners in the same order from photo to photo as long as the checkerboard is imaged in a similar orientation. The function looks for the top-left corner relative to the center of the checkerboard. The corners are then ordered first along the nx dimension and then along the ny dimension (this is why it is important that nx differs from ny). This can be verified by checking the verification image (example below) in which the first corner is indicated by a red circle, the last by a blue circle and all intermediate corners are connected by a green line (easy to remember as RGB).
The verification image indicating the found corners and the order in which they were found. The first corner is circled in red, the last in blue with intermediate points connected by green lines.
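
If you prefer to check the corner order in R rather than in the verification image, a quick plot of the corner matrix works as well (a sketch; note that pixel y-coordinates increase downward, so the plot will appear vertically flipped relative to the photograph):

> # Plot all corners and highlight the first (red) and last (blue)
> plot(corners, asp=1)
> points(corners[1, , drop=FALSE], col='red', pch=19, cex=2)
> points(corners[nrow(corners), , drop=FALSE], col='blue', pch=19, cex=2)
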
4. To find the checkerboard corners in multiple images, specify either a folder or a vector of files for image.file, corner.file and verify.file.

> image.file <- 'Calibration images/v1'
> corner.file <- 'Calibration corners/v1'
> verify.file <- 'Calibration images verify/v1'


The function will read all of the images in the folder(s) specified by image.file and the names of the images will be used to name the corner and verification images (with the proper file extensions). In this case, all files in the image.file folder(s) must be images.

5. Then call findCheckerboardCorners() just as before.

> corners <- findCheckerboardCorners(image.file=image.file, nx=nx, ny=ny, corner.file=corner.file, verify.file=verify.file)

Since 8 images are in the image.file folder, the function outputs an array of corners with the dimensions 294 x 2 x 8. The first matrix of corners can be obtained as follows.

> corners[, , 1]
          [,1]     [,2]
[1,]  575.1566 387.9233
[2,]  755.2105 395.1164
[3,]  935.8325 402.5703
[4,] 1115.8718 411.0474
...


In the next step we’ll use findCheckerboardCorners() to measure the printed size of the squares in a checkerboard pattern.

Go to the next step: Measuring checkerboard square size
Go back to the previous step: Creating a checkerboard pattern