Saturday, August 23, 2014

Arranging the Cameras

This is step 4 of 10 in the tutorial Collecting 3D shape data using StereoMorph

This section covers the key considerations in arranging cameras for a stereo setup.

Materials needed for this section:
  • 2 cameras (preferably DSLR cameras with minimal distortion lenses)
  • 2 camera remotes
  • 2 sturdy tripods
  • Masking or colored tape

General considerations:

Whether you’re collecting landmarks or curves, the number of cameras you include, the lenses you use, and how you position the cameras all depend on the object(s) you want to collect landmark and curve data from.

Here are some general principles:
  1. The views among the cameras must overlap. Since 3D reconstruction requires the pixel coordinates of a point in at least two cameras, the point must be visible in at least two camera views.
  2. Theoretically, there is a trade-off between the ease of digitizing and reconstruction accuracy. For instance, if the angle between two camera views in a stereo setup is 90 degrees,
    Two-camera stereo setup with the cameras at 90 degrees relative to one another.
    you will have high reconstruction accuracy (together, the two views give you full information on a point’s position along all three axes); however, the views will be so divergent that it will be difficult to identify the same point in both views. A point visible in one view may not even be visible in the other. If the angle between the two cameras is reduced to around 20 degrees,
    Two-camera stereo setup with the cameras at about 20 degrees relative to one another. I've found this works great for landmarks but not well for curves.
    it’s much easier to find the same point in both views (the views are nearly the same); however, these slight differences in position are now the only information available about the point’s position along the depth axis (orthogonal to the image planes). In practice, I’ve found that cameras positioned at a small angle relative to one another still provide high reconstruction accuracy for landmarks but don't work well for curves. It’s best to start with the cameras as close together as possible (more convergent views), test the accuracy, and make the views more divergent if the accuracy is worse than what you’re willing to accept.
  3. The volume of space visible in both cameras should be large enough to contain the object(s) and the landmarks or curves you’ll be digitizing. You might have to flip the object around a couple of times (to get the opposite side, for instance). If not all of the object is visible, you’ll have to digitize sections of the object separately and then assemble the point sets based on overlapping points, which adds digitizing time.
  4. The cameras must not move throughout data collection and calibration. The cameras can be calibrated before or after data collection, but throughout and between these steps they must remain in exactly the same position. Because the camera is often positioned half a meter or more away from the object, a sub-millimeter shift of the camera can translate into a large shift in the image frame, introducing large inaccuracies.
  5. The focal length (zoom) and focus of the lens must not change throughout data collection and calibration. The calibration is specific to a particular focal length and focus; thus, the cameras will have to be re-calibrated if either of these changes.
Given all of these caveats, the best course of action is to try out several different camera setups and test the accuracy fully before collecting any data (testing the calibration accuracy is detailed in the “Testing the Calibration Accuracy” section). Although your own setup might differ from the camera setup used in this tutorial, it will at least provide an example of one possible setup and how to accommodate the considerations above.
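The accuracy half of the angle trade-off can be sketched numerically. Below is a minimal 2D triangulation model (this is not StereoMorph code, and all distances and angles are invented for illustration): two cameras sit 1 m from a target point, one camera's sight line is off by 0.05 degrees to mimic a small digitizing error, and we compare the resulting reconstruction error at 90-degree versus 20-degree camera separations.

```python
import numpy as np

def rot2d(angle):
    """2D rotation matrix for a counterclockwise rotation by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

def reconstruction_error(separation_deg, ray_error_deg=0.05):
    """Error in a triangulated point when one camera's sight line is off by
    `ray_error_deg`, for two cameras 1 m from the point and separated by
    `separation_deg` as seen from the point. All values are illustrative."""
    target = np.array([0.0, 0.0])
    half = np.deg2rad(separation_deg) / 2
    cam1 = rot2d(-half) @ np.array([0.0, -1.0])  # each camera 1 m from target
    cam2 = rot2d(half) @ np.array([0.0, -1.0])
    d1 = target - cam1                            # true sight line, camera 1
    d2 = rot2d(np.deg2rad(ray_error_deg)) @ (target - cam2)  # perturbed line
    # intersect the two rays: cam1 + s*d1 == cam2 + t*d2
    s, _ = np.linalg.solve(np.column_stack([d1, -d2]), cam2 - cam1)
    return np.linalg.norm(cam1 + s * d1 - target)

err90 = reconstruction_error(90)  # divergent views
err20 = reconstruction_error(20)  # convergent views
# the same digitizing error produces a larger 3D error at 20 degrees
```

The error grows roughly as one over the sine of the separation angle, which is why convergent views need an accuracy test before you commit to them.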

For this tutorial, the cameras were arranged as shown below on the left, using a tabletop to position the calibration grid and objects being photographed. The tabletop can also be used to position lights around the object, if desired.

In this tutorial, all stereo camera images were taken with Nikon cameras, fitted with AF-S DX Nikkor 18-55 mm lenses at a focal length of 55 mm. At this focal length, distortion is nearly undetectable; it is essential that the lenses have minimal distortion. About 50 cm from the tip of the lens, the checkerboard pattern nearly fills the image frame. This means that during the calibration step, we can fully sample the calibration volume with fewer calibration images.
A sample calibration image from one camera in a stereo camera setup.
If you position a tripod on a smooth surface, such as a table top, put small rubber squares under each tripod foot to keep the tripod from slipping.
A small piece of rubber under a tripod leg can keep it from slipping on a smooth surface.
1. Before calibrating or starting to collect data, attach small pieces of tape to a surface in the calibration space.

Add tape to a fixed surface in the camera view to both box the calibration space and test whether the cameras move during data collection or calibration.
This serves both to remind you where the calibrated volume is when positioning objects and to test whether the cameras have shifted during data collection.

2. Take photos of the tape frame before beginning and again after all of the photos have been taken. If the images are identical, the cameras have not shifted significantly.
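One quick way to compare the before and after images is to difference them numerically. The sketch below is a hypothetical helper, not part of StereoMorph, and the threshold of 2 gray levels is an arbitrary guess you would tune for your own lighting and sensor: it flags a shift when the mean absolute pixel difference between two grayscale frames is large.

```python
import numpy as np

def camera_shifted(before, after, threshold=2.0):
    """Compare two grayscale frames (2D arrays of gray levels) of the tape
    frame; return True if the mean absolute pixel difference exceeds
    `threshold` gray levels, suggesting the camera moved.
    The threshold is an assumed value -- tune it for your setup."""
    before = np.asarray(before, dtype=float)
    after = np.asarray(after, dtype=float)
    return float(np.abs(before - after).mean()) > threshold

# frames can be loaded as grayscale arrays with e.g. Pillow:
#   from PIL import Image
#   before = np.asarray(Image.open("tape_before.jpg").convert("L"))
```

Small exposure differences between the two frames also raise the mean difference, so inspect the images by eye if the check fires unexpectedly.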

3. Make sure that all connections/screws in the tripod and between the tripod and the camera are tight. This reduces the possibility of any motion of the cameras during data collection.
Ensure tight connections in the tripod and between the tripod and camera.
It’s best to use a remote (wireless or corded, depending on the Nikon model) to release the shutter so that you touch the shutter button on the cameras as little as possible.
Shutter remotes lessen the chances of the cameras moving during data collection.
I’ve found that lightly pressing buttons on the camera (such as for reviewing photos) doesn’t cause significant movement, but the shutter button requires more force, and pressing it repeatedly can shift the cameras significantly over a series of photographs.

4. If your lens has vibration reduction (VR) or automatic focus, be sure to turn both of these off. Vibration reduction uses a small gyroscope in the lens to compensate for camera motion and thus reduce blur. The spinning and stopping of the gyroscope can cause the image frame to shift randomly while taking photos.

Turn off auto-focus and vibration reduction, if applicable.
5. Set the cameras to the smallest aperture (the largest f-number).
A smaller aperture is ideal because it increases the depth of field; however, unless you also increase the lighting, it will lengthen the exposure time.
The smaller the aperture, the greater the depth of field (i.e., the range of distances from the camera within which objects remain in focus). This is essential in a stereo camera setup because points throughout the calibration volume must be in focus to be digitized accurately.
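The aperture effect can be estimated with the standard thin-lens depth-of-field approximation. The sketch below assumes a 0.02 mm circle of confusion (a common value for APS-C/DX sensors) and compares a 55 mm lens focused at 50 cm at f/5.6 versus f/22; the numbers are illustrative, not a substitute for testing focus in your own setup.

```python
def depth_of_field(f_mm, f_number, subject_mm, coc_mm=0.02):
    """Approximate near/far limits of acceptable focus using the standard
    thin-lens depth-of-field formulas. `coc_mm` is the circle of confusion
    (assumed ~0.02 mm for an APS-C/DX sensor)."""
    H = f_mm**2 / (f_number * coc_mm) + f_mm  # hyperfocal distance
    near = subject_mm * (H - f_mm) / (H + subject_mm - 2 * f_mm)
    far = (subject_mm * (H - f_mm) / (H - subject_mm)
           if subject_mm < H else float("inf"))
    return near, far

# 55 mm lens focused at 50 cm: stopping down widens the in-focus zone
n1, f1 = depth_of_field(55, 5.6, 500)
n2, f2 = depth_of_field(55, 22, 500)
```

At f/5.6 the in-focus zone is only a centimeter or two deep at this distance, while stopping down to f/22 widens it severalfold, which is why the smallest aperture is recommended here.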

In the next step, we’ll photograph the checkerboard we made in step 1 in different positions and orientations within the stereo camera setup and use these images to calibrate the cameras.

Arranging cameras for curve reconstruction

Collecting curve data using a stereo camera setup requires an extra consideration when arranging the cameras. Landmarks digitized in two different camera views are reconstructed in StereoMorph under the assumption that the same point in 3D space is digitized in both views. In principle, curves can be reconstructed in the same way, by breaking the curve down into a series of landmarks.

There is one complication to this, however. A point halfway along the curve in one camera view is not necessarily the same point as a point halfway along the curve in another camera view. This is due to the perspective effect of lenses. The depth of a 3D curve in one view dictates how it is projected into that image plane. Since the depth of a curve will differ depending on the perspective from which it is viewed, the same curve will be projected differently into different views.
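A quick way to see this effect is with a single receding line segment under a pinhole projection (a toy calculation, not StereoMorph code; the coordinates and focal length are made up): the 3D point whose projection lands midway between the projected endpoints is not the segment's 3D midpoint.

```python
def halfway_param(x0, z0, x1, z1, f=1.0):
    """For a 3D segment from (x0, z0) to (x1, z1) in camera x-depth
    coordinates, return the parameter t in [0, 1] of the 3D point whose
    pinhole projection f*x/z lands midway between the projected endpoints."""
    u0, u1 = f * x0 / z0, f * x1 / z1
    u_mid = 0.5 * (u0 + u1)
    # solve f*(x0 + t*(x1 - x0)) / (z0 + t*(z1 - z0)) == u_mid for t
    return (u_mid * z0 - f * x0) / (f * (x1 - x0) - u_mid * (z1 - z0))

# fronto-parallel segment (constant depth): 2D midpoint is the 3D midpoint
t_flat = halfway_param(-1, 4, 1, 4)  # 0.5
# receding segment: the 2D midpoint sits only a quarter of the way along
t_deep = halfway_param(-1, 2, 1, 6)  # 0.25
```

The same reasoning applies point by point along a curve: how far "halfway" lands in the image depends on the depth profile of the curve in that particular view.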

The same curve digitized in two different camera views. Although not immediately obvious, a point halfway along the curve in one view is not necessarily the same point in 3D space as a point halfway along the curve in another.

When the two cameras are calibrated, however, it’s possible to use the calibration to identify corresponding points on the same curve in two camera views and reconstruct these corresponding points just as with landmarks. This is done using epipolar geometry.

The basis of epipolar geometry is that any point in one camera view must fall along a line in another camera view. This line is the epipolar line. The intersection of the epipolar line and the curve in the second view can be used to find the corresponding point on the second curve.
Demonstration of epipolar geometry. The point, in camera view 1 (indicated by a yellow arrow), must fall along a line in camera view 2. This line is the point’s epipolar line and can be used to identify corresponding points along two curves.
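For readers curious about the algebra, here is a minimal numpy sketch of an epipolar line (the intrinsics, rotation, and translation below are invented for illustration; StereoMorph derives the equivalent quantities from its own calibration): the fundamental matrix maps a point in view 1 to a line in view 2, and the point's match in view 2 lies on that line.

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# made-up intrinsics and relative pose of camera 2 w.r.t. camera 1
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
a = np.deg2rad(20)
R = np.array([[np.cos(a), 0, np.sin(a)],
              [0, 1, 0],
              [-np.sin(a), 0, np.cos(a)]])
t = np.array([-0.5, 0.0, 0.1])

# fundamental matrix from the calibration: F = K^-T [t]x R K^-1
Kinv = np.linalg.inv(K)
F = Kinv.T @ skew(t) @ R @ Kinv

# project a 3D point into both views (homogeneous pixel coordinates)
X = np.array([0.2, 0.1, 2.0])
x1 = K @ X
x1 = x1 / x1[2]
x2 = K @ (R @ X + t)
x2 = x2 / x2[2]

# the epipolar line of x1 in view 2 is l2 = F @ x1 (a*u + b*v + c = 0),
# and the corresponding point x2 lies on it
l2 = F @ x1
dist = abs(l2 @ x2) / np.hypot(l2[0], l2[1])  # pixel distance, ~0
```

Searching for a curve point thus reduces from a 2D search over the whole image to a 1D search along this line.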

But what if the epipolar line is parallel to the curve in the second view?
    
The epipolar line (yellow dashed line in right image) of the point in the first camera view (yellow point in left image) is parallel to the curve in the second view along a considerable portion of its extent. Absent other information, this makes it impossible to identify the corresponding point.
Without any additional information, this makes finding the corresponding curve point in the second view impossible. At some point, StereoMorph might include the ability to compare features along the curve in the image itself to find the corresponding point. For now, it’s best to arrange the specimen or cameras so as to minimize the chance of large sections of the curve being parallel to an epipolar line. This can be done by keeping the following steps in mind:
  1. For the curve of interest, identify a 2D plane in which most of the curve lies. Of course the curve will have some three-dimensional aspect to it (or else you would not be using 3D reconstruction). But find a plane that encompasses most of the curve.
  2. Position the cameras so that a line extending from each lens to the curve is orthogonal to this plane.
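The epipolar intersection itself is easy to sketch. The helper below is an illustration of the principle, not StereoMorph's implementation: it walks along a digitized curve, stored as a polyline of pixel coordinates, and finds where the curve crosses an epipolar line by looking for sign changes of the line equation. When the curve runs parallel to the line, no clean sign change exists, which is exactly the failure case described above.

```python
import numpy as np

def epipolar_curve_intersections(line, curve):
    """Points where a digitized curve (N x 2 array of pixel coordinates,
    treated as a polyline) crosses the epipolar line a*u + b*v + c = 0,
    with line = (a, b, c)."""
    a, b, c = line
    vals = a * curve[:, 0] + b * curve[:, 1] + c  # signed side of the line
    hits = []
    for i in range(len(vals) - 1):
        v0, v1 = vals[i], vals[i + 1]
        if v0 == 0:                 # vertex exactly on the line
            hits.append(curve[i].astype(float))
        elif v0 * v1 < 0:           # sign change: segment crosses the line
            s = v0 / (v0 - v1)      # interpolate to the crossing point
            hits.append(curve[i] + s * (curve[i + 1] - curve[i]))
    return np.array(hits)
```

A curve segment lying parallel to (or on) the line produces no sign change, so the function returns nothing for that stretch — mirroring the ambiguity in the figure above.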
As an example, let’s say we’re photographing a curve that resembles a circle (shown in the diagram below).
If we position the cameras so that they are viewing the curve, shown as a circle here, from the side (top left), the curve will appear in the two camera images as a nearly straight line (top middle and right). There is no way we could adequately measure the true shape of the curve because we’re hardly seeing any of it in the images. Additionally, the curve points in the second view will be nearly parallel to the epipolar line of the points in the first view. In fact, these two properties are geometrically related such that if the camera views chosen do not adequately capture the major three-dimensional aspects of the curve, epipolar geometry will also be unable to accurately find the corresponding curve points.

In contrast, if the cameras are positioned so they are viewing the curve from the top (bottom left), the major aspects of the curve’s shape are represented in the images and the curve in the second image will seldom be parallel to the epipolar lines of points along the curve in the first image. As a consequence, StereoMorph currently works best for reconstructing curves that do not have an excessively 3D shape (such as 3D spirals), because no matter how the cameras are positioned, such shapes will always have large portions that cannot be adequately captured in both camera views.
