Panoramic Imaging
Wiley, Chichester, 2008
by Fay Huang (Yi-Lan), Reinhard Klette (Auckland), and Karsten Scheibe (Berlin)
ABSTRACTS OF CHAPTERS
EXERCISES
LIBRARY AND HEADER FILES
VISUAL SAMPLES (high resolution)
ANIMATIONS (3D reconstructions)
Sensor-Line Cameras and Laser Range-Finders
Panoramic imaging is a progressive application and research area. This technology has applications in digital photography, robotics, film productions for panoramic screens, architecture, environmental studies, remote sensing and GIS technology. Applications demand different levels of accuracy for 3D documentation or visualizations. This book describes two modern technologies for capturing high-accuracy panoramic images and range data, namely the use of sensor-line cameras and laser range-finders. It provides mathematically accurate descriptions of the geometry of these sensing technologies and the necessary information required to apply them to 3D scene visualization or 3D representation. The book is divided into three parts:
- Part One contains a full introduction to panoramic cameras and laser range-finders, including a discussion of calibration to aid preparation of equipment ready for use.
- Part Two explains the concept of stereo panoramic imaging, looking at epipolar geometry, spatial sampling, image quality control and camera analysis and design.
- Part Three looks at surface modeling and rendering based on panoramic input data, starting with the basics and taking the reader through to more advanced techniques such as the optimization of surface meshes and data fusion.
- There is also this accompanying website containing high-resolution visual samples and animations, illustrating techniques discussed in the text.
Panoramic Imaging is primarily aimed at researchers and students in engineering or computer science involved in using imaging technologies for 3D visualization or 3D scene reconstruction. It is also of significant use as an advanced manual to practising engineers in panoramic imaging. In brief, the book is valuable to all those interested in current developments in multimedia imaging technology.
Chapters
Chapter 1 – Introduction
This chapter provides a general introduction to panoramic imaging, mostly at an informal level. Panoramas have an interesting history in arts and multimedia imaging. Developments and possible applications of panoramic imaging are briefly sketched in a historic context. The chapter also discusses the question of accuracy, and introduces rotating sensor-line cameras and laser range-finders.
Chapter 2 – Cameras and Sensors
This chapter starts by recalling a camera model sometimes referred to in computer vision and photogrammetry: the pinhole camera. It also discusses its ideal mathematical and approximate implementation by means of a sensor-matrix camera. We recall a few notions from optics. Panoramic sensors are basically defined by the use of “panoramic mirrors”, or controlled motion of such a sensor-matrix camera, which may “shrink” into a sensor-line camera. We conclude with a brief discussion of laser range-finders as an alternative option for a panoramic sensor.
Chapter 3 – Spatial Alignments
This chapter considers the positioning of panoramic sensors or panoramas in 3D space, defining and using world or local (e.g., camera or sensor) coordinate systems. The chapter also specifies coordinates and locations of capturing surfaces or panoramic images. The chapter starts with a few fundamentals for metric spaces, coordinate system transforms, or projections from 3D space into panoramic images, briefly recalling mathematical subject areas such as linear algebra, projective geometry, and surface geometry.
Chapter 4 – Epipolar Geometry
Epipolar geometry characterizes the geometric relationships between projection centers of two cameras, a point in 3D space, and its potential position in both images. A benefit of knowing the epipolar geometry is that, given any point in either image (showing the projection of a point in 3D space), epipolar geometry defines a parameterized (i.e., Jordan) curve γ(t) (the epipolar curve) of possible locations of the corresponding point (if visible) in the other image. The parameter t of the curve γ(t) then allows “movement” in computational stereo along the curve when testing image points for actual correspondence. Interest in epipolar geometry is also motivated by stereo viewing. There is a “preferred” epipolar geometry which supports depth perception, and this is defined by parallel epipolar lines; these lines are a special case of epipolar curves.
Chapter 5 – Sensor Calibration
In this chapter we discuss the calibration of a rotating panoramic sensor (rotating sensor-line camera or laser range-finder). Having specified some preprocessing steps, we describe a least squares error method which implements the point-based calibration technique (common in photogrammetry or computer vision, using projections of control points, also called calibration marks). This method has been used frequently in applications, and proved to be robust and accurate. The chapter also discusses three calibration methods at a more theoretical level, for comparing calibration techniques for the general multi-center panorama case. The aim of this discussion is to characterize the linear and non-linear components in the whole calibration process. As pointed out at the end of Chapter 3, the geometry of the LRF can be understood (for calibration purposes) as a special case of the geometry of the rotating sensor-line camera. Therefore we prefer to use camera notation in this chapter. However, a section at the end of the chapter also discusses the specific errors to be considered for a laser range-finder.
Chapter 6 – Spatial Sampling
Spatial sampling describes how a 3D space is sampled by stereo pairs of images, without considering geometric or photometric complexities of 3D scenes. This chapter presents studies on spatial sampling for stereo panoramas, a pair of panoramic images which differ only in the chosen value of the principal angle ω. Symmetric panoramas are a special case of stereo panoramas. Recall from our previous definition of symmetric panoramas that this is a pair of stereo panoramas whose associated principal angles sum to 2π.
Chapter 7 – Image Quality Control
In this chapter, the camera is assumed to be a precisely defined parameterized model, and we do not apply any further constraints on camera parameters. Pinhole or multiple-projection-center panoramic cameras are examples of parameterized camera models. This chapter discusses image quality in the context of stereo data acquisition. For example, the number of potential stereo samples should be maximized for the 3D space of interest. The chapter introduces four application-specific parameters, namely the scene range of interest enclosed by two coaxial cylinders (with radius D1 for the inner cylinder and radius D2 for the outer cylinder), the imaging distance H1 (which also characterizes the vertical field of view), and the width θw of the angular disparity interval (θw specifies stereoacuity). Estimated values of these four parameters are required as inputs for the image quality control method, which allows optimum sensor parameters R and ω to be calculated.
Chapter 8 – Sensor Analysis and Design
The control method, as provided in the previous chapter, does not allow solutions for all possible quadruples of input values for D1, D2, H1, and θw. There are geometric constraints (e.g., D1 < D2) or relations between these parameters which restrict the set of possible input values. Furthermore, such constraints or relations also restrict the set of all 6-tuples (D1, D2, H1, θw, R, ω). If R is, for example, only between 0 and 0.2 m, then this results in constraints for the other five parameters. This chapter investigates dependencies and interactions between these six parameters. To be more precise, we analyze validity ranges, interrelationships, and characteristics of value changes for these six sensor parameters. The aim is to study what can be achieved with respect to camera (or sensor) analysis and design.
Chapter 9 – 3D Meshing and Visualization
So far this book has discussed data acquisition and the photogrammetric interpretation of captured data. This chapter deals with fundamentals in 3D modeling and visualization. 3D computer graphics is more diverse, as briefly reported in this chapter. This book is not about 3D computer graphics, and this chapter is not intended as an introduction into this topic. This chapter assumes that the reader is already familiar with (basics of) 3D computer graphics, and discusses those techniques which are in particular used for LRF and panorama data.
Chapter 10 – Data Fusion
The fusion of data sets (LRF data, panoramic images) starts with the transformation of coordinate systems (i.e., those of LRF and camera attitudes) into a uniform reference coordinate system (i.e., a world coordinate system). For this step, the attitudes of both systems need to be known (see Chapter 5). A transformation of LRF data into the reference coordinate system applies equation (9.6). The known 3D object points Pw (vertices of the triangulation) are given by the LRF system, and are textured with color information provided by the recorded panoramic images. Therefore, panoramic image coordinates are calculated in relation to object points Pw, and this is the main subject in this chapter.
Exercises
The exercises are compatible with Microsoft Visual Studio 2005. Before downloading the exercises, download the following files (as one zip-file), which include the necessary library and header files. Save these files in one directory and do not change the folder hierarchy, for compatibility reasons.
Below we provide downloads (zip-files) corresponding to particular exercises; unzip each one and copy the entire folder into the exercises folder inside the code folder. Library and include paths in the projects are absolute, because they were mounted on a general drive r:; change them into relative paths as described next.
To do so, go to Project -> Properties -> C/C++ -> General -> Additional Include Directories and change, for example, r:\includes to ../../includes; do the same for the library directories.
For dynamic link libraries (DLLs), go to My Computer -> Properties -> Advanced -> Environment Variables and add the DLL directory to the PATH variable. After this you may have to reboot the computer.
Chapter 1
1.1. What kind of data are available when using a laser range-finder which scans 3D scenes incrementally (rotating horizontally around its axis, and scanning at each of these positions also vertically within a constant range of angles)?
1.2. [Possible lab project] Implement a program which enables anaglyph images to be generated. Instead of simply using the R (i.e., red) channel of the left image and the GB (i.e., green and blue) channels of the right image for creating the anaglyph RGB image, also allow the R channel to be shifted relative to the GB channel during 3D viewing for optimum 3D perception. When taking stereo pairs of images (with a common digital camera), make sure that both optical axes are about parallel, and shift the camera just orthogonally to the direction of the optical axes. Corresponding to the distance between human eyes, the shift distance should be about 50–60 mm. However, also carry out experiments with objects (to be visualized in 3D) at various distances, and try different shift distances. Report on subjective evaluations of your program by different viewers (e.g., which relative channel shift was selected by which viewer).
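The following C++ sketch (not the book's reference code; image loading and saving are left to whatever library you use, and the Image struct and makeAnaglyph function are our own illustrative names) shows the core channel combination: the red channel is taken from the left image, shifted by a user-selected number of pixels, and merged with the green and blue channels of the right image.

```cpp
#include <vector>

struct Image {
    int width, height;
    std::vector<unsigned char> rgb;                      // interleaved R,G,B, row major
    unsigned char& at(int x, int y, int c) { return rgb[3 * (y * width + x) + c]; }
};

// Combine left/right views into a red-cyan anaglyph; shift > 0 moves the
// red (left) channel to the right, which viewers can tune for comfortable depth.
Image makeAnaglyph(Image& left, Image& right, int shift)
{
    Image out;
    out.width  = left.width;
    out.height = left.height;
    out.rgb.assign(out.width * out.height * 3, 0);
    for (int y = 0; y < out.height; ++y) {
        for (int x = 0; x < out.width; ++x) {
            int xs = x - shift;                          // source column in the left image
            if (xs < 0) xs = 0;
            if (xs >= left.width) xs = left.width - 1;
            out.at(x, y, 0) = left.at(xs, y, 0);         // R from the (shifted) left image
            out.at(x, y, 1) = right.at(x, y, 1);         // G from the right image
            out.at(x, y, 2) = right.at(x, y, 2);         // B from the right image
        }
    }
    return out;
}

int main()
{
    Image left, right;                                   // dummy 4 x 2 images for a compile test
    left.width = right.width = 4; left.height = right.height = 2;
    left.rgb.assign(4 * 2 * 3, 200);
    right.rgb.assign(4 * 2 * 3, 50);
    Image ana = makeAnaglyph(left, right, 1);
    return ana.rgb.empty() ? 1 : 0;
}
```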
1.3. [Possible lab project] Put a common digital camera on a tripod and record a series of images by rotating the camera: capture each image such that the recorded scene overlaps to some degree with that recorded by the previous image. Now use a commercial or freely available stitching program (note: such software is often included in the camera package, or available via the web support provided for the camera) to map the images into a 360° panorama. When applying the software, select either “cylindrical” or “panorama” as the projection type.
1.4. [Possible lab project] Record a series of images by using the camera on a tripod, but now do not level the tripod: the rotation axis should be tilted by some significant degree with respect to the surface you want to scan. When taking the pictures, go for some distinctive geometries (e.g., straight lines on buildings, windows, or other simple shapes). For this experiment it is not important to have a full 360° scan.
Chapter 2
2.1. Calculate the hyperfocal length for ideal Gaussian optics with f = 35 mm and a diameter of 6.25 mm for the entrance pupil, with an acceptable circle of confusion of σ = 0.025 mm. What is the geometric resolution (in millimeters) of a pixel at this distance?
2.2. Prove that the formula for calculating the horizontal field of view is also true for the case when R > 0 and ω > 0.
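As a quick sanity check for Exercise 2.1 (using the standard thin-lens hyperfocal relation, with the f-number N taken as focal length over entrance-pupil diameter; the book's own derivation may differ slightly in notation):

$$ N = \frac{f}{d} = \frac{35}{6.25} = 5.6, \qquad h \approx \frac{f^2}{N\,\sigma} + f = \frac{35^2}{5.6 \cdot 0.025} + 35 \approx 8785\ \text{mm} \approx 8.8\ \text{m}. $$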
2.3. [Possible lab project] Repeat the experiment of Exercise 1.3, but now position the camera in such a way that the projection center is definitely not on the axis of the image cylinder (compare with Figure 2.11). Use for R the maximum possible distance (defined by your equipment). Stitch the images again with some (freely available) software and report on the errors that occurred during this process. Would it be possible to stitch those images more accurately by modeling the sensor parameters in your own stitching software?
Chapter 3
3.1. Assume that a local left-handed XYZ Cartesian coordinate system needs to be mapped into a right-handed XYZ Cartesian coordinate system. What kind of affine transform allows this in general?
3.2. Draw a right-handed XYZ coordinate system onto a box (e.g., of matches). Perform the following three motions:
Is motion (1) or motion (2) equivalent to motion (3)? Express your result in a general statement in mathematical terms.
3.3. [Possible lab project] Implement your own program which projects all your images of Exercise 1.3 onto a cylindrical surface. Compare and discuss differences between results using the available program and your own implementation.
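A minimal sketch of the underlying mapping (an assumption, not the book's code: a pinhole image with the principal point at the image center and the focal length f given in pixels; the Image struct is the raw RGB buffer from the Exercise 1.2 sketch). Each destination pixel on a cylinder of radius f is filled by sampling the source image, so that stitching the warped images afterwards reduces to a horizontal alignment.

```cpp
#include <cmath>
#include <vector>

struct Image {                                           // raw RGB buffer, as in the Exercise 1.2 sketch
    int width, height;
    std::vector<unsigned char> rgb;
    unsigned char& at(int x, int y, int c) { return rgb[3 * (y * width + x) + c]; }
};

// Warp a pinhole image onto a cylinder of radius f (f = focal length in pixels).
Image warpToCylinder(Image& src, double f)
{
    Image dst;
    dst.width  = src.width;
    dst.height = src.height;
    dst.rgb.assign(dst.width * dst.height * 3, 0);
    double cx = 0.5 * (src.width - 1), cy = 0.5 * (src.height - 1);
    for (int v = 0; v < dst.height; ++v) {
        for (int u = 0; u < dst.width; ++u) {
            double theta = (u - cx) / f;                 // angle on the cylinder
            double h     = v - cy;                       // height on the cylinder
            double xs = cx + f * std::tan(theta);        // corresponding source pixel
            double ys = cy + h / std::cos(theta);
            int xi = (int)(xs + 0.5), yi = (int)(ys + 0.5);   // nearest neighbour
            if (xi < 0 || xi >= src.width || yi < 0 || yi >= src.height) continue;
            for (int c = 0; c < 3; ++c) dst.at(u, v, c) = src.at(xi, yi, c);
        }
    }
    return dst;
}

int main()
{
    Image img;                                           // synthetic grey image as a compile test
    img.width = 320; img.height = 240;
    img.rgb.assign(img.width * img.height * 3, 128);
    Image cyl = warpToCylinder(img, 300.0);              // f = 300 pixels, for example
    return cyl.rgb.empty() ? 1 : 0;
}
```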
3.4. [Possible lab project] Implement a program which projects any fraction of less than 180° of a cylindrical image (i.e., an image given on the surface of a straight cylinder; you may use a resulting image from Exercise 3.3, or a normally 360° panorama from the web) onto a plane which is tangential to the cylinder.
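The core of this exercise is the coordinate mapping, sketched below (assuming the cylindrical image has radius r pixels, i.e. one column per 1/r radians, with row 0 and column 0 at the point where the plane touches the cylinder; the function name is our own). Resampling the plane image then only needs a loop over its pixels plus interpolation in the cylindrical image; the restriction to less than 180° is what keeps the tangent finite.

```cpp
#include <cmath>
#include <cstdio>

// Map a pixel (u, v) of the tangent-plane image to (col, row) in the cylindrical
// image; r is the cylinder radius in pixels (one column per 1/r radians).
void planeToCylinder(double u, double v, double r, double& col, double& row)
{
    double theta = std::atan2(u, r);                     // viewing angle of the plane pixel
    col = theta * r;                                     // arc length = column offset on the cylinder
    row = v * std::cos(theta);                           // foreshortening of the vertical coordinate
}

int main()
{
    double col, row;
    planeToCylinder(100.0, 50.0, 500.0, col, row);
    std::printf("plane pixel (100, 50) -> cylinder (%.2f, %.2f)\n", col, row);
    return 0;
}
```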
Chapter 4
4.1. Prove that Xc2 = 0 in equation (4.10).
4.2. [Possible lab project] The website for this book provides a symmetric panoramic stereo pair. As shown in this chapter, epipolar lines are just the image rows. However, this is an ideal assumption, and one line below or above should also be taken into account. Apply one of the stereo correspondence techniques, rated in the top 10 on the Middlebury stereo website http://vision.middlebury.edu/stereo/, to the symmetric stereo panorama provided, and estimate a sufficient length of the search interval for corresponding points (obviously, the full 360° panorama does not need to be searched), to ensure detection of corresponding points in the context of your recorded 3D scenes and the panoramic sensor used.
Chapter 5
5.1. Assume an RGB CCD line camera with 10,000 pixels and a spacing of x = 0.154 mm between color lines; also assume a pixel size of τ = 0.007 mm and optics with f = 35 mm. Calculate the color shift in pixels between RGB lines for a center and border pixel of the line, for different object distances (h = 5,000 mm, h = 15,000 mm, h = 25,000 mm, and h = 50,000 mm) for the following cases:
5.2. Suppose that the pixel positions of a multi-sensor-line camera (ω = ±20°) are measured with a collimator. The static axis of the manipulator (i.e., a turntable rotates the camera to allow illumination of a selected pixel) is vertical in 3D space, and called the α-axis. The second vertical axis is the β-axis (see Figure 5.3). Measured angles α and β for some pixels are given in the table provided with this exercise.
5.3. [Possible lab project] You may use any matrix-type image (e.g., which you used for one of the previous exercises). Implement a program which estimates the normalized cross correlation factor between two adjacent image columns. Shift these lines pixelwise up or down, such that a maximum correlation between resulting lines is achieved. Do this for the whole image and compare your results with the statement in Section 5.2.4 about a “best correlation”.
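A sketch of the correlation step (assuming the two columns have already been extracted from the image as vectors of grey values; ncc and bestShift are our own illustrative names): ncc() computes the normalized cross correlation of the overlapping samples after shifting the second column, and bestShift() scans a small range of shifts for the maximum.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Normalized cross correlation of column a against column b shifted by `shift` rows.
double ncc(const std::vector<double>& a, const std::vector<double>& b, int shift)
{
    double sa = 0, sb = 0, saa = 0, sbb = 0, sab = 0;
    int count = 0;
    for (int i = 0; i < (int)a.size(); ++i) {
        int j = i + shift;                               // row in column b
        if (j < 0 || j >= (int)b.size()) continue;
        sa += a[i]; sb += b[j];
        saa += a[i] * a[i]; sbb += b[j] * b[j];
        sab += a[i] * b[j];
        ++count;
    }
    if (count < 2) return 0.0;
    double cov = sab - sa * sb / count;
    double va  = saa - sa * sa / count;
    double vb  = sbb - sb * sb / count;
    return (va <= 0 || vb <= 0) ? 0.0 : cov / std::sqrt(va * vb);
}

// Shift (in rows) that maximizes the correlation, searched in [-maxShift, maxShift].
int bestShift(const std::vector<double>& a, const std::vector<double>& b, int maxShift)
{
    int best = 0; double bestVal = -2.0;
    for (int s = -maxShift; s <= maxShift; ++s) {
        double v = ncc(a, b, s);
        if (v > bestVal) { bestVal = v; best = s; }
    }
    return best;
}

int main()
{
    std::vector<double> a, b;                            // b is a copy of a, shifted down by 3 rows
    for (int i = 0; i < 100; ++i) a.push_back(std::sin(0.1 * i));
    for (int i = 0; i < 100; ++i) b.push_back(std::sin(0.1 * (i - 3)));
    std::printf("best shift: %d\n", bestShift(a, b, 10));   // prints 3
    return 0;
}
```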
5.4. [Possible lab project] Implement a program which radiometrically equalizes one image to another. Input images are easily created for a scene, for example, by varying illumination (use lights of different color temperatures), or by using an image tool that allows colors to be changed in some non-linear way.
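One deliberately simple starting point (an assumption, not the book's method): match the mean and standard deviation of each color channel of one image to the corresponding channel of the other with a linear gain and offset; full histogram matching is the natural refinement.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Mean and standard deviation of a channel, given as a flat vector of values.
void meanStd(const std::vector<double>& v, double& mean, double& stdev)
{
    double s = 0.0, ss = 0.0;
    for (size_t i = 0; i < v.size(); ++i) { s += v[i]; ss += v[i] * v[i]; }
    mean = s / v.size();
    stdev = std::sqrt(ss / v.size() - mean * mean);
}

// Rescale `channel` so that its mean and standard deviation match `reference`.
void equalizeChannel(std::vector<double>& channel, const std::vector<double>& reference)
{
    double mC, sC, mR, sR;
    meanStd(channel, mC, sC);
    meanStd(reference, mR, sR);
    double gain = (sC > 0.0) ? sR / sC : 1.0;
    for (size_t i = 0; i < channel.size(); ++i)
        channel[i] = mR + gain * (channel[i] - mC);
}

int main()
{
    std::vector<double> dark(100), bright(100);          // synthetic channels as a compile test
    for (int i = 0; i < 100; ++i) { dark[i] = 20.0 + 0.5 * i; bright[i] = 80.0 + 1.5 * i; }
    equalizeChannel(dark, bright);                       // dark now matches bright in mean and std
    std::printf("first equalized value: %.1f\n", dark[0]);
    return 0;
}
```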
5.5. [Possible lab project] Implement a program which estimates the camera’s attitude by using calibration marks distributed within your scene. Now make a slight change to the intrinsic parameters of your camera, and recalculate the extrinsic parameters. Do this for a common matrix-type camera and a panoramic camera model (if the latter is not available, then discuss this case theoretically using synthetic 3D data).
Chapter 6
6.1. Prove equation (6.3) by applying the sine theorem.
6.2. Calculate the spatial sampling resolution of a symmetric panorama EPR(100, 35, 60, 1) and EPL(100, 35, 300, 1); both images have an image height of 300 pixels. Off-axis distance and focal length are measured in millimeters, and the principal angle and angular unit are measured in degrees.
6.3. Calculate the horizontal sample distance G5, the vertical sample distance V5, and the depth sample distance U5 for the same symmetric panorama as specified in Exercise 6.2.
6.4. [Possible lab project] Figure 6.8 illustrates how a synthetic 3D object (in this case, a rhino) is visualized by different inward panoramic pairs. For this lab project, you are requested to perform experiments as illustrated in this figure:
(i) Generate or reuse a textured 3D object of comparable shape complexity to the rhino.
(ii) Calculate four stereo panoramic pairs for pairs of angles as used in Figure 6.8.
(iii) Calculate symmetric (inward-looking) panoramic pairs, for ω = 190°, 200°, 210°, ... (i.e., in increments of 10°).
Chapter 7
7.1. Assume that we wish to take a photo of a tree, 5 meters tall, using a camera whose vertical field of view is equal to 40°. How far from the tree should the camera be positioned to ensure that both of the following specifications are satisfied: first, the tree is fully shown in the photo (assuming no occlusion); and second, the projection of the tree on the image covers 80% of the image’s height? (Hint: use equation (7.1).)
7.2. Suppose that we have a 17-inch screen of resolution 1,024 × 768, and a stereo viewing distance that is equal to 0.4 m. Show that the upper disparity limit is about 70 pixels.
7.3. Prove equations (7.12) and (7.13).
7.4. [Possible lab project] This project is a continuation of Exercise 4.2. Having calculated corresponding points for the symmetric stereo panorama, apply the formula for calculating depth D based on disparity or angular disparity. This allows a set of 3D points to be generated in the 3D space, basically one 3D point for every pair of corresponding points. Try to obtain a fairly dense population of 3D points, and visualize them by mapping color values of original pixels onto those 3D points. Allow a fly-through visualization of the calculated set of colored 3D points.
7.5. [Possible lab project] This project is a continuation of Exercise 6.4. For your symmetric pairs of panoramic images (inward, in increments of 10°), calculate corresponding points, the depth of projected 3D points, and evaluate the accuracy of recovered surface points in comparison to your synthetic 3D object.
Chapter 8
8.1. Consider a situation where the closest and furthest scene objects of interest are at about 5 and 100 meters, respectively, and the preferable width of the angular disparity interval θw (according to the available stereo visualization method) is about 8°. What is the minimum length of the extension slider (which specifies the off-axis distance R) needed to provide for stereo panorama acquisition?
8.2. Explain why the value of σw+ can be calculated by setting fR(σw) = D1 and then solving with respect to σw.
8.3. Consider the following intervals of the application-specific parameters: [D1min, D1max] = [5, 7] meters, [D2min, D2max] = [60, 150] meters, [H1min, H1max] = [5, 6] meters, and [θmin, θmax] = [6, 10] degrees. What are the maximum values of R and ω, respectively?
Chapter 9
9.1. [Possible lab project] Implement a (simple) program which opens an OpenGL window and draws some simple geometric primitives, for example, a set of triangles which all together define a rectangle.
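A minimal sketch for this exercise, using GLUT for window handling (an assumption; any other OpenGL windowing toolkit, including the one in the provided libraries, works just as well): two triangles that together form a rectangle, drawn in the default [-1, 1] × [-1, 1] coordinate system.

```cpp
#include <GL/glut.h>

void display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(0.2f, 0.6f, 1.0f);
    glBegin(GL_TRIANGLES);
        // first triangle of the rectangle
        glVertex2f(-0.5f, -0.5f); glVertex2f(0.5f, -0.5f); glVertex2f(0.5f, 0.5f);
        // second triangle of the rectangle
        glVertex2f(-0.5f, -0.5f); glVertex2f(0.5f, 0.5f); glVertex2f(-0.5f, 0.5f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutCreateWindow("Exercise 9.1");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```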
9.2. [Possible lab project] Load an image into memory as the active texture and bind it. Assign texture coordinates to the vertices of Exercise 9.1.
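A sketch extending the Exercise 9.1 program (again assuming GLUT): since file loading depends on the image library at hand, a checkerboard texture is generated in memory here; replace makeCheckerboard(), our own helper, by your loader and keep the glTexImage2D call.

```cpp
#include <GL/glut.h>
#include <vector>

GLuint texId = 0;

// Generate a 64 x 64 checkerboard and upload it as the active 2D texture.
void makeCheckerboard()
{
    const int N = 64;
    std::vector<unsigned char> data(N * N * 3);
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x) {
            unsigned char v = ((x / 8 + y / 8) % 2) ? 255 : 40;
            data[3 * (y * N + x) + 0] = v;
            data[3 * (y * N + x) + 1] = v;
            data[3 * (y * N + x) + 2] = v;
        }
    glGenTextures(1, &texId);
    glBindTexture(GL_TEXTURE_2D, texId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   // no mipmaps used
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, N, N, 0, GL_RGB, GL_UNSIGNED_BYTE, &data[0]);
}

void display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texId);
    glBegin(GL_TRIANGLES);                               // same rectangle as in Exercise 9.1
        glTexCoord2f(0, 0); glVertex2f(-0.5f, -0.5f);
        glTexCoord2f(1, 0); glVertex2f( 0.5f, -0.5f);
        glTexCoord2f(1, 1); glVertex2f( 0.5f,  0.5f);
        glTexCoord2f(0, 0); glVertex2f(-0.5f, -0.5f);
        glTexCoord2f(1, 1); glVertex2f( 0.5f,  0.5f);
        glTexCoord2f(0, 1); glVertex2f(-0.5f,  0.5f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutCreateWindow("Exercise 9.2");
    makeCheckerboard();                                  // texture setup needs a current GL context
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```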
9.3. [Possible lab project] Take a sample LRF file (available with this exercise on the book’s website) and implement a program which visualizes these data as a 3D point cloud. Secondly, mesh the points by (simply) using a neighborhood relation. Delete incorrect triangles which result from using only the neighborhood relations, and visualize your result. This method is possible for one 2.5D LRF scan. What is the procedure for combining several LRF scans together?
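A sketch of the meshing step (the actual file format of the sample scan may differ; here a scan is assumed to be a regular grid of range values indexed by a vertical and a horizontal angle, and all type and function names are our own): grid points are converted to Cartesian coordinates, each 2 × 2 cell becomes two triangles, and triangles with an edge longer than a threshold, which typically span depth discontinuities, are discarded. The resulting points and triangles can then be drawn with the OpenGL code of Exercise 9.1.

```cpp
#include <cmath>
#include <vector>

struct Point3  { double x, y, z; };
struct Triangle { int a, b, c; };                        // indices into the point array

double dist(const Point3& p, const Point3& q)
{
    double dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Convert a 2.5D range grid to 3D points and mesh it by the grid neighborhood;
// triangles with an edge longer than maxEdge are dropped (likely discontinuities).
void meshScan(const std::vector<std::vector<double> >& range,   // range[i][j] in metres
              double theta0, double dTheta,                     // vertical start angle / step (rad)
              double phi0, double dPhi,                         // horizontal start angle / step (rad)
              double maxEdge,
              std::vector<Point3>& points, std::vector<Triangle>& tris)
{
    int rows = (int)range.size(), cols = (int)range[0].size();
    points.resize(rows * cols);
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j) {
            double r = range[i][j];
            double theta = theta0 + i * dTheta, phi = phi0 + j * dPhi;
            Point3& p = points[i * cols + j];            // spherical -> Cartesian
            p.x = r * std::cos(theta) * std::cos(phi);
            p.y = r * std::cos(theta) * std::sin(phi);
            p.z = r * std::sin(theta);
        }
    for (int i = 0; i + 1 < rows; ++i)
        for (int j = 0; j + 1 < cols; ++j) {
            int a = i * cols + j,       b = i * cols + j + 1;
            int c = (i + 1) * cols + j, d = (i + 1) * cols + j + 1;
            int cand[2][3] = { { a, b, c }, { b, d, c } };
            for (int t = 0; t < 2; ++t) {
                const int* q = cand[t];
                if (dist(points[q[0]], points[q[1]]) < maxEdge &&
                    dist(points[q[1]], points[q[2]]) < maxEdge &&
                    dist(points[q[2]], points[q[0]]) < maxEdge) {
                    Triangle tr = { q[0], q[1], q[2] };
                    tris.push_back(tr);
                }
            }
        }
}

int main()
{
    std::vector<std::vector<double> > range(3, std::vector<double>(4, 5.0));   // tiny synthetic scan
    std::vector<Point3> pts; std::vector<Triangle> tris;
    meshScan(range, -0.1, 0.01, 0.0, 0.01, 0.5, pts, tris);
    return tris.empty() ? 1 : 0;
}
```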
9.4. [Possible lab project] Take a sample LRF file (available with this exercise on the book’s website) and implement a program which visualizes these data as a 3D point cloud. Secondly, mesh the points by (simply) using a neighborhood relation. Delete incorrect triangles which result from using only the neighborhood relations, and visualize your result. This method is possible for one 2.5D LRF scan. What is the procedure for combining several LRF scans together?
9.5. [Possible lab project] Modify the program in Exercise 9.4 to render two virtual camera attitudes into the same rendering context. In so doing, mask the rendering buffer for the first camera location for red = true, blue = false and green = false. Then render the second camera attitude with inverse masking into the same buffer. Use red-cyan anaglyph eyeglasses for viewing the resulting anaglyph image. Experiment with different base distances between both cameras.
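A sketch of the masking idea (for brevity the “scene” is just one triangle; drawScene() stands in for your meshed LRF scene of Exercise 9.4, and setCamera() for your own camera handling): each eye is rendered with its own glColorMask, and only the depth buffer is cleared between the two passes.

```cpp
#include <GL/glut.h>

double baseDistance = 0.1;                               // distance between the two virtual cameras

void drawScene()                                         // stand-in for the Exercise 9.4 scene
{
    glBegin(GL_TRIANGLES);
        glVertex3f(-0.5f, -0.5f, -2.0f);
        glVertex3f( 0.5f, -0.5f, -2.0f);
        glVertex3f( 0.0f,  0.5f, -2.5f);
    glEnd();
}

void setCamera(double eyeOffset)                         // camera shifted horizontally by eyeOffset
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslated(-eyeOffset, 0.0, 0.0);
}

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    setCamera(-0.5 * baseDistance);                      // first (left) camera attitude
    glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_TRUE);   // red channel only
    drawScene();

    glClear(GL_DEPTH_BUFFER_BIT);                        // keep colours, reset depth
    setCamera(+0.5 * baseDistance);                      // second (right) camera attitude
    glColorMask(GL_FALSE, GL_TRUE, GL_TRUE, GL_TRUE);    // green and blue (cyan)
    drawScene();

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);     // restore full colour writes
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("Exercise 9.5");
    glEnable(GL_DEPTH_TEST);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```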
Chapter 10
10.1. Verify the model equations for the three different panoramic sensor categories, by calculating (synthetic) object points as a function of Pw = F(i, j, λ) in 3D space, and by then determining the resulting image coordinates as a function of i/j = F(Pw). Use a λ of your choice when calculating the test object points.
10.2. [Possible lab project] This exercise is about pose estimation of a panoramic sensor.
10.3. [Possible lab project] This exercise is about mapping texture coordinates onto the VRML model. Use the untextured test room from Exercise 9.4(2). For each vertex, calculate the corresponding image coordinate, with given camera attitude from Exercise 10.2(1); save the texture coordinates. Load the VRML file, now with the saved texture coordinates, as in Exercise 9.4. When mapping texture coordinates, use multiple camera locations.
10.4. [Possible lab project] Modify Exercise 10.3, now also including a raytracing check as follows:
10.5. [Possible lab project] Use the data available for this exercise on the book’s website for rectifying an airborne image which was recorded with a CCD line camera. For each line, the attitude of the camera in world coordinates is given in a separate attitude file. Project those lines to ground planes of different (assumed) heights and compare results with respect to “geometric correctness”; for this, use a priori knowledge, for example the circular building shown in figures in this chapter. Obviously, different ground plane heights do not only lead to a different scaling of the image. Explain.