Wednesday, 20 April 2016

Lab 6: Geometric Correction

Goals and Background:

This lab introduces a very important image preprocessing exercise known as geometric correction. It is structured to develop your skills in the two major types of geometric correction (image-to-map rectification and image-to-image registration) that are normally performed on satellite images as part of the preprocessing activities prior to the extraction of biophysical and sociocultural information from them.

Methods:

Image-to-map rectification

This geometric correction uses a map coordinate system to rectify/transform the image data pixel coordinates. This particular imagery is from Chicago, Illinois. In order to rectify the images, one needs to access the Ground Control Points (GCPs) via the Multispectral tab, then Control Points, Select Geometric Model, then Polynomial. A first order polynomial transformation was the model used for this image-to-map rectification. With the 1st order model, only 3 GCPs are required. However, these points need to be spread throughout the image to most accurately adjust the incorrect image (figure 1). It is best to place the GCPs in areas of the imagery that are easily recognizable, such as an intersection. When placing a GCP pair, one point is placed on the uncorrected image and another is placed at the same spot on the map image. Once all of the GCPs are placed, one needs to look at the total root mean square (RMS) error in the bottom right corner of the Multipoint Geometric Correction window. The lower the total RMS error, the more spatially accurate the imagery is. For this particular model, we needed to get the total RMS error under 2. My initial total RMS error was not under 2, so I had to go back through each GCP and nudge it until the error shrank, eventually dropping below 2. After correcting the GCP locations, the nearest neighbor interpolation method was run to resample the imagery. The corrected imagery can be viewed in figure 4.
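
For a sense of the math behind the dialog, below is a minimal sketch (in Python with NumPy, not the ERDAS implementation) of how a 1st order polynomial transform is fit to GCP pairs and how the total RMS error is computed. All coordinate values are made up for illustration.

```python
import numpy as np

# Hypothetical GCP pairs: pixel coordinates on the uncorrected image
# and the matching map coordinates on the reference.
src = np.array([[120.0, 340.0], [900.0, 410.0],
                [510.0, 880.0], [300.0, 150.0]])
ref = np.array([[412300.0, 4630500.0], [435600.0, 4628900.0],
                [424100.0, 4615200.0], [417800.0, 4636100.0]])

# 1st order (affine) polynomial: x' = a0 + a1*x + a2*y, y' = b0 + b1*x + b2*y.
# Three GCPs determine it exactly; extra GCPs are fit by least squares,
# which is where nonzero residuals (and the RMS error) come from.
A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
coef_x, *_ = np.linalg.lstsq(A, ref[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, ref[:, 1], rcond=None)

# Residual distance at each GCP, then the total RMS error.
pred = np.column_stack([A @ coef_x, A @ coef_y])
residuals = np.sqrt(((pred - ref) ** 2).sum(axis=1))
total_rms = np.sqrt((residuals ** 2).mean())
print("Total RMS error:", round(total_rms, 4))
```

ERDAS expresses the error in pixels of the input image, whereas this sketch reports it in the reference units, but the formula is the same.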

Figure 1. During the process of assigning GCPs. 

Image-to-image registration

This geometric correction uses a previously corrected image of the same location to rectify/transform the image data pixel coordinates. This particular imagery is from Sierra Leone in Africa. Like the image-to-map rectification, the image-to-image registration required obtaining GCPs from the imagery. However, instead of 3 GCPs, 10 were used for this method. The imagery from Chicago was transformed using a 1st order polynomial transformation, whereas the Sierra Leone imagery used a 3rd order polynomial transformation. The total RMS error needed to be under 1 for this model, as set by Dr. Cyril Wilson; the industry standard, however, is 0.5, or half of a pixel. After tedious work, I was able to achieve a total RMS error of 0.4381 (figure 2). Once the GCPs were collected, the bilinear interpolation method was run to resample the imagery. A comparison of the pre- and post-correction images can be seen in figure 3. The corrected output imagery can be viewed in figure 5.
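
To show what the resampling step actually does, here is a minimal sketch of bilinear interpolation, the method used here. Unlike nearest neighbor, which copies the single closest input pixel, bilinear takes a distance-weighted average of the four surrounding pixels. The array values are made up for illustration.

```python
import numpy as np

def bilinear(band, x, y):
    """Distance-weighted average of the 2x2 input pixels around (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * band[y0, x0] +
            dx * (1 - dy) * band[y0, x0 + 1] +
            (1 - dx) * dy * band[y0 + 1, x0] +
            dx * dy * band[y0 + 1, x0 + 1])

band = np.array([[10.0, 20.0],
                 [30.0, 40.0]])
print(bilinear(band, 0.5, 0.5))  # 25.0, the average of all four pixels
```

The averaging is what gives bilinear output its smoother look compared to nearest neighbor, at the cost of slightly altering the original pixel values.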

Figure 2. Multipoint Geometric Correction interface. Note the 0.4381 total RMS error.


Figure 3. Pre and post geometric correction for the 3rd order polynomial of Sierra Leone. The corrected image is below the uncorrected image.

Results:

Image-to-map rectification:

Below is the result of the image-to-map rectification. This uses a planimetrically correct map as a reference for an uncorrected image. The image is corrected via GCPs to 'match up' the features on the image to the map (figure 4).



Figure 4. The 1st order polynomial corrected image is on the left and the uncorrected image is on the right.
Image-to-image registration:

Image-to-image registration is similar to image-to-map rectification, except that it uses an already corrected image as the reference. This 3rd order polynomial transformation used 10 GCPs instead of the 3 that were used for the 1st order image-to-map rectification. Note that the output image for image-to-image registration is very hazy. Ideally this haze would be corrected before executing a proper analysis (figure 5).


Figure 5. The geometrically corrected image for the 3rd order polynomial.
Conclusion:

Although geometric correction can seem like a hassle at the time, it is vital for accurate image analysis. The more GCPs used for a geometric correction, the more accurate the resulting image. It was difficult at first to get the total RMS error under 2 (for image-to-map rectification) and under 1 (for image-to-image registration). However, the longer I practiced, the quicker I was able to accurately locate the GCPs. I think this process will go much more smoothly in the future! It is also important to be aware of the imagery one uses. The bilinear interpolation output for the image-to-image registration is very cloud heavy, which will make analysis very difficult. It is important to consider both the image to be corrected and the interpolation method in order to produce the best output image.

Sources: 

Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey.

Digital raster graphic (DRG) is from Illinois Geospatial Data Clearing House.

Sunday, 17 April 2016

Lab 5: LiDAR

Goals and Background:

The main goal of this lab exercise is for students to gain basic knowledge of LiDAR data structure and processing. Specific objectives encompass 1) processing and retrieval of various surface and terrain models, and 2) processing and creation of an intensity image and other derivative products from a point cloud. LiDAR is one of the recently expanding areas of remote sensing, with significant growth in new job creation. This lab will include working with LiDAR point clouds in LAS file format.

LiDAR is an active remote sensing system which utilizes a suborbital sensor. These sensors can be attached to manned (airplanes) or unmanned (drones) aerial systems. The lasers on the sensors send pulses to the ground; as they come into contact with objects such as trees, a 'return' is sent back to the sensor. As shown in figure 1, a single pulse can produce many returns before it reaches the bare earth.
Figure 1. Diagram of how LiDAR works using a suborbital sensor, in this case a plane. The laser sends out a pulse, which returns to the sensor many times before it hits the bare earth.

Methods:

Point cloud visualization in Erdas Imagine

If one has never seen a LiDAR point cloud, it can be very hard to visualize its output. Our first step in this lab was to gain a raw understanding of what a point cloud consists of. The image below displays 'returns'. The red points indicate the first return, whereas the blue points indicate the last return or no return at all (figure 2).
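
For readers who want to poke at a point cloud outside of Erdas Imagine, the open-source laspy library (not part of this lab's workflow) can read a LAS file and separate the returns the same way. The tile name below is hypothetical.

```python
import laspy  # pip install laspy

las = laspy.read("eau_claire_tile.las")  # hypothetical tile name

# Each laser pulse can yield several returns; these two point
# dimensions identify where each point falls in that sequence.
first = las.return_number == 1
last = las.return_number == las.number_of_returns
print(f"{int(first.sum())} first returns and {int(last.sum())} "
      f"last returns out of {len(las.points)} points")
```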


Figure 2. LiDAR point cloud of Eau Claire, Wisconsin.

Generate a LAS dataset and explore LiDAR point clouds with ArcGIS

The rest of this lab was conducted in ArcMap. In order to do this, however, the LAS dataset (the LiDAR point cloud) needed to be projected. When a LAS dataset is not projected, one must have a tile index and metadata to ensure a correct projection. To begin this process, a LAS dataset was created in ArcMap. Under the LAS Dataset Properties window, all of the LAS tiles of Eau Claire county were added. The dataset statistics were run in order to determine basic information such as the x, y, and z max/min/range. Examining these numbers verifies that the data are correct (e.g. the z values match the elevation of the study area).
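
The same steps can also be scripted with arcpy. Below is a minimal sketch using the standard CreateLasDataset and LasDatasetStatistics geoprocessing tools; the paths are hypothetical.

```python
import arcpy

las_tiles = r"C:\lab5\las_tiles"         # folder of Eau Claire county .las files
las_dataset = r"C:\lab5\eau_claire.lasd"

# Build the LAS dataset and compute statistics (x/y/z min, max, and
# range, point counts per return and class, etc.).
arcpy.CreateLasDataset_management(las_tiles, las_dataset,
                                  compute_stats="COMPUTE_STATS")
arcpy.LasDatasetStatistics_management(las_dataset,
                                      out_file=r"C:\lab5\las_stats.csv")
```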

If no coordinate system is established for the LAS files, projecting the LAS dataset can be done within the LAS Dataset Properties window, under the XY Coordinate System and Z Coordinate System tabs. It is important to consult the metadata when assigning a coordinate system. NAD 1983 HARN Wisconsin CRS Eau Claire (US Feet) was used for the XY coordinate system and NAVD 1988 (US feet) was used for the Z coordinate system for this lab (figure 3). In order to ensure that the tiles were projected correctly, a county shapefile was used to compare locations.
Figure 3. LAS dataset points in ArcMap. Notice the points' elevation variation in the upper left corner. Also, the white areas indicate areas of 'no data'; this is because these areas are comprised of water and therefore do not provide a 'returnable' surface for the lasers.
When displaying the points as a TIN in ArcMap, there appear to be some anomalies that have drastically higher first returns than the surrounding landscape. This is likely due to some form of interference, such as a bird or something of the sort (figure 4).

Figure 4. First Return anomalies. Note the 'red' dots in the water body; these are likely birds or some other interference.
Another useful tool in ArcMap is the 3D viewer, which allows the viewer to see their LiDAR point clouds in...you guessed it...3D! (figure 5). The images can also be viewed as a profile. The profile below is of a cross section of the Chippewa River (figure 6).
Figure 5. 3D viewer in ArcMap.

Figure 6. 3D profile in ArcMap.
Various models were used for the rest of the lab. These models represent the ways in which LiDAR is most often put to use. Before the models could be built, however, parameters needed to be set. These were established under the Layer Properties for the LAS dataset. The parameters included displaying the points as elevation via first returns only. In order to be able to run the models, the LAS dataset needed to be converted to a raster. This was done using the LAS Dataset to Raster tool. Parameters within this tool included setting the Value Field = Elevation, Cell Type = Maximum, Void Filling = Natural Neighbor, and Cell Size = 6.56168 feet (or 2 meters).
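
Scripted with arcpy, the conversion looks roughly like the sketch below. The tool and parameter keywords come from the standard LAS Dataset To Raster tool, while the paths are hypothetical.

```python
import arcpy

# Mirrors the dialog settings: Value Field = Elevation, Cell Type =
# Maximum, Void Filling = Natural Neighbor, Cell Size = 6.56168 ft (2 m).
arcpy.LasDatasetToRaster_conversion(
    in_las_dataset=r"C:\lab5\eau_claire.lasd",
    out_raster=r"C:\lab5\dsm.tif",
    value_field="ELEVATION",
    interpolation_type="BINNING MAXIMUM NATURAL_NEIGHBOR",
    sampling_type="CELLSIZE",
    sampling_value=6.56168)
```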

Multiple images were created once the dataset was converted to a raster. This included creating digital surface models (DSM) (output image in figure 7) and digital terrain models (DTM) (output image in figure 8). A DSM includes above-ground features such as buildings and trees, whereas the DTM only shows the bare earth without any building features on it.
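
The difference between the two models comes from which points get rasterized. A sketch of how that filtering can be scripted with the Make LAS Dataset Layer tool is below; the assumption that ground points carry the standard LAS class code 2 depends on how the tiles were classified, and the paths are hypothetical.

```python
import arcpy

lasd = r"C:\lab5\eau_claire.lasd"  # hypothetical path

# DSM: keep return number 1 (first returns, tops of trees and buildings).
arcpy.MakeLasDatasetLayer_management(lasd, "dsm_layer", return_values=["1"])

# DTM: keep only ground-classified points (LAS class code 2 by convention).
arcpy.MakeLasDatasetLayer_management(lasd, "dtm_layer", class_code=[2])
```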

Once the two main rasters were created, more rasters were derived from them, this time using the hillshade tool (output images in figures 9 and 10).
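
A sketch of the hillshade step using the 3D Analyst HillShade tool in arcpy follows; the paths are hypothetical and the tool's default sun position (azimuth 315, altitude 45) is used.

```python
import arcpy

arcpy.CheckOutExtension("3D")
# Shade both models with the default sun position (azimuth 315, altitude 45).
arcpy.HillShade_3d(r"C:\lab5\dsm.tif", r"C:\lab5\dsm_hillshade.tif")
arcpy.HillShade_3d(r"C:\lab5\dtm.tif", r"C:\lab5\dtm_hillshade.tif")
```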

The final step of this lab was highlighting the 'Intensity' of the image. This was done in the same way as the other images in this lab, except the Value Field was changed to 'Intensity' (output image in figure 11).
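
Scripted, it is the same conversion as before with only the value field swapped, roughly:

```python
import arcpy

# Same binning settings as the elevation rasters; only the value
# field changes, so cells now hold return-signal strength.
arcpy.LasDatasetToRaster_conversion(
    r"C:\lab5\eau_claire.lasd", r"C:\lab5\intensity.tif",
    value_field="INTENSITY",
    interpolation_type="BINNING MAXIMUM NATURAL_NEIGHBOR",
    sampling_type="CELLSIZE",
    sampling_value=6.56168)
```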

Results:

The digital surface model (DSM) illustrates the LiDAR points collected from the first return. This imagery includes buildings, trees, and other features that are 'picked up' by the LiDAR sensor (figure 7).


Figure 7. Digital Surface Model (DSM). 
The digital terrain model (DTM), on the other hand, gives a very generalized picture of the landscape. This makes sense because the DTM uses the last return, thus giving the 'closest to the actual ground' elevation (figure 8).

Figure 8. Digital Terrain Model (DTM).
The hillshade tool was run on the existing DSM imagery seen above in figure 7. Rather than looking like a photo, as the DSM does, the hillshade 'normalizes' the black and white gradient and really draws attention to the individual buildings and landscape features by casting shading from all of the features (figure 9).
Figure 9. Hillshade of DSM.
Similar to the hillshade of the DSM, the hillshade of the DTM draws more attention to the landscape. However, the DTM highlights the overall natural landscape, such as the paleochannels, much more than buildings and the like. This is due to the fact that this imagery is produced from the last return of the LiDAR sensor, thus replicating the bare earth (figure 10).

Figure 10. Hillshade of DTM.
An intensity image was created to illustrate the strongest return signal (the peak voltage recorded for each pulse). Having this imagery aids in interpreting and classifying LiDAR masspoints, which is used in lidargrammetry (figure 11).

Figure 11. Intensity image of Eau Claire LiDAR raster.

Conclusion:

LiDAR technology is quite new compared to satellite remote sensing. Unlike images from space such as Landsat, LiDAR allows the user to truly customize the extent of their imagery, the density of points collected, and the temporal resolution, among many other factors. I am looking forward to doing further LiDAR analysis in the future, as just the few things that I have done in this lab have been exciting. I have used DTMs in the past, but never created them for myself. LiDAR is another useful tool to add to my geospatial toolbox.

Sources:

LiDAR point cloud and Tile Index are from Eau Claire County, 2013.

Eau Claire County Shapefile is from Mastering ArcGIS 6th Edition data by Margaret Price, 2014.