Thursday, December 10, 2015

Lab 8

Goal

The main goal of this lab is to gain experience in the measurement and interpretation of the spectral reflectance (signatures) of various Earth surface and near-surface materials captured in satellite images. In this lab, we will learn how to collect spectral signatures from remotely sensed images, graph them, and analyze them to verify whether they pass the spectral separability test discussed in lecture. This is a prerequisite for image classification, which will be covered in the advanced version of this class. At the end of this lab, we will be able to collect and properly analyze spectral signature curves for various Earth surface and near-surface features in a multispectral remotely sensed image.


Methods

For lab eight, we weren't given a whole lot of instruction; it may have been the shortest lab yet. We were asked to collect signatures for twelve different materials and surfaces from the eau_claire_2000.img image: standing water, moving water, vegetation, riparian vegetation, crops, urban grass, dry soil, moist soil, rock, asphalt highway, airport runway, and concrete surface. We were given instructions for digitizing the first surface; Lake Wissota is a perfect place to digitize standing water. After digitizing, we entered the digitized area into the Signature Editor, and we were then told to digitize areas for the rest of the surfaces. Honestly, it was pretty difficult to find some of the surfaces on the map with a large enough area to digitize effectively, so I felt like some of my plotted lines might not be as accurate as they should be. Once the data was in the Signature Editor, we could plot the line for each surface in the signature mean plotter. That was the largest part of the lab; the rest was analyzing the plots and comparing them to one another.
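For a sense of what the Signature Editor does behind the scenes, here is a minimal Python sketch that averages the pixel values inside a digitized area band by band and plots the result as a signature curve. The AOI coordinates are made up for illustration, and rasterio and matplotlib stand in for Erdas:

```python
# Minimal sketch: mean pixel value per band inside an AOI, plotted as
# a spectral signature curve. The AOI coordinates are hypothetical.
import matplotlib.pyplot as plt
import rasterio
from rasterio.mask import mask
from shapely.geometry import box

with rasterio.open("eau_claire_2000.img") as src:
    # Hypothetical box over standing water, in the image's map units
    aoi = box(615000, 4970000, 615500, 4970500)
    data, _ = mask(src, [aoi], crop=True, nodata=0)

# Per-band mean, ignoring the nodata fill outside the AOI
signature = [band[band != 0].mean() for band in data]

plt.plot(range(1, len(signature) + 1), signature, marker="o")
plt.xlabel("Band")
plt.ylabel("Mean pixel value")
plt.title("Spectral signature: standing water")
plt.show()
```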

Results

The following images are the individual plots, followed by a combined plot of all of the digitized surfaces from the lab. The final image compares dry and moist soil.

Sources

All sources were provided to us through the Geog 338 drive. The image eau_claire_2000.img was included in the sources provided.

Thursday, December 3, 2015

Lab 7

Goal

The main goal of this laboratory exercise is to develop our skills in performing key photogrammetric tasks on aerial photographs and satellite images. Specifically, the lab is designed to train us in understanding the mathematics behind the calculation of photographic scales, the measurement of areas and perimeters of features, and the calculation of relief displacement. Moreover, this lab is intended to introduce us to stereoscopy and to performing orthorectification on satellite images. At the end of this lab exercise, we will be in a position to perform diverse photogrammetric tasks.


Methods

This lab was divided into three different parts. Part one revolved around scales, measurements, and relief displacement. The first section of part one was simply to find the scale of two images. For the first image, we were given a real-world measurement from one point to another; we just needed to make the same measurement on the image, set the two up as a ratio, and simplify the fraction to get a clean scale. For the second image, we were given the flying height and the focal length of the lens, and the scale comes from dividing the focal length by the height above the terrain. In the second section of part one, we found the perimeter and area of a lagoon in the image. To do this, we digitized a polygon on the image, and the final polygon gave us the measurements we were looking for. In the final section of part one, we calculated the relief displacement of a smokestack in an image. That problem required the relief displacement equation: once we took the measurements and plugged in the given numbers, the equation gave us the displacement.
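Both calculations are easy to check by hand. Here is a worked sketch of the two formulas in Python; the numbers are made up for illustration, not the lab's actual values:

```python
# Worked sketch of the part one formulas with made-up numbers.

def photo_scale(focal_length_m, flying_height_m, terrain_elev_m=0.0):
    """Scale = f / (H - h): focal length over height above terrain."""
    return focal_length_m / (flying_height_m - terrain_elev_m)

def relief_displacement(radial_dist, obj_height, flying_height):
    """d = (r * h) / H, with r measured on the photo from the
    principal point to the top of the object."""
    return (radial_dist * obj_height) / flying_height

# e.g. a 152 mm lens flown 6,000 m above the ground
s = photo_scale(0.152, 6000.0)
print(f"Scale ~= 1:{round(1 / s):,}")          # ~1:39,474

# e.g. smokestack: r = 10.5 cm on the photo, h = 100 m, H = 1,220 m
print(relief_displacement(10.5, 100.0, 1220.0), "cm of displacement")
```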

Part two of the lab was all about stereoscopy. For this part, we used the Anaglyph Generation tool to create an anaglyph image, which we observed through Polaroid glasses. The anaglyph produced a 3D effect in which we were able to observe elevation changes in Eau Claire. For the most part the elevation changes were accurate, but there were a few areas with sudden elevation changes where there shouldn't have been any.

Part three of the lab was the largest section, and it had us work with orthorectification. We were asked to create a new project in Erdas Imagine, bring two images into the same viewer, and then go into the IMAGINE Photogrammetry tool. Next we created a new block file, which involved a lot of small steps to make sure the block file was set up exactly how we wanted it. In the next section of part three, we added imagery to the block and defined the sensor model. This part was relatively quick; we just added our image and configured it so that all of the data would work together.

The next section was the most time consuming. We activated the Point Measurement tool and collected GCPs on our images, adding 11 different GCPs across the images to make sure they were all tied together well. After confirming that the X and Y coordinates matched up on all of the images, we added Z coordinates to each point. Once everything was correctly tied, we went on to the last section of part three: automatic tie point collection, triangulation, and ortho resampling. Automatic tie point collection lets the program gather many more points, making sure the image is accurately tied down in many different areas; the more points entered, the easier it is for the program to output an accurate result. Once we double-checked all of our points, it was time to perform triangulation. We filled out the necessary information and ran it, then started the ortho resampling process, again filling out the dialog box and making sure all of the information was entered correctly. After running the process, we were able to bring the two orthorectified images into the Erdas viewer. I was amazed by how accurately the two images stacked on top of each other. An image of my final orthorectified result appears in the results section below.
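All of this ran through the IMAGINE Photogrammetry GUI, so there is no code from the lab itself, but for a rough sense of a scripted equivalent: GDAL can orthorectify an image that ships with an RPC sensor model against a DEM. This is a hedged sketch under those assumptions, not the SPOT block-file workflow we actually used, and the file names are hypothetical:

```python
# Not the IMAGINE Photogrammetry workflow from the lab, just a rough
# scripted analogue: warp an image that carries RPC metadata against
# a DEM. File names are hypothetical.
from osgeo import gdal

gdal.UseExceptions()

ortho = gdal.Warp(
    "ortho_out.tif",
    "raw_scene.tif",                      # image with embedded RPCs
    rpc=True,                             # use the RPC sensor model
    transformerOptions=["RPC_DEM=palm_springs_dem.tif"],
    resampleAlg="bilinear",
)
ortho = None  # flush and close the output
```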


Results




Sources

All data was provided to us.

National Agriculture Imagery Program (NAIP) images are from the United States Department of Agriculture, 2005. The Digital Elevation Model (DEM) for Eau Claire, WI is from the United States Department of Agriculture Natural Resources Conservation Service, 2010. SPOT satellite images are from Erdas Imagine, 2009. The digital elevation model (DEM) for Palm Springs, CA is from Erdas Imagine, 2009. National Aerial Photography Program (NAPP) 2-meter images are from Erdas Imagine, 2009.

Thursday, November 19, 2015

Remote Sensing Lab 6

Goals

This lab is designed to introduce us to a very important image preprocessing exercise known as geometric correction. The lab is structured to develop our skills in the two major types of geometric correction that are normally performed on satellite images as part of preprocessing, prior to the extraction of biophysical and sociocultural information from the images.


Methods

The first part of this lab dealt with image-to-map rectification. We started off in Erdas Imagine, comparing the Chicago_drg.img image to Chicago_2000.img. We needed to use a first-order polynomial equation to geometrically correct Chicago_2000.img, so we used the Multipoint Geometric Correction window to place GCPs on both maps. Because this was a first-order correction, we only needed four GCPs. Once all the points were placed, we made slight adjustments to their placements to lower our root mean square (RMS) error; for this part, we just needed to get it below 2.0. The comparison between the original image and the corrected image is shown below, after a short aside on the math.
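The RMS error that Erdas reports is just the residual of a least-squares fit of the chosen polynomial to the GCP pairs. Here is a minimal numpy sketch of the first-order case, with made-up coordinates rather than the lab's actual points:

```python
# Fit a first-order (affine) polynomial from source to reference GCPs
# by least squares, then report the RMS of the residuals. The GCP
# values below are made up for illustration.
import numpy as np

# (x, y) in the distorted image -> (X, Y) on the reference map
src = np.array([[102.0, 340.0], [510.0, 95.0], [870.0, 400.0], [450.0, 780.0]])
ref = np.array([[101.5, 341.0], [511.0, 94.0], [869.0, 401.5], [451.0, 779.0]])

# Design matrix for X = a0 + a1*x + a2*y (and likewise for Y)
A = np.column_stack([np.ones(len(src)), src])
coef, *_ = np.linalg.lstsq(A, ref, rcond=None)

residuals = A @ coef - ref
rmse = np.sqrt((residuals ** 2).sum(axis=1).mean())
print(f"Total RMS error: {rmse:.4f}")
```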





In part two of the lab, we did a similar exercise, this time using image-to-image registration. We were working with two different images of Sierra Leone, one of which had some pretty serious distortion that needed to be fixed. We followed the same process of opening the Multipoint Geometric Correction window; the difference was that we changed it to a third-order polynomial, which meant we needed at least ten GCPs on each map. Because of this, when we finished plotting points, our RMS error was very high. We had to adjust all the points ever so slightly in order to achieve an RMS error below 1.0. I was actually able to get my RMS error down to 0.0125, which I felt pretty good about. The screenshot of both images with their GCPs in place follows the note below.
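The jump from a handful of points to at least ten isn't arbitrary: an order-t polynomial in x and y has (t + 1)(t + 2) / 2 coefficients per axis, and you need at least that many control points to solve for them. (The strict minimum for first order is actually three, so our four points in part one included one redundant check point.)

```python
# Why a 3rd-order polynomial needs at least ten GCPs: an order-t
# polynomial in x and y has (t + 1)(t + 2) / 2 coefficients per axis.
for t in (1, 2, 3):
    terms = (t + 1) * (t + 2) // 2
    print(f"order {t}: minimum {terms} GCPs")
# order 1: minimum 3 GCPs
# order 2: minimum 6 GCPs
# order 3: minimum 10 GCPs
```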





Results

From this lab, we learned how to deal with some of the distortion we might find in the images we work with. Although this was a shorter lab compared to the others, geometric correction is a very important tool when working with remote sensing images. This lab helped us work through distorted images and develop more accurate ones.

Thursday, November 12, 2015

Lab 5

Goal

The goal of this lab is to gain basic knowledge of lidar data structure and processing. The objectives include the processing and retrieval of various surface and terrain models, and the creation of an intensity image and other derivative products from a point cloud.


Methods

Part one of lab 5 was "Point cloud visualization in Erdas Imagine". In this section we used Erdas to view a lidar point cloud file, looking at things like the metadata and the tile index. We then opened the same file in ArcMap, which gives a better interface for working with the data.

The real fun started in part two of the lab, titled "Generate a LAS dataset and explore lidar point clouds with ArcGIS". We were supposed to pretend that we were a GIS manager working on a project for the City of Eau Claire, having acquired a lidar point cloud in LAS format for a portion of the city. We first wanted to run an initial quality check on the data by looking at its area and coverage, and to verify the current classification of the lidar. We were given the tasks of creating a LAS dataset, exploring its properties, and visualizing it as a point cloud in 2D and 3D.

We started off by creating a LAS folder and a dataset in that folder, then added the data that was given to us. From there, we could look at all the data and observe the values that came with it. We assigned a coordinate system to the dataset for both the XY coordinates and the Z values. Once the dataset was finished, we brought it into ArcMap and explored the data, looking at the elevation and point density on the map.

In part 3 of the lab, we worked on the "generation of lidar derivative products". This meant deriving DSM and DTM products from our point cloud data. We used tools in ArcMap to create our DSM, DTM, and hillshade images; one of the hillshade images is included below.

In the last section of the lab, we derived a lidar intensity image from the point cloud data. A screenshot of my intensity image, taken from Erdas Imagine, also appears below, after a short sketch of the gridding idea.
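ArcMap's LAS tools did the work for us, but the core idea behind a DSM and a DTM is simple binning of the point cloud. Here is a minimal, hedged sketch with laspy and numpy; the file name and the 2 m cell size are my assumptions, and class 2 marks ground returns per the LAS specification:

```python
# Deriving a rough DSM and DTM by gridding a point cloud.
# File name and cell size are assumptions, not the lab's parameters.
import laspy
import numpy as np

las = laspy.read("eau_claire_tile.las")
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)

cell = 2.0  # grid cell size in map units (assumed meters)
cols = ((x - x.min()) / cell).astype(int)
rows = ((y - y.min()) / cell).astype(int)
shape = (rows.max() + 1, cols.max() + 1)

# DSM: highest return per cell (rooftops, canopy, etc.)
dsm = np.full(shape, np.nan)
np.fmax.at(dsm, (rows, cols), z)

# DTM: lowest ground-classified return per cell (class 2 = ground)
ground = np.asarray(las.classification) == 2
dtm = np.full(shape, np.nan)
np.fmin.at(dtm, (rows[ground], cols[ground]), z[ground])
```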

Sources

All data used was given to us and explored in ArcMap and Erdas Imagine. 

Thursday, October 29, 2015

Lab 4

Remote Sensing Lab 4

Jeff Schweitzer


Goal and Background

Lab four was designed to help me learn how to execute miscellaneous image functions. These include: delineating a study area from a larger satellite image scene, demonstrating how the spatial resolution of images can be optimized, introducing some radiometric enhancement techniques, linking a satellite image to Google Earth, introducing various methods of resampling satellite images, exploring image mosaicking, and exposing me to binary change detection.


Methods and Results

In lab four, we started out by going over image subsetting. There were two ways in which we did this: the first was with an inquire box, and the second was with an area of interest (AOI) shapefile. The images that follow the sketch below are the result of part one of our lab.
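As a rough scripted analogue of the AOI subset, here is a hedged sketch that clips a raster to a shapefile with rasterio and fiona; the file names are hypothetical, not the lab's actual data:

```python
# Clip a raster to an AOI shapefile, roughly what the Erdas subset
# with an AOI layer does. File names are hypothetical.
import fiona
import rasterio
from rasterio.mask import mask

with fiona.open("study_area.shp") as shp:
    shapes = [feature["geometry"] for feature in shp]

with rasterio.open("full_scene.img") as src:
    subset, transform = mask(src, shapes, crop=True)
    meta = src.meta.copy()
    meta.update(height=subset.shape[1], width=subset.shape[2],
                transform=transform)

with rasterio.open("subset.img", "w", **meta) as dst:
    dst.write(subset)
```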




In part two of lab four, we learned image fusion. We started with a coarse-resolution image and improved its spatial resolution to make the final image more appealing. We utilized pansharpening to make the image easier to view at a large scale, combining our image with its panchromatic counterpart, which has a higher resolution. The end result was a clearer version of our original image.
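Erdas handles this through its resolution merge tool; one classic pan-sharpening method of this kind is the Brovey transform, sketched below in numpy. The function names are mine, and it assumes the multispectral bands have already been resampled onto the panchromatic band's grid:

```python
# Brovey-transform pan-sharpening sketch: scale each multispectral
# band by the ratio of the pan band to the bands' mean intensity.
# r, g, b, pan are float arrays of the same shape.
import numpy as np

def brovey(r, g, b, pan, eps=1e-6):
    intensity = (r + g + b) / 3.0
    ratio = pan / (intensity + eps)   # eps avoids division by zero
    return r * ratio, g * ratio, b * ratio
```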

Part three of the lab was where we learned a simple radiometric enhancement technique. This is useful for reducing the haze that sometimes appears in satellite images, allowing us to view the image clearly and completely.
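The lab used Erdas's built-in haze reduction, but a classic manual approach with the same intent is dark object subtraction: assume the darkest pixel in each band should be near zero and subtract whatever offset the haze added. A minimal numpy sketch:

```python
# Dark object subtraction: remove each band's minimum value, on the
# assumption that the darkest pixel should be ~0 and any offset is
# atmospheric haze. `image` is a (bands, rows, cols) array.
import numpy as np

def dark_object_subtraction(image):
    dark = image.reshape(image.shape[0], -1).min(axis=1)
    return image - dark.reshape(-1, 1, 1)
```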

In the fourth part of lab four, we learned how to link an image viewer to Google Earth. This was very interesting to work with, and it made it easier to compare the land to the image. We also learned in this section that Google Earth can be considered a selective image interpretation key.

In part five of lab four, we worked with resampling. We resampled an image using both the nearest neighbor method and the bilinear interpolation method. When we compared the nearest neighbor resampled image to the original image, there wasn't much of a difference. However, when we observed the bilinear resampled image, we could see a definite improvement in visual appeal.
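A quick way to reproduce that comparison outside Erdas is scipy's zoom, where interpolation order 0 is nearest neighbor and order 1 is bilinear; the input band here is random data just for illustration:

```python
# Nearest neighbor vs. bilinear resampling of a single band.
import numpy as np
from scipy.ndimage import zoom

band = np.random.randint(0, 255, (100, 100)).astype(float)

nearest = zoom(band, 2.0, order=0)   # blocky, keeps original values
bilinear = zoom(band, 2.0, order=1)  # smoother, interpolates new values
```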

In part six of lab four, we worked with image mosaicking, combining two images that overlapped and connected to each other. The first method we used was Mosaic Express, which produced an image where you could clearly tell where the two images were joined. The second method we used was MosaicPro. This method allowed for much more customization; it took longer to produce, but the end result was a much higher quality mosaic. The images following the sketch below are the results from part six.
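For a scripted stand-in, rasterio's merge stitches overlapping rasters in roughly the way Mosaic Express does, without the color balancing that MosaicPro adds; the file names are hypothetical:

```python
# Merge two overlapping rasters into one mosaic array.
import rasterio
from rasterio.merge import merge

sources = [rasterio.open(p) for p in ("scene_north.img", "scene_south.img")]
mosaic, transform = merge(sources)   # (bands, rows, cols) array

for src in sources:
    src.close()
```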



Part seven of lab four taught us binary change detection, otherwise known as image differencing. In section one we created a difference image using the "Two Input Operators" interface; by viewing its histogram, we could figure out where the pixels that differed between the two images were. In the second section, we used a formula to map out the changed pixels instead of just creating an image to show them. The results of both sections follow the sketch below.
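Here is a hedged numpy sketch of the idea: difference two co-registered bands, then flag pixels falling outside the mean plus or minus 1.5 standard deviations of the difference. The 1.5 multiplier is my assumption; the lab's exact threshold formula may differ:

```python
# Binary change detection sketch: difference two dates of the same
# band and flag pixels outside mean +/- k standard deviations.
import numpy as np

def change_mask(band_t1, band_t2, k=1.5):
    diff = band_t2.astype(float) - band_t1.astype(float)
    mu, sigma = diff.mean(), diff.std()
    return (diff < mu - k * sigma) | (diff > mu + k * sigma)
```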