Tuesday, May 6, 2014

Lab 8: Spectral Signature Analysis

In this lab, students conducted exercises to improve their understanding of the spectral reflectance signatures of various Earth surface and near-surface features.  To practice this skill, students used a Landsat ETM+ image of north central Wisconsin and eastern Minnesota to digitize and record the spectral tendencies of various features.

Specifically, we were asked to collect spectral data on the following features:







To obtain these spectral signatures, all one has to do is draw a polygon containing a uniform piece of the land or feature of interest and run an operation using the signature editor tool under the Raster tab to collect the data.  After collecting the information, this is what I was able to obtain:



The data collected showed that plants generally reflect a lot of green light and absorb the red and blue wavelengths.  The only difference was that some plants absorbed more light, indicating that they were more active and producing more food for themselves to consume.  Man-made features like roads, parking lots, and runways had high reflectivity in the longer wavelengths, but absorbed more blue light than other features.
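The averaging that the signature editor performs per polygon can be sketched in Python with NumPy.  The band count, array sizes, and DN values below are hypothetical stand-ins, not the lab data:

```python
import numpy as np

# Hypothetical 4-band image stored as (bands, rows, cols) with 8-bit DNs
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(4, 100, 100))

# Boolean mask standing in for the digitized polygon of uniform land cover
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True

# The spectral signature: mean DN per band inside the polygon
signature = image[:, mask].mean(axis=1)
```

Plotting one such mean value per band against wavelength gives the signature curves compared in this lab.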

This could be one of the most important skills we've learned so far in this class, in that it can be applied to track the condition and function of various systems in the natural environment.



 

Thursday, May 1, 2014

Lab 7: Photogrammetry

The goal of this laboratory exercise was to develop the skills necessary to perform photogrammetric tasks on aerial photographs or satellite images.  Specifically, this lab is tailored to train students to better understand the mathematics involved in calculating scales, measuring areas and perimeters of features, and calculating relief displacement.  Moreover, this lab also offers an introduction to stereoscopy and satellite image orthorectification.

In the first 3 sections of the lab, students used both mathematical and computer-generated methods to calculate scales, record areas/perimeters, and measure relief displacement.  The measurement of areas/perimeters was done via the ERDAS Imagine digitizing tool.

The calculation of scale was done using the equation S = f/H', where:
S = scale
f = focal length
H' = altitude at which the image was taken minus the height of the area above/below sea level
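As a quick worked example of the scale equation (the focal length, altitude, and elevation below are made-up numbers, not values from the lab):

```python
def photo_scale_denominator(focal_length_m, flying_height_m, terrain_elev_m):
    """Return the scale denominator from S = f/H', e.g. 20000 for 1:20,000."""
    h_prime = flying_height_m - terrain_elev_m  # H' from the equation above
    return h_prime / focal_length_m

# Hypothetical: 152 mm focal length, 3,200 m flight altitude, terrain at 160 m
denominator = photo_scale_denominator(0.152, 3200.0, 160.0)  # ≈ 20000, i.e. 1:20,000
```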

Calculating relief displacement was done using the equation D = (h × r)/H, where:

D = displacement
h = height of the object
r = radial distance of the object from the principal point
H = height of the plane taking the photo
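A worked example of the displacement equation, again with hypothetical numbers rather than the lab's measurements:

```python
def relief_displacement(h, r, H):
    """D = (h * r) / H, with h and H in metres and r measured on the photo."""
    return (h * r) / H

# Hypothetical 50 m tower, 90 mm from the principal point, 1,200 m flying height
d = relief_displacement(50.0, 0.090, 1200.0)  # ≈ 0.00375 m (3.75 mm) on the photo
```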



Part 2: Stereoscopy

In this portion of the lab, students uploaded two images into ERDAS Imagine: one was an image of the city of Eau Claire and the other was a digital elevation model (DEM) of the same area.  Students then used an anaglyph operation to create a stereoscopic image of the area.  With polarized glasses, looking at this image produces a 3D view of the area showing both the features of the land and the varying relief across the city.




The image produced was accurate in its relief, but also introduced an element of geometric scaling error.  This error is produced when an image has height variation but the same general scale.  The result is taller features displaying relief displacement, where they are oriented at odd angles relative to their actual appearance in the real world.

Here we see a smokestack on the UWEC upper campus that is altered due to relief displacement.
 



















Part 3: Orthorectification

Orthorectification is a very intensive process that involves heavy input from the user.  The subject area of the images being corrected was Palm Springs, California.  Using the Leica Photogrammetric Suite (LPS) digital photogrammetry tool, students used control points and tie points to triangulate, and ultimately orthorectify, the images into one.  We used images of multiple scales and also a digital elevation model to create the output image, in which relief displacement and other geometric errors have been removed, along with an overall improvement in accuracy.














Friday, April 18, 2014

Lab 6: Geometric Correction

 
 
For all activities done in this lab, the images used were from the United States Geological Survey 7.5-minute digital raster graphic data collection.

Part 1: Image-to-map rectification

In the first part of the lab, students worked with image-to-map rectification operations. In this case, the image and map in use covered Chicago and surrounding areas near the Wisconsin-Illinois border.
Students conducted a first order polynomial spatial interpolation technique. To do this, students placed ground control points (GCPs) on the same locations on the image (distorted image) and the map (reference source).  The computer then uses these GCPs in algorithms that adjust the image to the points on the map, creating an image that now has accurate geographic properties, closer to its real-world location. The number of GCPs needed for a given operation depends on the degree of the polynomial being used. Since the adjustment is only a minor first order polynomial (linear adjustment), it's only necessary to use 3.  That being said, the lab advises using 4 for the sake of maximized accuracy.  Since the adjustments being made are relatively minor, it is appropriate to use a nearest neighbor resampling method.

I was able to reach RMS error values lower than the requested 2% for all four of my GCPs. One should always strive for the lowest RMS error values possible in order to ensure an accurate image.
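For a sense of what the software is doing, a first order (affine) GCP fit and its RMS error can be sketched in Python.  The GCP coordinates below are hypothetical and were constructed to fit an affine transform exactly, so the error comes out near zero; real GCPs never fit perfectly:

```python
import numpy as np

# Hypothetical GCPs: (x, y) in the distorted image, (X, Y) on the reference map
src = np.array([[10.0, 12.0], [200.0, 15.0], [190.0, 180.0], [20.0, 170.0]])
dst = np.array([[100.0, 506.0], [480.0, 515.0], [460.0, 1010.0], [120.0, 980.0]])

# First order polynomial (affine): X = a0 + a1*x + a2*y, and likewise for Y
A = np.column_stack([np.ones(len(src)), src])   # design matrix
coef, *_ = np.linalg.lstsq(A, dst, rcond=None)  # least-squares fit

pred = A @ coef
rmse = np.sqrt(((pred - dst) ** 2).sum(axis=1).mean())  # total RMS error
```

With 4 GCPs and only 3 unknowns per coordinate, the fit is overdetermined, which is exactly why the lab recommends the extra point.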

Part 2: Image-to-image rectification

This part of the lab covered a part of Sierra Leone, Africa, and required that I follow steps similar to those above, but this time the components are both images.  One image is geometrically accurate and one is not.  This can easily be seen by simply overlaying the two images and using the swipe tool to see how far the features are from their actual locations. Due to the high level of distortion, the operation used was a 3rd order polynomial spatial interpolation. This required that we place a minimum of 10 GCPs on both the reference and distorted images. For the sake of increased accuracy I used 2 extra GCPs, for a total of 12, to correct the distortion.
Due to the high level of correction being done, the resampling method I used was bilinear interpolation.  This method was selected because there was more pixel redistribution due to the high level of the original distortion.

The image produced was still slightly distorted, highlighting the fact that getting the lowest RMS error value is critical.  Even though the lab suggested that I get lower than 1.0% for all the points, it's advised to get that value as low as possible for the sake of accuracy.


 
 
 
 
 
PART 2: Image-to-image rectification: the map on the left is the distorted image; the map on the right is the geometrically accurate image.
PART 2: This is the corrected image laid over the geometrically accurate image. It's nearly perfect, with some distortion still visible at the corners.

 

Wednesday, April 16, 2014

Lab 5: Image mosaic and miscellaneous image functions 2

This lab was given to us in order to introduce various operations in ERDAS Imagine: image mosaicking, spatial/spectral enhancement, band ratioing, and binary change detection.  Each of these operations, in one way or another, allows you to manipulate an image's spectral or spatial qualities in order to make better interpretations about what important features or details an image possesses.

Part 1: Image mosaicking

Image mosaicking is the process of combining two or more images into one in order to increase the visible area of study.  In this lab, we used two different mosaic operations: Mosaic Express and MosaicPro.




MosaicPro
Mosaic Express



As you can tell from the images, MosaicPro offers a much smoother and more aesthetic image.  This is because the operation allows much more user input, meaning students have more options for how the pictures can be blended into one. In this case, we did histogram matching.  Mosaic Express is a much simpler operation that only requests an input file and the name of an output file.



 Part 2: Band ratioing.


In this part of the lab, students used the NDVI (normalized difference vegetation index), which can be summarized as (NIR − Red)/(NIR + Red), in order to highlight areas of high and low vegetation.
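The band math behind NDVI is simple enough to sketch in Python with NumPy (the reflectance values below are made up):

```python
import numpy as np

# Made-up NIR and red reflectance values (floats avoid integer division)
nir = np.array([[0.50, 0.40],
                [0.30, 0.10]])
red = np.array([[0.10, 0.10],
                [0.20, 0.10]])

ndvi = (nir - red) / (nir + red)  # ranges from -1 to +1
# Dense, healthy vegetation pushes NDVI toward +1; bare ground sits near 0
```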


The areas that are not white indicate the patches of the Earth where vegetation has been removed.

















Part 3: Spatial and Spectral Image enhancement

In the first part of this section, students used spatial enhancement techniques to alter the frequency of an image. Frequency is defined as the rate at which brightness values change over a given space.



The image on the left is the original; the image on the right is the image after the 5x5 low pass operation.
The operation conducted was a 5x5 low pass spatial enhancement.  This operation was appropriate due to the high frequency of the original image, which gave it a very salt-and-pepper look.  Conducting the low pass operation made the image much smoother and more consistently toned.
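The idea behind a low pass (mean) filter can be sketched in Python.  The loop-based convolution and the toy salt-and-pepper image below are illustrative, not how ERDAS implements it:

```python
import numpy as np

def low_pass_5x5(img):
    """Mean filter: each output pixel is the average of its 5x5 neighbourhood."""
    padded = np.pad(img, 2, mode='edge')  # replicate edges so output size matches
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + 5, j:j + 5].mean()
    return out

# A toy "salt and pepper" image: isolated bright pixels on a dark background
noisy = np.zeros((10, 10))
noisy[::2, ::2] = 255.0
smooth = low_pass_5x5(noisy)  # bright spikes are averaged down toward the local mean
```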



 



Another filter that students were asked to use was the Laplacian filter.  In this case we were instructed to use a 3x3 Laplacian edge detection operation.  The resulting image provided a more neutral appearance of the colors.  The filter's intent is to increase the contrast in areas where there is a transition.
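One common 3x3 Laplacian kernel (ERDAS may use a different variant) can be sketched in Python:

```python
import numpy as np

# One common 3x3 Laplacian edge-detection kernel; its weights sum to zero
KERNEL = np.array([[ 0, -1,  0],
                   [-1,  4, -1],
                   [ 0, -1,  0]], dtype=float)

def laplacian_3x3(img):
    """Convolve with the kernel: strong response at edges, zero in flat areas."""
    padded = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + 3, j:j + 3] * KERNEL).sum()
    return out

# A uniform region produces zero response everywhere, since the weights cancel
response = laplacian_3x3(np.full((5, 5), 100.0))
```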





The image on the left is the original; the image on the right is the image after the 3x3 Laplacian edge detection.




Section 2: Spectral enhancements


In this portion of the lab students conducted various levels of spectral enhancement based on the type of histogram that the original image had.  The first operation used was a min/max contrast stretch, which is most appropriate for images with low contrast (Gaussian or near-Gaussian histograms).  The other operation we used was a piecewise stretch, which is more appropriate for histograms that have wider ranges of pixel brightness (non-Gaussian histograms).






Min/max contrast stretch. The lowest value of the histogram is stretched to 0 and the max value of the histogram is stretched to 255.
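The min/max stretch is a simple linear rescaling, which can be sketched in Python with made-up brightness values:

```python
import numpy as np

def minmax_stretch(band, new_min=0.0, new_max=255.0):
    """Linearly map [band.min(), band.max()] onto [new_min, new_max]."""
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo) * (new_max - new_min) + new_min

band = np.array([50.0, 100.0, 150.0, 200.0])  # made-up brightness values
stretched = minmax_stretch(band)  # 50 -> 0 and 200 -> 255; the rest scale linearly
```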








Piecewise stretch. A linear enhancement technique employed on various parts of the histogram, based on the portion the user would like to enhance.





The third spectral enhancement technique we used was histogram equalization. This is a nonlinear method where pixels at the peak of the histogram are stretched apart, adding contrast.  At the same time, pixels near the tails of the histogram are clustered together, lowering contrast there.  Overall, though, the contrast of the image is increased and the histogram is flattened out.
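The CDF-based remapping behind histogram equalization can be sketched in Python on a tiny made-up image:

```python
import numpy as np

def equalize(img, levels=256):
    """Remap each pixel through the normalized cumulative histogram (CDF)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size  # fraction of pixels at or below each level
    return np.round(cdf[img] * (levels - 1)).astype(np.uint8)

# Tiny made-up image: most pixels crowd two nearby grey levels
img = np.array([[52, 52, 60],
                [60, 60, 180]], dtype=np.uint8)
out = equalize(img)  # the crowded levels are pushed apart across the full range
```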



The image on the right is the original; the image on the left is the same area after a histogram equalization operation.



 
 

 

 
Part 4: Binary change detection.
 
 
In this portion of the lab, students analyzed the pixel differences between two images of the same area taken years apart.  Through binary change detection, we could combine the two images into one.  This is done through a model of subtraction where you subtract the 1991 image from the 2011 image.  Even with the resulting image, it's nearly impossible to tell where the pixels are different (and therefore where land use/land type is different). In order to tell where change had occurred, we uploaded the image to ArcMap.  In this program we overlaid the 1991 image with the image indicating the changed areas and were thus able to produce this map:
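The subtraction-and-threshold idea can be sketched in Python.  The pixel values are made up, and the mean ± 1.5 standard deviation cutoff is just one common choice, not necessarily the one the lab used:

```python
import numpy as np

# Made-up brightness values for the same band on two dates
band_1991 = np.array([[100, 120],
                      [110, 200]], dtype=np.int16)
band_2011 = np.array([[102, 119],
                      [111,  90]], dtype=np.int16)

diff = band_2011 - band_1991  # the difference image: mostly small values

# Flag pixels whose change is unusually large compared to the rest of the image
changed = np.abs(diff - diff.mean()) > 1.5 * diff.std()
```

Only the pixel whose value dropped sharply between dates is flagged; the small differences everywhere else fall inside the threshold, which is why a raw difference image is so hard to read by eye.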
 

 
 



 

 

 
 
 
All data and images used are from ERDAS Imagine.
 
 

 




















Thursday, March 27, 2014

Lab 4: Miscellaneous image functions

In the second half of our remote sensing class we have more or less moved on from simple image interpretation and are now moving on to ways in which we can alter or combine images in order to produce views of a given area that are better suited for interpretation.  The specific tasks we learned in this lab were: creating subsets, conducting image fusion, radiometric enhancement techniques, linking images to Google Earth, and resampling.


Subsets

There are two ways to create a subset. One way is by using an inquire box, and the other is by using an area of interest (AOI) shapefile. An inquire box works well if you are just looking at a rectangular geographic area of interest, but say your area of interest is not a complete square, like a county.  Under such a circumstance, one would use an area of interest shapefile.




Above is an image created by an inquire box subset.


Above is a subset created by using a shapefile of the area of interest.



Image Fusion

Image fusion is an operation where two or more images are combined to create an image better suited for visual interpretation. In this instance, I conducted a pan sharpen procedure, combining a coarse yet colored image with a panchromatic black and white image. The resulting image is one with a high spatial resolution that also has color.
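ERDAS offers several fusion algorithms, and the lab does not say which one the pan sharpen tool uses; the Brovey transform below is just one simple illustration of the idea, on made-up reflectance values:

```python
import numpy as np

# Made-up coarse multispectral bands and a panchromatic band, already on
# the same grid for simplicity
r = np.array([[0.2, 0.4]])
g = np.array([[0.3, 0.3]])
b = np.array([[0.1, 0.3]])
pan = np.array([[0.9, 0.6]])

# Brovey transform: each sharpened band keeps the colour ratios of the
# multispectral data but takes its overall brightness from the pan band
total = r + g + b
sharp_r, sharp_g, sharp_b = (pan * band / total for band in (r, g, b))
```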




The image on the left is the original reflective image; the one on the right is that image fused with a panchromatic image of the same area.



Radiometric Enhancement Techniques  

Radiometric enhancement techniques are used in operations to enhance an image's spectral and radiometric quality. In this case, the task at hand was haze reduction.  Haze can distort an image so that the features below it are harder to label and distinguish.



The bottom image is distorted by haze. The top picture is the result after I conducted the radiometric enhancement technique known as haze reduction.


Linking Image Viewer to Google Earth

Perhaps one of the most useful tools in ERDAS IMAGINE is the ability to sync the satellite image view you have to the same location and scale as Google Earth.  This provides a great selective interpretation key that allows you to see what certain features are via text and symbols.  By syncing the views, as you scroll and zoom around in ERDAS, one can see various features on Google Earth being labeled via symbols or text.






Resampling

Resampling is the process of changing the pixel size of a given image.  In resampling, pixel sizes can be enlarged or shrunk.  In this instance, I used two different procedures to reduce pixel sizes.
The first method I used is called nearest neighbor. This procedure assigns each output pixel the value of the original pixel closest to it. This is the simplest method, but it can cause 'stair-stepping', where pixels appear to overlap and cause an overall rougher image in comparison to the original.

The second resampling method I used is bilinear interpolation. In this method, the values of the four nearest pixels are distance-weighted and averaged to create each new, smaller pixel. The resulting image is smoother than the original, but will lose some contrast due to the averaging of the pixel values.
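Nearest neighbor enlargement is simple enough to sketch in Python (toy 2x2 image with hypothetical values):

```python
import numpy as np

def nearest_neighbor_enlarge(img, factor):
    """Enlarge by repeating each original pixel factor x factor times."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

img = np.array([[10, 20],
                [30, 40]])              # toy 2x2 image
big = nearest_neighbor_enlarge(img, 2)  # 4x4: each value becomes a 2x2 block
```

The repeated blocks are exactly the 'stair-stepping' described above: no new values are invented, so edges stay hard and blocky.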



Top left: original image
Top right: nearest neighbor resampled image, with visible roughness due to overlapping
Bottom left: bilinear interpolation resampled image, smoother but with less contrast between colors