Tuesday, May 1, 2018

Lab 10: Radar

Goal:

The goal of this lab is to get an introduction to and gain some experience working with radar data and imagery.  In this lab several processes will be applied to radar data: noise reduction using speckle filtering, spectral and spatial enhancement, multi-sensor fusion, texture analysis, polarimetric processing, and slant-range to ground-range conversion.



Methods:

The first technique used was speckle filtering, which cleans up the grainy appearance of a radar image.  This is done using the Radar Speckle Suppression tool in ERDAS Imagine.  The radar image used is from the Shuttle Imaging Radar (SIR-A), and the scene is of Lop Nor Lake in the Xinjiang Province of China.  Three different speckle filters were run, varying the coefficient of variation, the coefficient of variation multiplier, and the window size.  The following image shows the output images and the parameters used for each.  The despeckled images were run through the process multiple times.  The results from these filters can be seen in the results section.

Parameters
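
The exact math inside the Radar Speckle Suppression tool isn't exposed, but its parameters correspond to classic adaptive speckle filters such as the Lee filter.  Below is a minimal Python sketch of that idea, assuming the noise coefficient of variation is supplied by the user (the parameter values are illustrative, not the lab's actual settings).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, window_size=3, noise_cv=0.25):
    """Adaptive (Lee-style) speckle filter.

    noise_cv is the coefficient of variation of the speckle noise;
    window_size is the side length of the local averaging window.
    """
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size=window_size)
    local_sq_mean = uniform_filter(img ** 2, size=window_size)
    local_var = local_sq_mean - local_mean ** 2

    # Local coefficient of variation (std / mean), guarding against divide-by-zero
    local_cv = np.sqrt(np.maximum(local_var, 0)) / np.maximum(local_mean, 1e-12)

    # Weight: near 0 in homogeneous areas (smooth fully), near 1 on edges (keep pixel)
    weight = 1.0 - (noise_cv ** 2) / np.maximum(local_cv ** 2, 1e-12)
    weight = np.clip(weight, 0.0, 1.0)

    return local_mean + weight * (img - local_mean)

# Running the filter repeatedly, as in the lab, smooths the speckle further:
# despeckled = lee_filter(lee_filter(sar_image))
```
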
There are multiple radar image enhancement tools available within ERDAS: the Wallis adaptive filter, Sensor Merge, Texture Analysis, and Brightness Adjustment.

The Wallis adaptive filter is designed to adjust the contrast stretch of an image using only the values within a local region, which makes it widely applicable.  The image input into the adaptive filter tool is Despeckle 4, one of the despeckled images; the window size is set to 3 and the multiplier is set to 3 as well, and these are the only parameters changed in the tool.  The resulting image, labeled "Enhanced," can be seen in the results section of the lab.  The following image is the input image.
Despeckle 4
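
ERDAS's exact Wallis formula is more involved, but the core idea is a contrast stretch around each pixel's local mean, scaled by the multiplier.  A simplified sketch of that local stretch (the window size and multiplier of 3 echo the settings above, but this is not ERDAS's exact formula):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_like_stretch(img, window_size=3, multiplier=3.0):
    """Simplified local contrast stretch: amplify each pixel's deviation
    from the mean of its surrounding window."""
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size=window_size)
    stretched = local_mean + multiplier * (img - local_mean)
    return np.clip(stretched, 0, 255)  # assuming 8-bit imagery
```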

Texture analysis is another enhancement that measures the variation of texture within a radar image, and the Texture Analysis tool in ERDAS is used to accomplish this.  The only parameter changed within the tool is the window size, which is set to 5.  The resulting image, labeled "Texture," can be seen in the results section.  This tool highlights the areas of the image that contain large amounts of texture.  The following image is the input image.

Texture and Brightness Input
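
One common texture measure is local variance: smooth areas score low and rough areas score high.  A minimal sketch with the 5x5 window used above (the variance operator is an assumption; ERDAS offers several texture measures):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, window_size=5):
    """Texture image: variance of pixel values within a moving window."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size=window_size)
    mean_of_squares = uniform_filter(img ** 2, size=window_size)
    return np.maximum(mean_of_squares - mean ** 2, 0)  # Var(X) = E[X^2] - E[X]^2
```
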
The next tool used is a brightness filter.  This works by adjusting the pixel brightness values so that each line of constant range has the same average.  The input image is the same one that was input for the texture analysis.  The resulting image is labeled "Brightness" in the results section.
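
As a rough sketch of that range-line balancing, assuming each image column is a line of constant range (in practice this depends on the look geometry):

```python
import numpy as np

def balance_range_lines(img):
    """Rescale each column so every line of constant range has the
    same average brightness as the whole image."""
    img = img.astype(np.float64)
    col_means = img.mean(axis=0)     # average of each range line (column)
    target = img.mean()              # image-wide average
    return img * (target / np.maximum(col_means, 1e-12))  # broadcast per column
```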



Results:

The despeckled images appear less grainy the more times they are run through the filter.  Despeckle 1 shows fine grains of speckling, and after it has been run through the tool a few more times the speckles appear much larger and blended together.  Running the image through the tool does not necessarily make it clearer; the image can actually become distorted the more times it is run through the speckle filter.

Despeckle 1
Despeckle 2

Despeckle 3

The following image is the image enhanced using the Wallis adaptive filter.  It appears crisper and cleaner than the input image, Despeckle 4.

Enhanced
The texture analysis tool highlighted all the areas that have a lot of texture.  It is hard to tell what anything is when looking at the resulting image alone, but pairing it with the input image helps to make sense of what the high-texture areas are.

Texture
The brightness tool appears to have lightened many of the darker areas of the image.  When comparing this to the input image, it is evident in the upper left corner that many of the pixels are brighter.

Brightness



Sources:

Cyril Wilson

Tuesday, April 24, 2018

Lab 9: Hyperspectral Remote Sensing

Goal:

The goal of this lab is to gain knowledge of the processing and identification of features in hyperspectral remotely sensed data.  This lab will go through image spectrometry, hyperspectral images, and selected spectral processing basics.  Hyperspectral remote sensing is used when the user needs very narrow, specific bandwidths, for example to distinguish different types of plants.

Methods:

This lab was done using the ENVI program.  An image was brought in and a few bands were examined for a few predefined areas.  These areas contain different types of minerals from around the area of interest.  The plots below show the spectral signature of each of these minerals.



The next step is to compare the actual spectral signatures with the signatures in the spectral library for each of these minerals.  The image below compares the spectral signatures from the library with the signatures collected from the image.  It can be seen that the signatures collected from the image have less fluctuation in the shorter wavelengths.  This is most likely because the image signatures are affected by atmospheric scattering and absorption.
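
A standard way to quantify this kind of image-versus-library comparison is the spectral angle between two signatures, where a smaller angle means a closer match; ENVI exposes this idea as the Spectral Angle Mapper.  A minimal sketch (the spectra passed in are placeholders):

```python
import numpy as np

def spectral_angle(image_spectrum, library_spectrum):
    """Angle (radians) between two spectra treated as vectors;
    insensitive to overall brightness differences."""
    a = np.asarray(image_spectrum, dtype=np.float64)
    b = np.asarray(library_spectrum, dtype=np.float64)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# e.g. spectral_angle(image_sig, library_sig) == 0.0 for identical shapes
```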


In order to atmospherically correct this image, a process called FLAASH will be used; this is a tool within ENVI.  The following images show different land cover areas and their spectral profiles before and after correction.

Vegetation
Urban

Water
It can be seen that the spectral profiles change for these areas after atmospheric correction.  They become smoother and more accurate as the issues caused by scattering are corrected.

The last thing done in this lab was to use a tool called the Vegetation Index Calculator.  This tool allows the user to show which areas have stressed vegetation and which do not.  Many different types of analysis can be done using the same process as the Vegetation Index Calculator, such as Fire Fuel, Agricultural Stress, and Forest Health.
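
These analyses are built on band-ratio vegetation indices such as NDVI, which contrasts near-infrared and red reflectance; stressed or sparse vegetation scores lower.  A minimal sketch (the threshold at the end is an illustrative value, not from the lab):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Healthy vegetation -> values near 1; water/urban -> near or below 0."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / np.maximum(nir + red, 1e-12)

# stressed = ndvi(nir_band, red_band) < 0.3   # example threshold only
```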




Results:

The following image shows the results of the forest stress tool.  The areas in red are stressed and the areas in blue are not.  The blue areas on the left of the image are a forest, while the area on the right is urban, which explains much of why the areas on the right appear stressed.

AgStress


Sources:

Cyril Wilson

Tuesday, April 17, 2018

Lab 8: Advanced Classifiers 2

Goal:

In the previous lab, object based classification was used to get a higher classification accuracy than what was produced by the supervised and unsupervised classification techniques used earlier in the semester.  This lab will go over expert classifiers, which are used to enhance the results of the previous classifications.




Methods:

To use an expert system classifier, the user needs a classified image they wish to make more accurate.  To do this, the user can apply ancillary data to the image.  This can be done using the Knowledge Engineer tool in ERDAS Imagine.



The rules being set basically say that if the class assigned in the already-classified image is not what it should actually be, the pixel is given a value of 1, which then changes it to a different classification.
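
As a minimal sketch of this kind of rule outside Knowledge Engineer, assume a class map where 2 = agriculture and 3 = forest, plus an ancillary slope raster (all class codes and the 15-degree threshold are hypothetical, not the lab's actual rule):

```python
import numpy as np

AGRICULTURE, FOREST = 2, 3   # hypothetical class codes

def apply_expert_rule(class_map, slope):
    """If a pixel is labeled agriculture but sits on steep terrain,
    flag it (value 1) and reassign it, since crops are unlikely there."""
    fires = (class_map == AGRICULTURE) & (slope > 15)   # rule "fires" -> 1
    return np.where(fires, FOREST, class_map)

# Placeholder rasters: a 100 x 100 class map and slope grid (degrees)
class_map = np.random.randint(1, 6, (100, 100))
slope = np.random.rand(100, 100) * 40
corrected = apply_expert_rule(class_map, slope)
```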

The next step is to use a neural network as an expert classifier to create an image with a more accurate classification.  This will be done using the ENVI program's neural net tool.  Within the tool, the image is input and the parameters are set.
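
As a rough stand-in for a neural net classifier outside ENVI, here is a sketch using scikit-learn's multilayer perceptron on per-pixel band values (the layer sizes and all data are random placeholders; ENVI's tool has its own parameters):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder training data: 500 sample pixels, 6 bands, 5 classes
X_train = np.random.rand(500, 6)
y_train = np.random.randint(0, 5, 500)

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# Classify a whole image: reshape (rows, cols, bands) -> (pixels, bands)
image = np.random.rand(100, 100, 6)      # placeholder image
rows, cols, bands = image.shape
predicted = net.predict(image.reshape(-1, bands)).reshape(rows, cols)
```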






Results:

This is the final image that was classified using the expert systems.  The output of the classification looks quite good; most of the land cover areas are correctly classified on a qualitative level.  To see how accurate this image actually is, an accuracy assessment would need to be done.








Sources:

Cyril Wilson

Wednesday, April 4, 2018

Lab 7: Object Based Classification

Goal:

In the past few labs, pixel based classification was used for supervised and unsupervised classification.  This lab will explore cutting-edge object based classification using the eCognition software.  Object based classification integrates both spectral and spatial information in order to classify objects.  This lab will go through the process of two different methods of object based classification: random forest and support vector machines.





Methods:

eCognition is a groundbreaking software package capable of very advanced classification.  It is the software that will be used in this lab for both support vector machines and random forest.

To begin the process, the user needs to bring in a satellite image.  In this case a 2000 Landsat image of the Eau Claire and Chippewa Valley area was used.  The user then selects the band combination they wish to use; for this lab a 432 false color IR combination will be used.  The next step is segmentation and sample collection.  Segmentation breaks the image up into small pieces; these are the objects.  The goal is for each object to contain only one land use/land cover class.  To create these objects the Process Tree tool needs to be used.  In the process tree, the user begins by naming the process and then specifying what kind of process needs to be run.  The image below shows what this looks like.



A few settings are very important when generating objects.  In this case multiresolution segmentation was used with a scale parameter of 9, a shape of 0.3, and a compactness of 0.5.  This creates the objects on the image.  The image below shows what the resulting objects look like.
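
eCognition's multiresolution segmentation is proprietary, but superpixel segmentation is an analogous open-source step, with a compactness setting that plays a similar role in balancing object shape against spectral homogeneity.  A sketch using scikit-image's SLIC (all parameter values are illustrative and not equivalent to eCognition's):

```python
import numpy as np
from skimage.segmentation import slic

# image: (rows, cols, bands) array, e.g. the 4-3-2 false color composite
image = np.random.rand(512, 512, 3)  # placeholder for the Landsat subset

# Each returned label is one "object"; compactness trades spectral
# homogeneity against smooth, compact object shapes.
objects = slic(image, n_segments=2000, compactness=10.0, channel_axis=-1)
print("number of objects:", np.unique(objects).size)
```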


The next step is to create classes.  This is done by using Class > Class Hierarchy.  Once this tab is open, the user simply right-clicks and adds the classes by naming them and selecting a color.  The next step is to collect samples for each of the classes, which is similar to what is done in supervised classification.  The user selects the class in their class hierarchy and then double-clicks an object that contains only the area they wish to include in the class.  The following table shows the minimum number of samples that should be taken for each class.


This is the point where the process diverges for random forest and support vector machines.  The first method is random forest.  To begin, the random forest classifier needs to be trained.  To do this, a variable is created for the random forest (RF) classifier by adding another line in the process tree.  The following image shows the parameters given to the classifier.



A set of rules is also given to the classifier; these are shown below.


The classifier is now run.  The following image is the initial classified image.


A lot of the urban areas were not classified correctly, so these areas were manually changed.

The next method is support vector machines.  This method uses a different algorithm to separate the classes.  The only difference when running this method is changing the classifier to support vector machines; the same objects and samples were used.  The results from both of these methods can be found in the results section.
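
Outside eCognition, both classifiers can be sketched with scikit-learn on per-object features such as each object's mean band values.  Everything below, from the feature table to the parameters, is an illustrative placeholder:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Placeholder per-object features: mean value of each band within the object
X_train = np.random.rand(200, 4)          # 200 sample objects, 4 bands
y_train = np.random.randint(0, 5, 200)    # their class labels

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)

# Apply either trained model to all remaining (unlabeled) objects
X_all = np.random.rand(2000, 4)
rf_classes = rf.predict(X_all)
svm_classes = svm.predict(X_all)
```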

The final image to be classified is an Unmanned Aerial Systems (UAS) image taken by the University of Wisconsin-Eau Claire drone fleet.  The process to classify this image is the same as for satellite images; in this case, the SVM method was used.  The only big difference for the UAS image was the scale parameter, which had to be raised all the way to 200 because the image is at such a large scale.  The resulting classified image can be seen in the results section.



Results:








Sources:

Thursday, March 29, 2018

Lab 6: Digital Change Detection

Goals:

The goal of this lab is to explore digital change detection, which identifies changes in a landscape by comparing pixel information between two images.  This is typically done when comparing land use and land cover.  Digital change detection is an important tool for monitoring environmental and socioeconomic phenomena through remotely sensed data.




Methods:

Write Function Memory Insertion is a simple and powerful method of visualizing changes that occur in a landscape over time.  It is performed by stacking the red band from a newer image, in this case a 2011 Landsat image of the Eau Claire area, with the red band and near infrared band of an older image, in this case a 1991 image of the same area.  These images are stacked using the Layer Stack tool in ERDAS Imagine.  The user then sets the layer combination to highlight the areas that have changed between the two images.  The following image shows which image was used for each color.
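
A minimal numpy sketch of the same idea, assuming one common color assignment (R = 2011 red, G = 1991 red, B = 1991 NIR; the lab's actual assignment is shown in the image below):

```python
import numpy as np

# Each band is a (rows, cols) array from the co-registered images.
# Changed areas pop out in red because the 2011 red band differs from
# the 1991 bands driving the green and blue channels.
def wfm_composite(red_2011, red_1991, nir_1991):
    stack = np.dstack([red_2011, red_1991, nir_1991]).astype(np.float64)
    # Scale each channel to 0-1 for display
    for i in range(3):
        band = stack[..., i]
        stack[..., i] = (band - band.min()) / max(band.max() - band.min(), 1e-12)
    return stack  # show with matplotlib: plt.imshow(wfm_composite(...))
```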



The next method of change detection is done by calculating quantitative changes in multidate classified images.  The first step of this method is to take the number of pixels in each land cover/land use class and convert it into hectares.  This is done by referencing the attribute table of the image, taking the histogram pixel count for each class, converting it first to square meters by multiplying it by 900 (each Landsat pixel is 30 m x 30 m), and then multiplying the square meters by 0.0001.  This gives the number of hectares for each land cover/land use class.  The following images show the attribute table in ERDAS and the conversion table in Excel.



From the final hectare values, the percent change is calculated by taking the 2011 hectare value for each class, subtracting the 2001 value, dividing by the 2011 value, and multiplying by 100.  This gives the percent change from 2001 to 2011.  The following table shows the percent change for each class.
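
As a quick sketch of the arithmetic just described (assuming 30 m Landsat pixels; the pixel counts are made-up examples):

```python
# Convert a class's pixel count to hectares, then compute percent change
# between the two dates, following the steps described above.
PIXEL_AREA_M2 = 30 * 30      # one Landsat pixel is 30 m x 30 m = 900 m^2
M2_PER_HECTARE = 10_000      # 1 hectare = 10,000 m^2 (multiply by 0.0001)

def pixels_to_hectares(pixel_count):
    return pixel_count * PIXEL_AREA_M2 / M2_PER_HECTARE

# Hypothetical pixel counts for one class in 2001 and 2011
ha_2001 = pixels_to_hectares(250_000)   # 22,500 ha
ha_2011 = pixels_to_hectares(230_000)   # 20,700 ha

# Percent change as described in this lab: (2011 - 2001) / 2011 * 100
# (the conventional formula divides by the 2001 base value instead)
pct_change = (ha_2011 - ha_2001) / ha_2011 * 100
print(f"{pct_change:.2f}% change from 2001 to 2011")
```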


The next section of the lab goes through the process of doing change detection in the Milwaukee Metropolitan Statistical Area (MSA) for the Department of Natural Resources.  The DNR is interested in the changes in urban/built-up areas, forest, wetland, open space, and agricultural lands.  This process does change detection over a 10-year period using the Wilson-Lula algorithm, which compares the land cover class pairs of most concern between the two image dates.  In this case, the following image shows which classes were compared to which.


To achieve this, the model builder in ERDAS Imagine is utilized.


The 2001 and 2011 images are the two rasters at the top of the model.  These images are then run through a function that extracts just the areas of the image belonging to the desired class.  This is done using an either/if statement: if the land cover class is the desired class number, the pixel is given a value of 1; if not, it is given a value of 0.  This is what the first set of functions does, and the outputs are saved in temporary raster files.  The next set of functions shows the areas that have changed.  This is done by using the bitwise & (AND) function with the two outputs from the first set of functions.  The outputs are then saved as individual rasters.
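
The same pair of functions can be sketched in numpy; the class codes below are hypothetical stand-ins for the codes assigned in the classified images:

```python
import numpy as np

# Placeholder classified rasters (in practice, read from the temporary files)
classified_2001 = np.random.randint(1, 6, size=(100, 100))
classified_2011 = np.random.randint(1, 6, size=(100, 100))

def class_mask(class_map, class_code):
    """First set of functions: 1 where the pixel is the desired class, else 0."""
    return (class_map == class_code).astype(np.uint8)

# Hypothetical codes: agriculture = 4 in 2001, urban/built-up = 1 in 2011
ag_2001 = class_mask(classified_2001, 4)
urban_2011 = class_mask(classified_2011, 1)

# Second set: bitwise AND marks pixels that were agriculture in 2001
# AND urban in 2011 -- the ag-to-urban change areas.
ag_to_urban = ag_2001 & urban_2011
```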


Results:






Sources:

Cyril Wilson

Monday, March 12, 2018

Lab 5: Classification Accuracy Assessment

Goals:

The main objective of this lab is to introduce methods by which the accuracy of a classified image is determined.  Accuracy assessments will be done on classified images.  Accuracy assessment is very important when publishing classified data; if an image has poor accuracy it is not ethical to publish it, and it should be reclassified in a more thorough manner.


Methods:
The accuracy assessment will be completed on the unsupervised image that was created in lab 3; for more information please refer back to lab 3.  The first step in finding the accuracy of the classification is to collect reference samples.  This was done using the accuracy assessment Add Random Points tool in ERDAS.  The following image shows the settings used for generating random points.


In total, 125 points were created and a stratified random sample was used.  This means a certain number of points is included in each of the classification classes; the minimum was set to 15 points for each class.  The following image shows the random points on the reference image.



The next step was to go through and determine which land cover class each point falls in on the reference image.  This was done by showing ten points at a time, zooming in on each, and filling in its actual category in the accuracy assessment table.  The following image shows point number 1 and the corresponding table where the actual land cover class was recorded.  Once a point's classification was recorded, the point turned yellow so the user knew it was already done.  The process was repeated for all 125 points.


The next step is to produce the accuracy report.  This is done using the Produce Accuracy Report tool in ERDAS, which is simply a button in the accuracy assessment table that creates a report.  From this report, an error matrix and accuracy measures were transferred to an Excel table to be more visually appealing.  The resulting tables are displayed in the results section of this blog.
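
The numbers in that report can be reproduced directly from the reference and classified labels of the sample points.  A minimal sketch (the label arrays are random placeholders):

```python
import numpy as np

def accuracy_report(reference, classified, n_classes):
    """Build an error matrix and the standard accuracy measures from
    the reference and classified labels of the assessment points."""
    matrix = np.zeros((n_classes, n_classes), dtype=int)
    for ref, cls in zip(reference, classified):
        matrix[cls, ref] += 1          # rows = classified, columns = reference

    overall = np.trace(matrix) / matrix.sum()          # overall accuracy
    producers = np.diag(matrix) / matrix.sum(axis=0)   # per class, by column
    users = np.diag(matrix) / matrix.sum(axis=1)       # per class, by row
    return matrix, overall, producers, users

# e.g., 125 points with 5 classes coded 0-4:
ref = np.random.randint(0, 5, 125)      # placeholder reference labels
cls = np.random.randint(0, 5, 125)      # placeholder classified labels
matrix, overall, prod, user = accuracy_report(ref, cls, 5)
print(f"overall accuracy: {overall:.2%}")
```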

The last step of this lab was going through the same accuracy assessment process with the supervised classification image created in lab 4.  A comparison of the accuracies is showcased in the results section.


Results:



The unsupervised image got an overall accuracy of 48.80%, which turns out to be very poor.  This is likely the result of using unsupervised classification with only 5 classes, and perhaps because this is the first time the user has done this.  There could have been error both in the classification portion of the process and in conducting the accuracy assessment.  Most importantly, the unsupervised classification method does not allow the user to customize their classes, so they cannot train the computer on which pixels belong in which class.





The supervised image resulted in 62.69% accuracy.  This is still not good, but it is better than the unsupervised image.  While conducting these assessments it was easy to notice that agricultural fields that are fallow or have weak crops appear in many different shades of green and brown.  These light green-brown areas were often confused with the bare soil and urban classes.  To get a better result, the classification would have to be redone with more meticulous selection of samples when training the classifier.

Sources:

Cyril Wilson

Thursday, March 8, 2018

Lab 4: Pixel-Based Supervised Classification

Goal and Background:

The extraction of information from remotely sensed images is a very important aspect of the remote sensing discipline.  One method of extracting information is land cover classification.  In the previous lab, unsupervised classification was used; in this lab, supervised classification will be used.  Supervised classification involves more user input and oversight than unsupervised classification, as the name suggests.

Methods:

The first step in this method is to collect samples from the image to "train" the classifier.  The following image (figure 1) shows the number of samples that should be collected from each class.

Now the user must collect spectral signatures.  This is done using the Signature Editor and polygon tool in ERDAS.  The following image shows what a polygon and its corresponding signature look like in the signature editor.



In total, 50 signatures were collected from around the image.


To ensure the signatures are accurate, each one should be examined in the profile view.


Looking at the water signatures, they match the expected signatures of non-turbid water.  This check is done for each group of spectral signatures.



The classes are then analyzed to make sure there are no bad samples.  This is done by examining the histograms, the image alarm, and the signature separability.  If bad or outlier signatures are found, they should be deleted from the project and new signatures should be taken in their place.  Once that is done, the signatures are combined into their respective classes.  The following image shows the spectral signatures of the combined classes.



The next step is to actually perform the supervised classification now that the classes have been trained.  This is done using the supervised classification tool in ERDAS.  The user inputs their image and the classification scheme they just created.
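
ERDAS offers several decision rules for this step; a common one is maximum likelihood, which models each trained class as a Gaussian and assigns every pixel to the most likely class.  A compact sketch of that rule (assuming maximum likelihood was the decision rule used; the training data are random placeholders):

```python
import numpy as np
from scipy.stats import multivariate_normal

def max_likelihood_classify(pixels, class_samples):
    """pixels: (n_pixels, n_bands); class_samples: dict of
    class_id -> (n_samples, n_bands) training spectra."""
    scores = []
    class_ids = sorted(class_samples)
    for cid in class_ids:
        samples = class_samples[cid]
        mean = samples.mean(axis=0)              # class mean vector
        cov = np.cov(samples, rowvar=False)      # class covariance matrix
        scores.append(multivariate_normal.logpdf(pixels, mean, cov))
    # Assign each pixel to the class with the highest log-likelihood
    return np.array(class_ids)[np.argmax(scores, axis=0)]

# Placeholder: two classes with 50 training spectra of 6 bands each
train = {1: np.random.rand(50, 6), 2: np.random.rand(50, 6) + 0.5}
labels = max_likelihood_classify(np.random.rand(1000, 6), train)
```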

Finally, a visually appealing map was created in ArcMap with the final raster image.


Results:

The supervised classification method turned out okay, except that there was a lot of overlap between the forest and agricultural areas.  This is because the samples taken had similar spectral signatures in the first 3 bands.  There is also a lot of overlap between the urban and bare soil areas.  This can be fixed using more advanced methods of classification.

The following image is the unsupervised classification that was created in lab 3.  The supervised and unsupervised classifications produced very different results: most of the unsupervised map is agriculture, while most of the supervised map is forest.


In later labs, more advanced methods of classification will be explored that will fix some of the problems these simple methods of classification have.


Sources:

Cyril Wilson