Tuesday, April 24, 2018

Lab 9: Hyperspectral Remote Sensing

Goal:

The goal of this lab is to gain knowledge of the processing and identification of features in hyperspectral remotely sensed data.  This lab will go through image spectrometry, hyperspectral images, and selected spectral processing basics.  Hyperspectral remote sensing is used when the user needs to distinguish features such as different types of plants, which requires very narrow, specific bandwidths.

Methods:

This lab was done using the ENVI program.  An image was brought in and a few bands were examined for a few predefined areas.  These areas represent different types of minerals found around the area of interest.  The plots below show the spectral signature of each of these minerals.
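Outside of ENVI, the same idea of pulling a spectral signature out of a region of interest can be sketched in Python with numpy and matplotlib.  The cube, wavelengths, and region locations below are simulated placeholders, not the actual lab data.

    import numpy as np
    import matplotlib.pyplot as plt

    # Simulated hyperspectral cube: rows x cols x bands of reflectance values.
    # In practice this would be read from the ENVI file; here it is random data.
    rows, cols, n_bands = 100, 100, 224
    cube = np.random.rand(rows, cols, n_bands)
    wavelengths = np.linspace(400, 2500, n_bands)  # placeholder band centers in nm

    # Placeholder regions of interest (row/column slices) for two mineral areas.
    rois = {
        "mineral_A": (slice(10, 20), slice(10, 20)),
        "mineral_B": (slice(60, 70), slice(40, 50)),
    }

    # Average the spectra over each region to get its spectral signature.
    for name, (r, c) in rois.items():
        signature = cube[r, c, :].reshape(-1, n_bands).mean(axis=0)
        plt.plot(wavelengths, signature, label=name)

    plt.xlabel("Wavelength (nm)")
    plt.ylabel("Reflectance")
    plt.legend()
    plt.show()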



The next step is to compare the spectral signatures collected from the image with signatures from the spectral library for each of these minerals.  The image below compares the library signatures with the signatures collected from the image.  It can be seen that the library signatures have less fluctuation in the shorter wavelengths.  This is most likely because the signatures from the image are affected by atmospheric scattering and absorption.
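One common way to put a number on how closely an image signature matches a library signature is the spectral angle between the two reflectance vectors.  The sketch below assumes two equal-length spectra and is only an illustration, not the exact comparison ENVI performs.

    import numpy as np

    def spectral_angle(a, b):
        # Angle (in radians) between two spectra; smaller means more similar.
        cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos_theta, -1.0, 1.0))

    # Placeholder spectra: one taken from the image, one from a spectral library.
    image_spectrum = np.random.rand(224)
    library_spectrum = np.random.rand(224)

    print("Spectral angle:", round(spectral_angle(image_spectrum, library_spectrum), 3))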


In order to atmospherically correct these images, a process called FLAASH will be used.  This is a tool within ENVI.  The following images are different land cover areas and their spectral profiles before and after correction.

Vegetation
Urban

Water
It can be seen that the spectral profiles for these areas change after atmospheric correction.  They become smoother and more accurate because issues caused by scattering have been corrected.
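FLAASH itself is a proprietary radiative transfer tool inside ENVI, so it is not reproduced here.  A much simpler dark-object subtraction, sketched below on simulated values, gives a feel for the basic idea of removing the path radiance that scattering adds to each band.

    import numpy as np

    # Simulated radiance cube (rows x cols x bands); values are placeholders.
    cube = np.random.rand(100, 100, 224) * 100 + 20

    # Dark-object subtraction: assume the darkest pixel in each band should be
    # close to zero, so whatever signal it carries is treated as atmospheric
    # path radiance and removed from the entire band.
    dark_object = cube.reshape(-1, cube.shape[-1]).min(axis=0)
    corrected = np.clip(cube - dark_object, 0, None)

    print("Per-band offsets removed (first 5 bands):", np.round(dark_object[:5], 2))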

The last thing done in this lab was to use a tool called the Vegetation Index Calculator.  This tool allows the user to show which areas have stressed vegetation and which do not.  Many different types of analysis can be done using the same process as the Vegetation Index Calculator, such as fire fuel, agricultural stress, and forest health.
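These index tools are built from ratios of bands such as red and near-infrared.  As a simple illustration, the sketch below computes NDVI on simulated bands and flags low values as potentially stressed; the actual ENVI tools use more specialized indices, and the threshold here is hypothetical.

    import numpy as np

    def ndvi(nir, red):
        # Normalized Difference Vegetation Index; low values over vegetated
        # areas can indicate stress (or simply bare soil / urban cover).
        return (nir - red) / (nir + red + 1e-10)  # epsilon avoids division by zero

    # Placeholder red and near-infrared reflectance bands.
    red = np.random.rand(100, 100)
    nir = np.random.rand(100, 100)

    stress_mask = ndvi(nir, red) < 0.3  # hypothetical threshold for "stressed"
    print("Pixels flagged as potentially stressed:", int(stress_mask.sum()))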




Results:

The following image shows the results of the forest stress tool.  The areas in red are stressed and the areas in blue are not.  The blue areas on the left of the image are forest and the area on the right is urban, which explains much of why those areas appear stressed.

AgStress


Sources:

Cyril Wilson

Tuesday, April 17, 2018

Lab 8: Advanced Classifiers 2

Goal:

In the previous lab, object-based classification was used to get a higher classification accuracy than what was produced by the supervised and unsupervised classification techniques used earlier in the semester.  This lab will go over expert classifiers, which are used to enhance the results of the previous classifications.




Methods:

To use an expert system classifier, the user needs a classified image they wish to classify more accurately.  To make a classified image more accurate, the user can apply ancillary data to it.  This can be done using the Knowledge Engineer tool in ERDAS Imagine.



The rules being set essentially say that if a pixel's existing class disagrees with what the ancillary data indicates it should be, the pixel is given a value of 1.  This flag is then used to change it to a different classification.
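The Knowledge Engineer itself is a graphical tool, but conceptually the rule boils down to a conditional reassignment.  The sketch below shows that logic in Python with numpy; the class codes and ancillary layer are hypothetical stand-ins.

    import numpy as np

    # Hypothetical class codes in the existing classification.
    URBAN, AGRICULTURE = 3, 4

    # Placeholder classified image and ancillary layer (for example, zoning data
    # rasterized to the same grid), both as integer arrays.
    classified = np.random.randint(1, 6, size=(100, 100))
    ancillary = np.random.randint(1, 6, size=(100, 100))

    # Rule: where the image says "agriculture" but the ancillary data says the
    # area is urban, flag the pixel (value 1) so it can be reassigned to urban.
    flag = np.where((classified == AGRICULTURE) & (ancillary == URBAN), 1, 0)
    corrected = np.where(flag == 1, URBAN, classified)

    print("Pixels reassigned:", int(flag.sum()))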

The next step is to use a neural network as an expert classifier to create an image with a more accurate classification.  This will be done using ENVI's neural net tool.  Within the tool, the image is input and parameters are set.
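ENVI's neural net tool is configured through its own dialog, so the sketch below only illustrates the general idea with scikit-learn's MLPClassifier: a small network learns class labels from per-pixel band values.  The training data and layer sizes are made up for the example, not ENVI's settings.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Simulated training data: each row is a pixel's band values, each label a class.
    X_train = np.random.rand(500, 6)           # 500 pixels, 6 bands (placeholder)
    y_train = np.random.randint(0, 5, 500)     # 5 land cover classes (placeholder)

    # One hidden layer; these parameters are illustrative only.
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    net.fit(X_train, y_train)

    # Classify a whole image by flattening it to (n_pixels, n_bands).
    image = np.random.rand(100, 100, 6)
    predicted = net.predict(image.reshape(-1, 6)).reshape(100, 100)
    print(predicted.shape)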






Results:

This is the final image that was classified using the expert systems.  The output of the classification looks quite good; most of the land cover areas are correctly classified on a qualitative level.  To see how accurate this image actually is, an accuracy assessment would need to be done.
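Such an assessment compares the classified values at a set of reference (ground truth) points against their known classes.  A minimal sketch with scikit-learn, using simulated reference points, is shown below.

    import numpy as np
    from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

    # Placeholder reference labels (e.g., from ground truth points) and the
    # classes the expert system assigned at those same locations.
    reference = np.random.randint(0, 5, 200)
    classified = np.random.randint(0, 5, 200)

    print(confusion_matrix(reference, classified))
    print("Overall accuracy:", accuracy_score(reference, classified))
    print("Kappa:", cohen_kappa_score(reference, classified))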








Sources:

Cyril Wilson

Wednesday, April 4, 2018

Lab 7: Object Based Classification

Goal:

In the past few labs, pixel-based classification was used to do supervised and unsupervised classification.  This lab will explore cutting-edge object-based classification using the eCognition software.  Object-based classification integrates both spectral and spatial information in order to classify objects.  This lab will go through the process of doing two different methods of object-based classification: random forest and support vector machines.





Methods:

eCognition is a groundbreaking software package capable of very advanced classification.  It is the software that will be used in this lab for both support vector machines and random forest.

To begin the process, the user needs to bring in a satellite image.  In this case, a 2000 Landsat image of the Eau Claire and Chippewa Valley area was used.  The user then selects the band combination they wish to use; for this lab, a 4-3-2 false color IR combination was used.  The next step is segmentation and sample collection.  Segmentation breaks the image up into small pieces; these pieces are the objects.  The goal is for each object to contain only one land use/land cover class.  To create these objects, the Process Tree tool needs to be used.  In the process tree, the user begins by naming the process and then specifying what kind of process needs to be run.  The image below is what this looks like.



A few settings are very important when generating objects.  In this case, multiresolution segmentation was used with a scale parameter of 9, a shape of 0.3, and a compactness of 0.5.  This creates objects on the image.  The image below is what the resulting objects look like.
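eCognition's multiresolution segmentation algorithm is proprietary, but the general idea of grouping pixels into spectrally homogeneous objects can be sketched with an open-source segmenter such as SLIC from scikit-image.  The parameters below are only loose analogues of eCognition's scale, shape, and compactness settings, and the image is simulated.

    import numpy as np
    from skimage.segmentation import slic

    # Placeholder 3-band (e.g., 4-3-2 false color) image scaled to 0-1.
    image = np.random.rand(200, 200, 3)

    # SLIC groups pixels into roughly homogeneous segments ("objects").
    # n_segments and compactness only roughly correspond to eCognition's
    # scale and compactness parameters.
    segments = slic(image, n_segments=500, compactness=10, start_label=1)
    print("Number of objects:", segments.max())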


The next step is to create classes.  This is done by using Class > Class Hierarchy.  Once this tab is open, the user simply right-clicks and adds the classes by naming them and selecting a color.  The next step is to collect samples for each of the classes.  This is similar to what is done in supervised classification: the user selects the class in the class hierarchy, then double-clicks an object that contains only the area they wish to include in the class.  The following table shows the minimum number of samples that should be taken for each class.


This is the point at which the process changes between random forest and support vector machines.  The first method to be applied is random forest.  To begin, the random forest needs to be trained.  To do this, a variable is created for the random forest (RF) classifier by adding another line in the process tree.  The following image shows the parameters given to the classifier.



A set of rules is also given to the classifier; these are shown below.


The classifier is now run.  The following image is the initial classified image.


A lot of the urban areas were not classified correctly, so these areas were manually changed.
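Inside eCognition this all happens through the process tree, but the training and prediction step itself is standard random forest classification.  A minimal scikit-learn sketch on hypothetical per-object features (for example, mean band values and a shape metric) is shown below.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical per-object features for the sample objects collected in
    # eCognition, plus the class label assigned to each sample.
    X_samples = np.random.rand(120, 5)          # 120 sample objects, 5 features
    y_samples = np.random.randint(0, 5, 120)    # 5 land cover classes

    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X_samples, y_samples)

    # Classify every object in the scene from its feature vector.
    X_all_objects = np.random.rand(5000, 5)
    object_classes = rf.predict(X_all_objects)
    print(object_classes[:10])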

The next method is support vector machines (SVM).  This method uses a different algorithm when grouping objects into classes.  The only difference in the workflow is changing the classifier to support vector machines; the same objects and samples were used.  The results from both of these methods can be found in the results section.
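In scikit-learn terms, that swap amounts to replacing the random forest with a support vector classifier while keeping the same kind of object samples; the kernel and C value below are illustrative choices, not eCognition's settings.

    import numpy as np
    from sklearn.svm import SVC

    # Same kind of per-object samples as in the random forest sketch above.
    X_samples = np.random.rand(120, 5)
    y_samples = np.random.randint(0, 5, 120)

    svm = SVC(kernel="rbf", C=1.0)
    svm.fit(X_samples, y_samples)
    print(svm.predict(np.random.rand(5, 5)))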

The final image to be classified is an Unmanned Aerial System (UAS) image taken by the University of Wisconsin-Eau Claire drone fleet.  The process to classify this image is the same as for satellite imagery; in this case, the SVM method was used.  The only big difference for the UAS image was the scale parameter, which had to be raised all the way to 200 because the image is at such a large scale.  The resulting classified image can be seen in the results section.



Results:








Sources: