Thursday, March 29, 2018

Lab 6: Digital Change Detection

Goals:

The goal of this lab is to explore digital change detection.  Digital change detection compares changes in a landscape by comparing pixel information between two images, typically to track land use and land cover.  It is an important tool for monitoring environmental and socioeconomic phenomena through remotely sensed data.




Methods:

Write function memory insertion is a simple and powerful method of visualizing changes that occur in a landscape over time.  It is performed by stacking the red band from a newer image, in this case a 2011 Landsat image of the Eau Claire area, with the red and near-infrared bands of an older image, in this case a 1991 image of the same area.  These bands are stacked using the layer stack tool in ERDAS Imagine, and the user then sets the layer combination to highlight the areas that have changed between the two dates.  The following image shows which band was assigned to which color.


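As a rough analogue outside of ERDAS, the same band stack can be sketched in Python with the numpy and rasterio libraries.  The file names below are hypothetical, and the band numbers assume Landsat TM numbering (band 3 = red, band 4 = near infrared).

import numpy as np
import rasterio

# Red band of the newer image, plus the georeferencing profile
with rasterio.open("eau_claire_2011.img") as src:
    red_2011 = src.read(3)
    profile = src.profile

# Red and near-infrared bands of the older image
with rasterio.open("eau_claire_1991.img") as src:
    red_1991 = src.read(3)
    nir_1991 = src.read(4)

# Stack the three layers; displayed with the 2011 red band in the red gun
# and the 1991 bands in green and blue, changed areas stand out in color
stack = np.stack([red_2011, red_1991, nir_1991])

profile.update(count=3, driver="GTiff")
with rasterio.open("wfmi_stack.tif", "w", **profile) as dst:
    dst.write(stack)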

The next method of change detection calculates quantitative changes between multidate classified images.  The first step is to take the number of pixels in each land cover/land use class and convert it to hectares.  This is done by referencing the attribute table of the image, taking the histogram pixel count for each class, converting it to square meters by multiplying by 900 (each 30 m x 30 m pixel covers 900 square meters), and then multiplying the square meters by 0.0001.  This gives the number of hectares for each land cover/land use class.  The following images show the attribute table in ERDAS and the conversion table in Excel.


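A small sketch of that conversion in Python, assuming 30 m Landsat pixels; the histogram counts below are hypothetical placeholders.

# Hypothetical histogram pixel counts from the attribute table
pixel_counts = {
    "water": 52000,
    "forest": 410000,
    "agriculture": 380000,
    "urban": 95000,
    "bare soil": 63000,
}

for lc_class, count in pixel_counts.items():
    sq_meters = count * 900        # each 30 m x 30 m pixel is 900 sq m
    hectares = sq_meters * 0.0001  # 10,000 sq m per hectare
    print(f"{lc_class}: {hectares:,.1f} ha")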

From the final hectare values, the percent change is calculated by taking the 2011 hectare value for each class, subtracting the 2001 value, dividing the difference by the 2011 value, and multiplying by 100.  This gives the percent change from 2001 to 2011.  The following table shows the percent change for each class.

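Written out in Python, with hypothetical hectare values:

def percent_change(ha_2011, ha_2001):
    # (2011 value - 2001 value) / 2011 value * 100, as described above
    return (ha_2011 - ha_2001) / ha_2011 * 100

print(percent_change(12500.0, 10900.0))  # about 12.8 percent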

The next section of the lab goes through the process of performing change detection in the Milwaukee Metropolitan Statistical Area (MSA) for the Department of Natural Resources.  The DNR is interested in changes in urban/built-up areas, forest, wetland, open space, and agricultural land.  Change detection is performed over a 10-year period using the Wilson-Lula algorithm, which compares each land cover class from one date to the classes from the other date with which the DNR is most concerned.  The following image shows which classes were compared to which.


To achieve this, the model builder in ERDAS Imagine is utilized.


The 2001 and 2011 images are the two rasters at the top of the model.  These images are run through a function that extracts just the areas belonging to the desired class, using an EITHER/IF statement: if a pixel's land cover class matches the value assigned to the desired class, it is given a value of 1; otherwise it is given a value of 0.  This is what the first set of functions does, and the outputs are saved as temporary raster files.  The next set of functions shows the areas that have changed, using the bitwise & (AND) function on the two outputs from the first set.  Each result is then saved as an individual raster.
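A minimal numpy sketch of the model's logic, with tiny hypothetical rasters and made-up class codes (82 for agriculture, 23 for urban): an EITHER/IF-style conditional turns each classified image into a 0/1 mask, and the bitwise AND keeps only the pixels that satisfy both masks.

import numpy as np

lc_2001 = np.array([[82, 82, 23],
                    [41, 82, 23],
                    [41, 41, 82]])
lc_2011 = np.array([[23, 82, 23],
                    [23, 82, 23],
                    [41, 23, 82]])

# EITHER 1 IF the pixel is the desired class OR 0 OTHERWISE
ag_2001 = np.where(lc_2001 == 82, 1, 0)
urban_2011 = np.where(lc_2011 == 23, 1, 0)

# Bitwise AND flags pixels that were agriculture in 2001 AND urban in 2011
ag_to_urban = ag_2001 & urban_2011
print(ag_to_urban)  # 1s mark agriculture-to-urban change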


Results:






Sources:

Cyril Wilson

Monday, March 12, 2018

Lab 5: Classification Accuracy Assessment

Goals:

The main objective of this lab is to introduce methods for determining the accuracy of a classified image.  Accuracy assessments will be performed on classified images.  Accuracy assessment is very important when publishing classified data: if an image has poor accuracy, it is not ethical to publish it, and it should be reclassified in a more thorough manner.


Methods:
The accuracy assessment will be completed on the unsupervised classification image that was created in lab 3; for more information, please refer back to lab 3.  The first step in finding the accuracy of the classification is to collect reference samples.  This was done using the add random points tool in the ERDAS accuracy assessment utility.  The following image shows the settings used for generating random points.


In total, 125 points were created using a stratified random sample, which guarantees that a certain number of points falls in each classification class.  The minimum was set to 15 points per class.  The following image shows the random points on the reference image.


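A minimal sketch of the stratified part of that sampling, using a hypothetical classified array: a fixed minimum of points is drawn inside each class, and the remainder is drawn at random across the image.

import numpy as np

rng = np.random.default_rng(42)
classified = rng.integers(1, 6, size=(100, 100))  # stand-in classified image

total_points, min_per_class = 125, 15
points = []

# Guarantee the minimum number of points inside every class
for c in np.unique(classified):
    rows, cols = np.nonzero(classified == c)
    picks = rng.choice(len(rows), size=min_per_class, replace=False)
    points += [(rows[i], cols[i]) for i in picks]

# Fill the remainder with points placed anywhere in the image
while len(points) < total_points:
    points.append(tuple(rng.integers(0, 100, size=2)))

print(len(points), "sample points")  # 125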

The next step was to go through and determine what land cover each point falls in on the reference image.  This was done by displaying ten points at a time, zooming in on each one, and recording its category in the accuracy assessment table.  The following image shows point number 1 and the corresponding table where the actual land cover class was recorded.  Once a point's classification was recorded, the point turned yellow so the user knew it had already been evaluated.  The process was repeated for all 125 points.


The next step is to produce the accuracy report, using the produce accuracy report tool in ERDAS; this is simply a button in the accuracy assessment table that creates a report.  From this report, the error matrix and accuracy figures were transferred to an Excel table to be more visually appealing.  The resulting tables are displayed in the results section of this blog.
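For reference, a minimal sketch of what the report computes, using a handful of hypothetical reference/classified pairs: the error matrix tabulates classified against reference labels, and overall accuracy is the diagonal total divided by the number of points.

import numpy as np

classes = ["water", "forest", "agriculture", "urban", "bare soil"]
reference  = ["forest", "forest", "water", "urban", "agriculture", "forest"]
classified = ["forest", "agriculture", "water", "bare soil", "agriculture", "forest"]

# Rows = classified label, columns = reference label
matrix = np.zeros((len(classes), len(classes)), dtype=int)
for ref, cls in zip(reference, classified):
    matrix[classes.index(cls), classes.index(ref)] += 1

overall = np.trace(matrix) / matrix.sum() * 100  # percent of points on the diagonal
print(matrix)
print(f"Overall accuracy: {overall:.2f}%")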

The last step of this lab was going through the same accuracy assessment process with the supervised classification image created in lab 4.  A comparison of the accuracies will be showcased in the results section.


Results:



The unsupervised image achieved an overall accuracy of 48.80%, which is very poor.  This is likely the result of using unsupervised classification with only 5 classes, and perhaps because this was the user's first attempt.  Error could have entered both during the classification itself and while conducting the accuracy assessment.  Above all, the unsupervised classification method does not allow the user to customize their classes, so they cannot train the computer on which pixels belong in which class.





The supervised image resulted in 62.69% accuracy.  This is still not good, but it is better than the unsupervised image.  While conducting these assessments, it was easy to notice the many shades of green and brown that agricultural fields take on when fallow or supporting weak crops.  These light green-brown areas were often confused with the bare soil and urban classes.  To get a better result, the classification would have to be redone with more meticulous selection of samples when training the classifier.

Sources:

Cyril Wilson

Thursday, March 8, 2018

Lab 4: Pixel-Based Supervised Classification

Goal and Background:

The extraction of information from remotely sensed images is a very important aspect of the remote sensing discipline.  One method of extracting information is land cover classification.  In the previous lab, unsupervised classification was used; in this lab, supervised classification will be used.  Supervised classification involves more user input and oversight than unsupervised classification, as the name suggests.

Methods:

The first step in this method is to collect samples from the image to "train" the classifier.  The following image (figure 1) shows the number of samples that should be collected from each class.

Next, the user must collect spectral signatures.  This is done using the signature editor and polygon tool in ERDAS.  The following image shows a polygon and its corresponding signature in the signature editor.



In total, 50 signatures were collected from around the image.
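At its core, each signature summarizes the training pixels inside a polygon; the mean value per band is the part shown in the signature editor plot.  A tiny sketch with hypothetical pixel values (rows = pixels, columns = bands):

import numpy as np

pixels = np.array([[52, 38, 27, 110, 64, 31],
                   [50, 36, 25, 118, 66, 29],
                   [55, 40, 30, 105, 60, 33]])

signature = pixels.mean(axis=0)  # one mean value per band
print(signature)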


To ensure the signatures are accurate, each of them should be examined in profile view.


Looking at the water signatures, they match the expected signatures of non-turbid water.  This check is done for each group of spectral signatures.



The classes are then analyzed to make sure there are no bad samples, by examining the histograms, the image alarm, and the signature separability.  If bad or outlier signatures are found, they should be deleted from the project and new signatures collected in their place.  Once that is done, the signatures are combined into their respective classes.  The following image shows the combined spectral signatures of the classes.



The next step is to actually perform the supervised classification, now that the classes have been trained.  This is done using the supervised classification tool in ERDAS, where the user inputs their image and the classification scheme they just created.
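The lab does not state which decision rule was applied; maximum likelihood is a common choice for pixel-based supervised classification, so the sketch below uses it as an assumption, with hypothetical two-band training signatures: fit a Gaussian to each class's training pixels, then assign every pixel to the class whose Gaussian gives it the highest likelihood.

import numpy as np
from scipy.stats import multivariate_normal

train = {
    "water":  np.array([[30, 15], [32, 14], [29, 16], [31, 13], [33, 15]]),
    "forest": np.array([[45, 120], [47, 118], [44, 122], [46, 117], [48, 121]]),
}

# One Gaussian (mean vector, covariance matrix) per trained class
models = {name: multivariate_normal(x.mean(axis=0), np.cov(x.T))
          for name, x in train.items()}

# Assign each pixel to the class with the highest likelihood
for px in np.array([[31, 14], [46, 119]]):
    best = max(models, key=lambda name: models[name].pdf(px))
    print(px, "->", best)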

Finally, a visually appealing map was created in ArcMap from the final raster image.


Results:

The supervised classification method turned out okay, except that there was considerable overlap between the forest and agricultural areas, because the samples taken had similar spectral signatures in the first 3 bands.  There is also considerable overlap between the urban and bare soil areas.  This can be fixed using more advanced methods of classification.

The following image is the unsupervised classification that was created in lab 3.  The supervised and unsupervised classifications produced very different results: most of the unsupervised map is agriculture, while most of the supervised map is forest.


In later labs, more advanced methods of classification will be explored that fix some of the problems these simple methods have.


Sources:

Cyril Wilson

Thursday, March 1, 2018

Lab 3: Unsupervised Classification

Goal and Background:

The goal of this lab is to perform unsupervised classification on a satellite image of Chippewa and Eau Claire counties.  Land cover classification is an important aspect of remote sensing because it provides valuable information and can be done with relative ease using today's technology.  There are many different methods, each with its own strengths and weaknesses, and their accuracy varies.  Unsupervised classification is the first type of classification covered in this class because many consider it the simplest for the user.

Methods:

In this lab, the unsupervised ISODATA classification method was used.  To begin, the user brings their image into ERDAS and opens the unsupervised classification tool, where they set the kind of classification they want to perform and other settings such as clustering options, iterations, and color scheme.  In this case, the number of iterations was set to 250, the convergence threshold to 0.95, the color scheme to approximate true color, and the number of classes to 10, so the tool produces a land cover classification with ten different classes (see the clustering sketch at the end of this section).  The next step is for the user to go through the ten classes and choose what category each fits into.


This is done by highlighting the class in a color like gold and assigning it a class name from the table above.  Once they are all categorized, classes that should be merged are merged together.  Google Earth Pro was used to help determine which classes were which.


This process was then repeated with 20 classes and a smaller convergence threshold.
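ISODATA itself is not part of common Python libraries, but scikit-learn's k-means, the clustering algorithm ISODATA extends with cluster splitting and merging, gives the flavor of the clustering step.  The file name and settings below are hypothetical stand-ins for the ERDAS options.

import rasterio
from sklearn.cluster import KMeans

with rasterio.open("ec_cc_landsat.img") as src:
    img = src.read()                    # shape: (bands, rows, cols)

bands, rows, cols = img.shape
pixels = img.reshape(bands, -1).T       # one row of band values per pixel

# 10 clusters, capped at 250 iterations, loosely mirroring the lab settings
labels = KMeans(n_clusters=10, max_iter=250, n_init=10,
                random_state=0).fit_predict(pixels)
clusters = labels.reshape(rows, cols)   # unlabeled cluster map to recode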

Results:


The resulting image from ISODATA unsupervised classification was not great.  Just looking at it gives a general idea of what the area looks like, but it really is not accurate.  Many agricultural areas were classified as forest and vice versa, and the same is true for the urban, bare soil, and agricultural areas.  In the middle-eastern part of the image, many areas were classified as urban that are really agricultural fields that had no crops on them at the time the image was taken; these brown-looking areas were classified as urban because their signature is similar to that of bare soil.

Unsupervised classification does not seem like the best way to classify an image.  It might be possible to produce an accurate classification with many more classes, but 20 classes are not enough.


Sources:

Cyril Wilson