my journal rs

May 7th, 2017
Fig 1 = Illustration of our deep learning and pre- and post-processing pipeline for road segmentation on the aerial and THEOS satellite data sets.
Fig 2 = Our adapted SegNet architecture.
Fig 3 = Illustration of shape index scores on the extracted roads after noise removal; noise objects are those whose shape index is less than 1.25 (this threshold can be obtained from an experiment on a validation data set; see the sketch after this figure list).
Fig 4 = Three of the five sample satellite images from the THEOS data sets. \textbf{(a)} Sample satellite image from the Nakhonpathom data set and the binary ground truth map denoting the location of roads; \textbf{(b)} Sample satellite image from the Chonburi data set and the binary ground truth map denoting the location of roads; \textbf{(c)} Sample satellite image from the Songkhla data set and the binary ground truth map denoting the location of roads.
Fig 5 = The remaining two of the five sample satellite images from the THEOS data sets. \textbf{(a)} Sample satellite image from the Surin data set and the binary ground truth map denoting the location of roads; \textbf{(b)} Sample satellite image from the Ubonratchathani data set and the binary ground truth map denoting the location of roads.
Fig 6 = \textbf{(a)} Sample aerial image from the Massachusetts roads data set and \textbf{(b)} the binary ground truth map denoting the location of roads.
Fig 7 = \textbf{(a)} Plot of model loss on the training and validation data sets (cross-entropy loss based on the softmax classifier for the ELU-SegNet-LM-CRFs architecture, over 50 epochs), and \textbf{(b)} performance plot of the best selected deep learning model with 14 inputs showing the F1 score trend during the learning procedure on the validation data set.
Fig 8 = Road extraction on an aerial image (Massachusetts data set). \textbf{(a)} input image (original aerial image at 1 m/pixel); \textbf{(b)} target road map; \textbf{(c)} extracted roads with the ELU-SegNet approach; \textbf{(d)} extracted roads with the ELU-SegNet-LM approach; and \textbf{(e)} extracted roads with the ELU-SegNet-LM-CRFs approach.
Fig 9 = An example of the resulting road extraction on a satellite image (Nakhonpathom data set). \textbf{(a)} input image (original satellite image at 2 m/pixel); \textbf{(b)} target road map; \textbf{(c)} extracted roads with the ELU-SegNet approach; \textbf{(d)} extracted roads with the ELU-SegNet-LM approach; and \textbf{(e)} extracted roads with the ELU-SegNet-LM-CRFs approach.
Fig 10 = An example of the resulting road extraction on a satellite image (Chonburi data set). \textbf{(a)} input image (original satellite image at 2 m/pixel); \textbf{(b)} target road map; \textbf{(c)} extracted roads with the ELU-SegNet approach; \textbf{(d)} extracted roads with the ELU-SegNet-LM approach; and \textbf{(e)} extracted roads with the ELU-SegNet-LM-CRFs approach.
Fig 11 = An example of the resulting road extraction on a satellite image (Songkhla data set). \textbf{(a)} input image (original satellite image at 2 m/pixel); \textbf{(b)} target road map; \textbf{(c)} extracted roads with the ELU-SegNet approach; \textbf{(d)} extracted roads with the ELU-SegNet-LM approach; and \textbf{(e)} extracted roads with the ELU-SegNet-LM-CRFs approach.
Fig 12 = An example of the resulting road extraction on a satellite image (Surin data set). \textbf{(a)} input image (original satellite image at 2 m/pixel); \textbf{(b)} target road map; \textbf{(c)} extracted roads with the ELU-SegNet approach; \textbf{(d)} extracted roads with the ELU-SegNet-LM approach; and \textbf{(e)} extracted roads with the ELU-SegNet-LM-CRFs approach.
Fig 13 = An example of the resulting road extraction on a satellite image (Ubonratchathani data set). \textbf{(a)} input image (original satellite image at 2 m/pixel); \textbf{(b)} target road map; \textbf{(c)} extracted roads with the ELU-SegNet approach; \textbf{(d)} extracted roads with the ELU-SegNet-LM approach; and \textbf{(e)} extracted roads with the ELU-SegNet-LM-CRFs approach.
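The noise-removal step in Fig 3 filters connected components by shape index with a 1.25 threshold. The exact shape index definition is not given in these notes; below is a minimal sketch assuming the common remote-sensing definition, perimeter / (4 * sqrt(area)), and a binary road mask as input (function and parameter names are hypothetical).

```python
import numpy as np
from skimage import measure  # scikit-image, assumed available

def remove_noise_by_shape_index(road_mask, threshold=1.25):
    """Keep only connected components whose shape index is >= threshold.

    Shape index is assumed here to be perimeter / (4 * sqrt(area));
    elongated road segments score high, compact noise blobs score low.
    """
    labeled = measure.label(road_mask > 0, connectivity=2)
    cleaned = np.zeros_like(road_mask)
    for region in measure.regionprops(labeled):
        shape_index = region.perimeter / (4.0 * np.sqrt(region.area))
        if shape_index >= threshold:
            cleaned[labeled == region.label] = 1
    return cleaned
```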
Table 1 = Variations of the proposed deep learning methods.
Table 2 = A comparison between our proposed methods and the baselines in terms of precision, recall, and F1 (road segmentation results and evaluation metrics calculated on the Massachusetts test set).
Table 3 = A comparison between our proposed methods and the baselines in terms of precision (road segmentation results and evaluation metrics calculated on the THEOS test set, which comprises the Nakhonpathom, Chonburi, Songkhla, Surin and Ubonratchathani data sets).
Table 4 = A comparison between our proposed methods and the baselines in terms of recall (road segmentation results and evaluation metrics calculated on the THEOS test set, which comprises the Nakhonpathom, Chonburi, Songkhla, Surin and Ubonratchathani data sets).
Table 5 = A comparison between our proposed methods and the baselines in terms of the F1 score (road segmentation results and evaluation metrics calculated on the THEOS test set, which comprises the Nakhonpathom, Chonburi, Songkhla, Surin and Ubonratchathani data sets).

Equation 1 =
Equation 2 =
Equation 3 =
Equation 4 =
Equation 5 =
Equation 6 =
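The six equations are still blank in these notes. Given the metrics reported in Tables 2-5 and the loss described in Fig 7, likely candidates are the standard definitions below (a sketch, not necessarily the paper's actual numbering):

\[ \mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \]

and, for the training loss in Fig 7, the softmax cross-entropy over $N$ pixels and $C$ classes:

\[ L = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{C} y_{i,c} \, \log \frac{e^{z_{i,c}}}{\sum_{k=1}^{C} e^{z_{i,k}}} \]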
Table 2. Semantic segmentation results on the Potsdam dataset (F1 scores and overall accuracy (OA)).
An example of the resulting predicted label image.
Figure 5. Segmentation results (top row: NZAM/ONERA Christchurch, bottom row: ISPRS Potsdam). (a) RGB image, (b) Ground truth, (c) SegNet prediction. Legend: white: impervious surfaces; blue: buildings; cyan: low vegetation; green: trees; yellow: vehicles; red: clutter; black: undefined.
Table 4. Instance segmentation and vehicle detection results for different morphological preprocessing (mean intersection over union (mIoU), precision and recall).
Table 5. Vehicle detection results on the ISPRS Potsdam and NZAM/ONERA datasets.
Figure 7. Performance plot of the selected deep learning model with 14 inputs showing the F1 score trend during the learning procedure on the validation data set.
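Table 4 above reports mean intersection over union alongside precision and recall. A minimal sketch of computing per-class IoU and mIoU from predicted and ground-truth label maps (array shapes and class encoding are assumptions):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Average per-class intersection over union for two integer label maps."""
    ious = []
    for c in range(num_classes):
        pred_c = (pred == c)
        target_c = (target == c)
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))
```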
Datasets
Due to the lack of aerial image datasets suitable for evaluating machine learning methods, we constructed several large and challenging datasets using aerial images of the Greater Toronto Area. We used these datasets for all experiments performed in Chapters 3, 4, and 5. Unfortunately, the imagery used to construct these datasets is not publicly available, because we were not yet aware of repositories of free high-resolution aerial imagery at the time. Chapter 6 will present the first large-scale publicly available datasets for road and building detection along with benchmarks of the most promising models developed in this thesis. We now describe the characteristics of the proprietary datasets used in the next three chapters.

Figure 3.3: Three sample input patches from the Toronto Roads data set. The red square denotes the region for which labels are predicted.

Figure 3.6: A comparison of precision-recall curves for deep architectures. (a) A comparison of different choices for the third layer of a deep architecture. (b) A comparison of three-, four- and five-layer networks.

Figure 3.9: Visualizations of neural network predictions on road and building detection tasks. Green pixels are true positives, red pixels are false positives, blue pixels are false negatives, and background pixels are true negatives. Figures 3.9(a) and 3.9(b) show predictions on the Toronto Roads test set. Figures 3.9(c) and 3.9(d) are predictions on the GTA Buildings test set.
The Massachusetts Roads Dataset consists of 1171 aerial images of the state of Massachusetts. As with the building data, each image is 1500 x 1500 pixels in size, covering an area of 2.25 square kilometers. We randomly split the data into a training set of 1108 images, a validation set of 14 images and a test set of 49 images. The dataset covers a wide variety of urban, suburban, and rural regions and covers an area of over 2600 square kilometers. With the test set alone covering over 110 square kilometers, this is by far the largest and most challenging aerial image labeling dataset. Figures 6.2(a) and 6.2(b) show two representative regions from the Massachusetts Roads dataset. The target maps were generated by rasterizing road centerlines obtained from the OpenStreetMap project. We used a line thickness of 7 pixels and no smoothing because, as we discovered in Chapter 4, using hard binary labels for training works better than using soft binary labels.
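A minimal sketch of the random split and the centerline rasterization described above. The OpenCV dependency, the tile identifiers, and the pixel-coordinate polylines are assumptions; the 1108/14/49 split, the 1500 x 1500 tile size, and the 7-pixel hard-label thickness follow the text.

```python
import random
import numpy as np
import cv2  # OpenCV, assumed available for rasterization

def split_massachusetts(image_ids, n_val=14, n_test=49, seed=0):
    """Randomly split the 1171 tile identifiers into train/validation/test sets."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    val_ids, test_ids = ids[:n_val], ids[n_val:n_val + n_test]
    train_ids = ids[n_val + n_test:]  # the remaining 1108 tiles
    return train_ids, val_ids, test_ids

def rasterize_centerlines(polylines, size=1500, thickness=7):
    """Burn road centerlines into a hard binary target map (no smoothing).

    `polylines` is assumed to be a list of Nx2 arrays of pixel coordinates
    already projected into the tile's 1500 x 1500 raster grid.
    """
    target = np.zeros((size, size), dtype=np.uint8)
    for line in polylines:
        pts = np.asarray(line, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(target, [pts], isClosed=False, color=1, thickness=thickness)
    return target
```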
Figure 6.2: Figures 6.2(a) and 6.2(b) show two representative regions from the Massachusetts Roads dataset. Figures 6.2(c) and 6.2(d) show predictions of a post-processing network on these regions.
Future work: Moreover, context in an image is paramount: labels of adjacent pixels are known to be highly dependent, owing to the spatial structure of the images, and this needs to be incorporated into the classifier.
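The ELU-SegNet-LM-CRFs variant above and this future-work note both point at modelling the spatial dependence between neighbouring labels. Below is a minimal sketch of fully connected CRF post-processing on top of the network's softmax output, using the pydensecrf package; the kernel parameters are placeholder values, not the paper's settings.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(rgb_image, softmax_probs, n_iters=5):
    """Refine per-pixel class probabilities with a dense CRF.

    rgb_image:     H x W x 3 uint8 image.
    softmax_probs: C x H x W float array of class probabilities from the network.
    Returns an H x W integer label map.
    """
    n_classes, height, width = softmax_probs.shape
    crf = dcrf.DenseCRF2D(width, height, n_classes)
    crf.setUnaryEnergy(unary_from_softmax(softmax_probs))
    # Pairwise terms: a smoothness kernel and an appearance (color-dependent) kernel.
    crf.addPairwiseGaussian(sxy=3, compat=3)
    crf.addPairwiseBilateral(sxy=80, srgb=13,
                             rgbim=np.ascontiguousarray(rgb_image), compat=10)
    refined = np.array(crf.inference(n_iters)).reshape(n_classes, height, width)
    return refined.argmax(axis=0)
```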