Tracking changes in the coastline and its appearance can help scientists monitor both conservation efforts and the effects of climate change. That's why Assistant Research Scientist Anthony Reisinger and his colleagues at Texas A&M's Harte Research Institute for Gulf of Mexico Studies analyze aerial imagery of the shoreline as part of their mission to develop science-driven solutions to Gulf of Mexico problems. So when the Texas state government needed a detailed environmental impact map to direct field crews after an event like an oil spill, Reisinger and his team developed an Environmental Sensitivity Index (ESI) map: they manually traced the shoreline for the entire length of the Texas coast (nearly nine thousand miles), then coded it with ESI values that indicate how sensitive each section is to oil. But classifying those values required close scrutiny by experts who had spent many years in the field, so the researchers decided to see whether they could automate the process by building their own domain-specific image classifier with Google Cloud's AutoML Vision, currently in beta.
Google's machine-learning tools help researchers assess and track environmental change
Environmental scientists at the Harte Research Institute for Gulf of Mexico Studies at Texas A&M - Corpus Christi use AutoML Vision to classify aerial images of coastal shoreline automatically.
"By moving to Google Cloud's AutoML Vision, we improved our model's accuracy, making it much easier to build custom image classification models on our own image data."
Anthony Reisinger, Assistant Research Scientist, Harte Research Institute for Gulf of Mexico Studies, Texas A&M University - Corpus Christi
Coastal classifiers: using AutoML Vision to assess and track environmental change
Texas coast with cyan overlay of ESI shoreline
Rectified imagery overlaid with ESI shorelines and grid used to extract both imagery and shorelines.
Training models with more flexibility and greater precision
Google’s suite of Cloud AutoML products, including Natural Language and Translate as well as Vision, leverages state-of-the-art transfer learning and Neural Architecture Search technology to allow developers with limited machine-learning expertise to train high-quality models specific to their data and business needs. AutoML trains advanced models using your own data, lets you inspect your data and analyze the results via an intuitive user interface, and provides an API for scalable serving. The flexibility of AutoML Vision was important to Reisinger and his colleagues in customizing their own labels to automatically classify their images.
Reisinger's team experimented with training AutoML Vision on both single- and multi-labeled datasets, where the aerial images of the shoreline are coded as one or more types. To generate the training image labels, Reisinger explained, “the direction of the camera and aircraft position were used to project a point to the closest shoreline, and each image was assigned one or more labels based on both the camera’s heading and the proximity of the shoreline to the plane's position.” For the single-label dataset, AutoML Vision automatically labeled each image with a dominant coastline type, but the visual interface allows users to see other possible coastline types, ranked by relevance, as well [see image]. The single-label dataset resulted in a training model with an average precision of .844, with 84% precision and 78% recall.

But the team noticed that the model often predicted coastline types that were actually present in the image but did not match the single label. When you create an AutoML Vision dataset, it gives you the option to “enable multi-label classification”, and this feature made it straightforward to run a second set of experiments using the same images, tagged with multiple labels per image where possible. This multi-label dataset produced significantly more accurate training models than the single-label dataset, with an average precision of .952, 91% precision, and 90% recall.
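The geometry behind the two labeling strategies can be sketched in a few lines of Python. Everything below is a hypothetical illustration, not the team's actual pipeline: the coordinates, ESI labels, and function names are invented, and a real implementation would work with the projected camera footprint rather than simple point-to-point distances and bearings.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def single_label(aircraft, shoreline_points):
    """Single-label method: take the ESI label of the shoreline point
    nearest the aircraft/camera position."""
    lat, lon = aircraft
    nearest = min(shoreline_points, key=lambda p: haversine_km(lat, lon, p[0], p[1]))
    return nearest[2]

def multi_label(aircraft, heading_deg, fov_deg, max_km, shoreline_points):
    """Multi-label method: collect every ESI label whose shoreline point
    falls inside the camera's modeled field of view and within range."""
    lat, lon = aircraft
    labels = set()
    for plat, plon, label in shoreline_points:
        if haversine_km(lat, lon, plat, plon) > max_km:
            continue
        # Initial bearing from the aircraft to the shoreline point
        dl = math.radians(plon - lon)
        x = math.sin(dl) * math.cos(math.radians(plat))
        y = (math.cos(math.radians(lat)) * math.sin(math.radians(plat))
             - math.sin(math.radians(lat)) * math.cos(math.radians(plat)) * math.cos(dl))
        bearing = math.degrees(math.atan2(x, y)) % 360
        # Angular offset from the camera heading, wrapped to [-180, 180)
        offset = (bearing - heading_deg + 180) % 360 - 180
        if abs(offset) <= fov_deg / 2:
            labels.add(label)
    return labels
```

With a wide field of view, several shoreline types can land in one frame, which is exactly why the single-label scheme undercounted types that were genuinely visible in the image.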
An interface that's intuitive and easy to use
Reisinger is pleased with their results so far, and the tool’s ease of use is important to him: “AutoML Vision makes it easy to view evaluation results and metrics and to use your trained models for prediction. And now non-experts can assign ESI values to these shorelines we create.” The Evaluate tab allows easy visual inspection of the metrics for all images or the results for a given label, including its true positives, false positives, and false negatives. A slider adjusts the score threshold for classification, either for all labels or for a single label, to show how the precision-recall tradeoff curve changes in response. The Predict tab shows at a glance how a model is doing on a few images, and thumbnail renderings of the images make it convenient to scan each category quickly. The REST API then makes the model scalable, callable either programmatically or from the command line.
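The score-threshold slider works because every prediction carries a confidence score, and only scores at or above the threshold count as positive calls for a label. A minimal sketch of that precision-recall tradeoff, with invented image ids and scores (this is an illustration of the metric, not AutoML Vision's internal code):

```python
def precision_recall(predictions, ground_truth, threshold):
    """Precision and recall for one label at a given score threshold.

    predictions: dict mapping image id -> model confidence for the label
    ground_truth: set of image ids that truly carry the label
    """
    positives = {img for img, score in predictions.items() if score >= threshold}
    tp = len(positives & ground_truth)   # predicted and truly labeled
    fp = len(positives - ground_truth)   # predicted but not truly labeled
    fn = len(ground_truth - positives)   # truly labeled but missed
    precision = tp / (tp + fp) if positives else 1.0
    recall = tp / (tp + fn) if ground_truth else 1.0
    return precision, recall
```

Sweeping the threshold from low to high generally trades recall for precision, which is the curve the Evaluate tab draws as the slider moves.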
Reisinger and his team plan to create an updated version of the ESI shoreline dataset and to continue using AutoML Vision to predict shoreline types in their most recent imagery. “By moving to Google Cloud’s AutoML Vision,” Reisinger concludes, “we improved our model's accuracy, making it much easier to build custom image classification models on our own image data.”
Cloud Vision’s labels are too general for coastline classification.
Map showing the single-label method used to join the aircraft’s position with the nearest ESI shoreline. The image on the left was taken from the aircraft’s position shown on the map. (Note: the projected point was assigned the value of the shoreline point closest to the plane/camera’s location; however, this photo contains multiple shoreline types.)
Precision and recall metrics across all labels, using the single-label dataset.
AutoML Vision correctly predicts that this image contains gravel shell beaches, even though it wasn’t labeled as such.
Map illustrating the multi-label method of ESI shoreline label extraction, which uses the modeled field of view (FOV) of the camera. The image was taken from the aircraft’s position on the map. (Note: this method allows the majority of the shorelines in the FOV to be assigned to this image.)
Precision and recall metrics across all labels, for a model trained on the multi-label dataset.
This image’s multiple classifications were correctly predicted.
Viewing evaluation results for the “gravel_shell_beaches” label.
Predicting the classes of shoreline shown in a new image