Beyond Measurement: Extracting Vegetation Height from High Resolution Imagery with Deep Learning

Abstract: Measuring and monitoring the height of vegetation provides important insights into forest age and habitat quality. These are essential for the accuracy of applications that rely on up-to-date and accurate vegetation data. Current vegetation sensing practices include ground survey, photogrammetry, synthetic aperture radar (SAR), and airborne light detection and ranging (LiDAR). While these methods provide high resolution and accuracy, their hardware requirements and collection effort prohibit highly recurrent and widespread collection. In response to the limitations of current methods, we designed Y-NET, a novel deep learning model that generates high resolution models of vegetation from highly recurrent multispectral aerial imagery and elevation data. Y-NET's architecture uses convolutional layers to learn correlations between different input features and vegetation height, generating an accurate vegetation surface model (VSM) at 1 m resolution. We evaluated Y-NET on 235 km² of the East San Francisco Bay Area and find that it achieves low error relative to LiDAR when tested on new locations. Y-NET also achieves an R² of 0.83 and, as side-by-side visual comparisons show, can effectively model complex vegetation. Furthermore, we show that Y-NET is able to identify instances of vegetation growth and mitigation by comparing aerial imagery and LiDAR collected at different times.

Keywords: deep learning; artificial intelligence; vegetation surface modeling
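The abstract describes a two-input ("Y"-shaped) convolutional design: one branch ingests multispectral imagery, another ingests elevation data, and the fused features are regressed to a per-pixel height map. A minimal sketch of that idea in PyTorch follows; all layer widths, kernel sizes, and band counts are illustrative assumptions, not the published Y-NET configuration.

```python
# Hypothetical sketch of a Y-shaped fully convolutional regressor:
# two input branches (multispectral imagery, elevation) fused into one
# head that predicts per-pixel vegetation height. Channel counts and
# kernel sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class YNetSketch(nn.Module):
    def __init__(self, spectral_bands: int = 4, elevation_bands: int = 1):
        super().__init__()
        # Branch 1: spectral features (e.g. RGB + near-infrared, assumed).
        self.spectral_branch = nn.Sequential(
            nn.Conv2d(spectral_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Branch 2: terrain features from the elevation raster.
        self.elevation_branch = nn.Sequential(
            nn.Conv2d(elevation_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Fused trunk: concatenated features regressed to one height channel.
        self.head = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1),  # one height value per pixel
        )

    def forward(self, imagery: torch.Tensor, elevation: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.spectral_branch(imagery), self.elevation_branch(elevation)], dim=1
        )
        return self.head(fused)  # (N, 1, H, W) vegetation surface model

model = YNetSketch()
vsm = model(torch.randn(1, 4, 64, 64), torch.randn(1, 1, 64, 64))
```

Training such a model against LiDAR-derived height rasters would use a standard pixel-wise regression loss (e.g. MSE); the fusion-by-concatenation step is what gives the architecture its "Y" shape.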