<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[WHOI's Autonomous Robotics and Perception Laboratory]]></title><description><![CDATA[[science and systems for autonomous marine ecosystem monitoring]]]></description><link>https://warp.whoi.edu/</link><image><url>http://warp.whoi.edu/favicon.png</url><title>WHOI&apos;s Autonomous Robotics and Perception Laboratory</title><link>https://warp.whoi.edu/</link></image><generator>Ghost 3.42</generator><lastBuildDate>Fri, 27 Feb 2026 01:43:27 GMT</lastBuildDate><atom:link href="https://warp.whoi.edu/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[WHOI Caribbean Reef Fish Detection Dataset]]></title><description><![CDATA[<p>Levi Cai*, Austin Greene*, Nadège Aoki, Sierra Jarriel, Daniel Yang, T. Aran Mooney,  Yogesh Girdhar</p><p>Massachusetts Institute of Technology, Woods Hole Oceanographic Institution</p><figure class="kg-card kg-image-card"><img src="http://warp.whoi.edu/content/images/2024/10/example_labelled_video-ezgif.com-optimize.gif" class="kg-image" alt></figure><p>We introduce the WHOI Reef Solutions Initiative dataset for detection and classification of Caribbean Reef Fish. </p><p>The dataset consists of 162 clips, each 30sec long at 3fps,</p>]]></description><link>https://warp.whoi.edu/caribbean-reef-fish-detection/</link><guid isPermaLink="false">6700488cd8f498fd05ae6738</guid><dc:creator><![CDATA[Levi Cai]]></dc:creator><pubDate>Fri, 04 Oct 2024 20:03:27 GMT</pubDate><content:encoded><![CDATA[<p>Levi Cai*, Austin Greene*, Nadège Aoki, Sierra Jarriel, Daniel Yang, T. Aran Mooney,  Yogesh Girdhar</p><p>Massachusetts Institute of Technology, Woods Hole Oceanographic Institution</p><figure class="kg-card kg-image-card"><img src="http://warp.whoi.edu/content/images/2024/10/example_labelled_video-ezgif.com-optimize.gif" class="kg-image" alt></figure><p>We introduce the WHOI Reef Solutions Initiative dataset for detection and classification of Caribbean Reef Fish. </p><p>The dataset consists of 162 clips, each 30sec long at 3fps, extracted from diver transects aimed at estimating fish abundance. Each frame is labelled with fish/no-fish bounding box annotations, along with tracking for every object. 
Finally, a subset of those tracks has been labelled to the species level.</p><figure class="kg-card kg-image-card"><img src="http://warp.whoi.edu/content/images/2024/10/example_labelled_video_2-ezgif.com-optimize.gif" class="kg-image" alt></figure><figure class="kg-card kg-image-card"><img src="http://warp.whoi.edu/content/images/2024/10/example_labelled_video_tracks-ezgif.com-optimize.gif" class="kg-image" alt></figure><figure class="kg-card kg-image-card"><img src="http://warp.whoi.edu/content/images/2024/10/image-1.png" class="kg-image" alt srcset="http://warp.whoi.edu/content/images/size/w600/2024/10/image-1.png 600w, http://warp.whoi.edu/content/images/size/w1000/2024/10/image-1.png 1000w, http://warp.whoi.edu/content/images/size/w1600/2024/10/image-1.png 1600w, http://warp.whoi.edu/content/images/size/w2400/2024/10/image-1.png 2400w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="http://warp.whoi.edu/content/images/2024/10/image-3.png" class="kg-image" alt srcset="http://warp.whoi.edu/content/images/size/w600/2024/10/image-3.png 600w, http://warp.whoi.edu/content/images/size/w1000/2024/10/image-3.png 1000w, http://warp.whoi.edu/content/images/size/w1600/2024/10/image-3.png 1600w, http://warp.whoi.edu/content/images/size/w2400/2024/10/image-3.png 2400w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[SeaSplat: Representing Underwater Scenes with 3D Gaussian Splatting and a Physically Grounded Image Formation Model]]></title><description><![CDATA[<figure class="kg-card kg-image-card"><img src="http://warp.whoi.edu/content/images/2024/09/teaser.png" class="kg-image" alt srcset="http://warp.whoi.edu/content/images/size/w600/2024/09/teaser.png 600w, http://warp.whoi.edu/content/images/size/w1000/2024/09/teaser.png 1000w, http://warp.whoi.edu/content/images/size/w1600/2024/09/teaser.png 1600w, http://warp.whoi.edu/content/images/size/w2400/2024/09/teaser.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>We introduce SeaSplat, a method to enable real-time rendering of underwater scenes leveraging recent advances in 3D radiance fields. Underwater scenes are challenging visual environments, as rendering through a medium such as water introduces both range- and color-dependent effects on image capture. 
We constrain 3D Gaussian Splatting (3DGS), a</p>]]></description><link>https://warp.whoi.edu/seasplat/</link><guid isPermaLink="false">66f98ad7d8f498fd05ae6721</guid><category><![CDATA[robotics]]></category><category><![CDATA[vis]]></category><category><![CDATA[Project: Curious Robot for Ecosystem Exploration]]></category><dc:creator><![CDATA[Yogi Girdhar]]></dc:creator><pubDate>Sun, 29 Sep 2024 17:27:51 GMT</pubDate><media:content url="http://warp.whoi.edu/content/images/2024/09/teaser-1.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="http://warp.whoi.edu/content/images/2024/09/teaser.png" class="kg-image" alt="SeaSplat: Representing Underwater Scenes with 3D Gaussian Splatting and a Physically Grounded Image Formation Model" srcset="http://warp.whoi.edu/content/images/size/w600/2024/09/teaser.png 600w, http://warp.whoi.edu/content/images/size/w1000/2024/09/teaser.png 1000w, http://warp.whoi.edu/content/images/size/w1600/2024/09/teaser.png 1600w, http://warp.whoi.edu/content/images/size/w2400/2024/09/teaser.png 2400w" sizes="(min-width: 720px) 720px"></figure><img src="http://warp.whoi.edu/content/images/2024/09/teaser-1.png" alt="SeaSplat: Representing Underwater Scenes with 3D Gaussian Splatting and a Physically Grounded Image Formation Model"><p>We introduce SeaSplat, a method to enable real-time rendering of underwater scenes leveraging recent advances in 3D radiance fields. Underwater scenes are challenging visual environments, as rendering through a medium such as water introduces both range- and color-dependent effects on image capture. We constrain 3D Gaussian Splatting (3DGS), a recent advance in radiance fields enabling rapid training and real-time rendering of full 3D scenes, with a physically grounded underwater image formation model. Applying SeaSplat to real-world scenes from the SeaThru-NeRF dataset, a scene collected by an underwater vehicle in the US Virgin Islands, and simulation-degraded real-world scenes, not only do we see increased quantitative performance on rendering novel viewpoints from the scene with the medium present, but we are also able to recover the underlying true color of the scene and restore renders free of the intervening medium. 
We show that the underwater image formation model helps learn scene structure, yielding better depth maps, and that our additions maintain the significant computational benefits afforded by leveraging a 3D Gaussian representation.</p><p>Click below for more details.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://seasplat.github.io/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">SeaSplat: Representing Underwater Scenes with 3D Gaussian Splatting and a Physically Grounded Image Formation Model</div><div class="kg-bookmark-description">Website for paper SeaSplat: Representing Underwater Scenes with 3D Gaussian Splatting and a Physically Grounded Image Formation Model.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://seasplat.github.io/static/images/Robot-icon.svg" alt="SeaSplat: Representing Underwater Scenes with 3D Gaussian Splatting and a Physically Grounded Image Formation Model"><span class="kg-bookmark-publisher">Daniel Yang</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://seasplat.github.io/static/images/teaser.png" alt="SeaSplat: Representing Underwater Scenes with 3D Gaussian Splatting and a Physically Grounded Image Formation Model"></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Workshop: Robots for Understanding Natural Ecosystems]]></title><description><![CDATA[<figure class="kg-card kg-image-card"><img src="http://warp.whoi.edu/content/images/2024/02/runecollage2-1.jpg" class="kg-image"></figure><p>We are excited to announce a new workshop titled "Robots for Understanding Natural Ecosystems" that we're hosting at the upcoming ICRA 2024 in Yokohama, Japan, on May 17th! The workshop invites <strong>roboticists who work in ecology</strong> or <strong>ecologists who work with robots</strong> to come and share their findings and discuss</p>]]></description><link>https://warp.whoi.edu/icra-rune/</link><guid isPermaLink="false">65dd01055fa9e309916ca92a</guid><category><![CDATA[ecology]]></category><category><![CDATA[robotics]]></category><dc:creator><![CDATA[Yogi Girdhar]]></dc:creator><pubDate>Mon, 26 Feb 2024 21:26:42 GMT</pubDate><media:content url="http://warp.whoi.edu/content/images/2024/02/runecollage2.jpg" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="http://warp.whoi.edu/content/images/2024/02/runecollage2-1.jpg" class="kg-image" alt="Workshop: Robots for Understanding Natural Ecosystems"></figure><img src="http://warp.whoi.edu/content/images/2024/02/runecollage2.jpg" alt="Workshop: Robots for Understanding Natural Ecosystems"><p>We are excited to announce a new workshop titled "Robots for Understanding Natural Ecosystems" that we're hosting at the upcoming ICRA 2024 in Yokohama, Japan, on May 17th! The workshop invites <strong>roboticists who work in ecology</strong> or <strong>ecologists who work with robots</strong> to come and share their findings and discuss the future of these technologies and their role in ecology and conservation! 
<a href="http://warp.whoi.edu/rune" rel="noopener noreferrer">https://warp.whoi.edu/rune</a> </p>]]></content:encoded></item><item><title><![CDATA[DeepSeeColor]]></title><description><![CDATA[<figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/vpge92KRU1M?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="DeepSeeColor: A realtime algorithm for enhancing underwater imagery"></iframe></figure><p>Successful applications of complex vision-based behaviours underwater have lagged behind progress in terrestrial and aerial domains. This is largely due to the degraded image quality resulting from the physical phenomena involved in underwater image formation. <em>Spectrally-selective light attenuation</em> drains some colors from underwater images while <em>backscattering</em> adds others, making it</p>]]></description><link>https://warp.whoi.edu/deepseecolor/</link><guid isPermaLink="false">6408f14bd6b78c09c569994f</guid><category><![CDATA[publication]]></category><category><![CDATA[Project: Curious Robot for Ecosystem Exporation]]></category><dc:creator><![CDATA[Yogi Girdhar]]></dc:creator><pubDate>Wed, 08 Mar 2023 20:44:03 GMT</pubDate><media:content url="http://warp.whoi.edu/content/images/2023/03/alternative_fig_2.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/vpge92KRU1M?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="DeepSeeColor: A realtime algorithm for enhancing underwater imagery"></iframe></figure><img src="http://warp.whoi.edu/content/images/2023/03/alternative_fig_2.png" alt="DeepSeeColor"><p>Successful applications of complex vision-based behaviours underwater have lagged behind progress in terrestrial and aerial domains. This is largely due to the degraded image quality resulting from the physical phenomena involved in underwater image formation. <em>Spectrally-selective light attenuation</em> drains some colors from underwater images while <em>backscattering</em> adds others, making it challenging to perform vision-based tasks underwater.  State-of-the-art methods for underwater color correction optimize the parameters of image formation models to restore the full spectrum of color to underwater imagery.  However, these methods have high computational complexity that is unfavourable for realtime use by autonomous underwater vehicles (AUVs), as a result of having been primarily designed for offline color correction.  Here, we present <em><strong>DeepSeeColor</strong></em>, a novel algorithm that combines a state-of-the-art underwater image formation model with the computational efficiency of deep learning frameworks.  In our experiments, we show that DeepSeeColor offers comparable performance to the popular "Sea-Thru" algorithm [Akkaynak et al. 2019] while being able to rapidly process images at up to 60Hz, thus making it suitable for use onboard AUVs as a preprocessing step to enable more robust vision-based behaviours.</p><p><a href="https://arxiv.org/abs/2303.04025">Jamieson, Stewart, Jonathan P. How, and Yogesh Girdhar. 
“DeepSeeColor: Realtime Adaptive Color Correction for Autonomous Underwater Vehicles via Deep Learning Methods.” In <em>IEEE International Conference on Robotics and Automation</em>, 2023.</a></p><p>We are pleased to have the DeepSeeColor source code and data available at: <a href="https://github.com/warplab/DeepSeeColor">https://github.com/warplab/DeepSeeColor</a></p>]]></content:encoded></item><item><title><![CDATA[CUREE: A Curious Robot for Ecosystem Exploration]]></title><description><![CDATA[<p>Coral reefs worldwide are threatened by anthropogenic disturbances and climate change. New tools are needed to scale up monitoring of coral reefs to understand reef ecosystems, rapidly assess biodiversity, and measure the efficacy of interventions. This interdisciplinary project will address this need by creating an autonomous robotic system that can</p>]]></description><link>https://warp.whoi.edu/curee/</link><guid isPermaLink="false">640756f9d6b78c09c569991c</guid><category><![CDATA[Projects]]></category><category><![CDATA[Project: Curious Robot for Ecosystem Exploration]]></category><category><![CDATA[Co-Robotic Exploration]]></category><dc:creator><![CDATA[Yogi Girdhar]]></dc:creator><pubDate>Tue, 07 Mar 2023 15:29:40 GMT</pubDate><media:content url="http://warp.whoi.edu/content/images/2023/03/curee-hydrophone-glamour-2.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://warp.whoi.edu/content/images/2023/03/curee-hydrophone-glamour-2.jpg" alt="CUREE: A Curious Robot for Ecosystem Exploration"><p>Coral reefs worldwide are threatened by anthropogenic disturbances and climate change. New tools are needed to scale up monitoring of coral reefs to understand reef ecosystems, rapidly assess biodiversity, and measure the efficacy of interventions. This interdisciplinary project will address this need by creating an autonomous robotic system that can navigate a complex ecosystem and intelligently sample its environment to estimate local biodiversity and ecosystem health.</p><p>CUREE is a robot designed to explore underwater ecosystems, observe complex interactions between the organisms that live there and their habitats, and use these observations in real-time to adapt its behavior as an intelligent partner for marine science.  As a compact system designed to be deployed and operated by teams as small as a single person, CUREE can be taken anywhere in the world in checked luggage on commercial airlines and deployed without a need for significant supporting infrastructure.  In experiments in the U.S. Virgin Islands, we demonstrated how CUREE can be used to study coral reefs, by combining audio and visual observations of a coral reef to infer the preferred habitat of snapping shrimp, or by tracking a barracuda as it hunts above a reef.</p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/7gVOVtbJ_kM?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="WARPLab&#39;s Curious Underwater Robot for Ecosystem Exploration (CUREE)"></iframe></figure><!--kg-card-begin: html--><div class="csl-entry">Girdhar, Y., McGuire, N., Cai, L., Jamieson, S., McCammon, S., Claus, B., San Soucie, J. E., Todd, J. E., &#38; Mooney, T. A. (2023). CUREE: A Curious Underwater Robot for Ecosystem Exploration. <i>IEEE International Conference on Robotics and Automation</i>.</div>  
<a href="https://aps.arxiv.org/abs/2303.03943">ArXiv preprint</a><!--kg-card-end: html--><!--kg-card-begin: markdown--><p>**<a href="http://warp.whoi.edu/tag/ecocurious/">Project Updates</a> **</p>
<!--kg-card-end: markdown--><!--kg-card-begin: html--><img src="https://www.nsf.gov/images/logos/NSF_4-Color_bitmap_Logo.png" width="200" alt="CUREE: A Curious Robot for Ecosystem Exploration"> This project is funded by NSF award #2133029.<!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Semi-supervised Visual Tracking of Marine Animals Using Autonomous Underwater Vehicles]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><a href="https://drive.google.com/drive/folders/18fknmUjD4aq3-Qktn-rLVIaNEEujV3wK">VMAT Dataset link</a></p>
<p><a href="https://drive.google.com/drive/folders/1iAIz6oaAaeNlxSWup3xElViLzmjxcwa-?usp=sharing">Supplementary videos link</a></p>
<!--kg-card-end: markdown--><p>In-situ visual observations of marine organisms are crucial to developing an understanding of their behaviour and its relation to their surrounding ecosystem. Typically, these observations are collected via divers, tags, and remotely-operated or human-piloted vehicles. Recently, however, autonomous underwater vehicles equipped with cameras and embedded computers with</p>]]></description><link>https://warp.whoi.edu/vmat/</link><guid isPermaLink="false">63ffc69ed6b78c09c56998cf</guid><category><![CDATA[publication]]></category><category><![CDATA[Project: Curious Robot for Ecosystem Exploration]]></category><dc:creator><![CDATA[Yogi Girdhar]]></dc:creator><pubDate>Wed, 01 Mar 2023 21:46:22 GMT</pubDate><media:content url="http://warp.whoi.edu/content/images/2023/03/barry-1.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="http://warp.whoi.edu/content/images/2023/03/barry-1.jpg" alt="Semi-supervised Visual Tracking of Marine Animals Using Autonomous Underwater Vehicles"><p><a href="https://drive.google.com/drive/folders/18fknmUjD4aq3-Qktn-rLVIaNEEujV3wK">VMAT Dataset link</a></p>
<p><a href="https://drive.google.com/drive/folders/1iAIz6oaAaeNlxSWup3xElViLzmjxcwa-?usp=sharing">Supplementary videos link</a></p>
<!--kg-card-end: markdown--><p>In-situ visual observations of marine organisms are crucial to developing an understanding of their behaviour and its relation to their surrounding ecosystem. Typically, these observations are collected via divers, tags, and remotely-operated or human-piloted vehicles. Recently, however, autonomous underwater vehicles equipped with cameras and embedded computers with GPU capabilities are being developed for a variety of applications, and in particular, can be used to supplement these existing data collection mechanisms where human operation or tags are more difficult. Existing approaches have focused on using fully-supervised tracking methods, but labelled data for many underwater species are severely lacking. Semi-supervised trackers may offer alternative tracking solutions because they require less data than their fully-supervised counterparts. However, because no realistic underwater tracking datasets exist, the performance of semi-supervised tracking algorithms in the marine domain is not well understood. To better evaluate their performance and utility, in this paper we provide (1) a novel dataset specific to marine animals, (2) an evaluation of state-of-the-art semi-supervised algorithms in the context of underwater animal tracking, and (3) an evaluation of real-world performance through demonstrations using a semi-supervised algorithm on-board an autonomous underwater vehicle to track marine animals in the wild.</p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/s_PBaYIqNKg?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Tracking a barracuda using CUREE"></iframe></figure><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/zRklilGVHog?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Tracking a jack using CUREE"></iframe></figure><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/1C5woPnRh5M?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Tracking a jellyfish using CUREE"></iframe></figure><!--kg-card-begin: html--><div class="csl-entry">Cai, L., McGuire, N. E., Hanlon, R., Mooney, T. A., &#38; Girdhar, Y. (2023). Semi-supervised Visual Tracking of Marine Animals Using Autonomous Underwater Vehicles. <i>International Journal of Computer Vision</i>. <a href="https://doi.org/10.1007/s11263-023-01762-5">https://doi.org/10.1007/s11263-023-01762-5</a></div><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[A day in the field, testing CUREE]]></title><description><![CDATA[<figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/IuA2KQixwbM?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen title="WARPLab field testing its coral reef monitoring underwater robot in USVI"></iframe></figure><!--kg-card-begin: markdown--><p>WARPLab is in St. John, USVI from July 25 to Aug 7, 2022. 
The focus of this op is to test new WARPAUV capabilities, including audio-visual benthic surveys and animal tracking in the visually complex environments typical of coral reefs.</p>
<!--kg-card-end: markdown-->]]></description><link>https://warp.whoi.edu/a-day-in-the-field/</link><guid isPermaLink="false">62ed1ae5bd96d40a6c44ffd3</guid><category><![CDATA[Co-Robotic Exploration]]></category><category><![CDATA[ecology]]></category><category><![CDATA[sound]]></category><dc:creator><![CDATA[Yogi Girdhar]]></dc:creator><pubDate>Fri, 05 Aug 2022 13:33:41 GMT</pubDate><media:content url="http://warp.whoi.edu/content/images/2022/08/warpauv_joelshoal.jpg" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/IuA2KQixwbM?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen title="WARPLab field testing its coral reef monitoring underwater robot in USVI"></iframe></figure><!--kg-card-begin: markdown--><img src="http://warp.whoi.edu/content/images/2022/08/warpauv_joelshoal.jpg" alt="A day in the field, testing CUREE"><p>WARPLab is in St. John, USVI from July 25 to Aug 7, 2022. The focus of this op is to test new WARPAUV capabilities, including audio-visual benthic surveys and animal tracking in the visually complex environments typical of coral reefs.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Field Trip to St. John, US Virgin Islands]]></title><description><![CDATA[<p></p><p>We conducted robot field trials in USVI from Oct 17-Oct 30, 2021. The goal of the mission was to test our new robots, new robot behaviors, deploy acoustic sensors, and collect samples, all aimed at improving our capability to monitor coral reefs and understand the impact of climate change. Our</p>]]></description><link>https://warp.whoi.edu/2021-usvi-fieldtrials/</link><guid isPermaLink="false">616d36468fcddf0786de3b31</guid><category><![CDATA[expedition]]></category><category><![CDATA[Co-Robotic Exploration]]></category><dc:creator><![CDATA[Yogi Girdhar]]></dc:creator><pubDate>Mon, 18 Oct 2021 09:32:16 GMT</pubDate><media:content url="http://warp.whoi.edu/content/images/2021/10/Screen-Shot-2021-10-18-at-5.18.12-AM-1.png" medium="image"/><content:encoded><![CDATA[<img src="http://warp.whoi.edu/content/images/2021/10/Screen-Shot-2021-10-18-at-5.18.12-AM-1.png" alt="Field Trip to St. John, US Virgin Islands"><p></p><p>We conducted robot field trials in USVI from Oct 17-Oct 30, 2021. The goal of the mission was to test our new robots, new robot behaviors, deploy acoustic sensors, and collect samples, all aimed at improving our capability to monitor coral reefs and understand the impact of climate change. Our team is a diverse mix of roboticists, biologists, and ecologists, all united around the common goal of saving the reefs. </p><p><br></p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="http://warp.whoi.edu/content/images/2021/10/PXL_20211017_175021840.jpg" width="3840" height="2160" alt="Field Trip to St. John, US Virgin Islands"></div><div class="kg-gallery-image"><img src="http://warp.whoi.edu/content/images/2021/10/PXL_20211017_193425904.jpg" width="3840" height="2160" alt="Field Trip to St. John, US Virgin Islands"></div><div class="kg-gallery-image"><img src="http://warp.whoi.edu/content/images/2021/10/PXL_20211017_193955742.jpg" width="3840" height="2160" alt="Field Trip to St. John, US Virgin Islands"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="http://warp.whoi.edu/content/images/2021/10/Screen-Shot-2021-10-18-at-5.18.12-AM.png" width="1346" height="968" alt="Field Trip to St. John, US Virgin Islands"></div><div class="kg-gallery-image"><img src="http://warp.whoi.edu/content/images/2021/10/PXL_20211017_233752850.jpg" width="3840" height="2160" alt="Field Trip to St. John, US Virgin Islands"></div></div></div><figcaption>Our field site was located at Lameshur Bay in St. John. Getting there required changing planes twice, renting cars, and taking a ferry.</figcaption></figure><p></p><figure class="kg-card kg-image-card kg-width-full kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2021/12/DSC00971.JPG" class="kg-image" alt="Field Trip to St. John, US Virgin Islands"><figcaption>WARPASV is the surface robot we use to connect to the WARPAUV. Here it is being tested in Lameshur Bay.</figcaption></figure><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="http://warp.whoi.edu/content/images/2021/12/DSC00985-1.JPG" width="3888" height="2592" alt="Field Trip to St. 
John, US Virgin Islands"></div><div class="kg-gallery-image"><img src="http://warp.whoi.edu/content/images/2021/12/DSC01006-1.JPG" width="3888" height="2592" alt="Field Trip to St. John, US Virgin Islands"></div></div></div><figcaption>We used the Virgin Island Environmental Resource Stations (VIERS) dock to do quick robot testing, launch all our small boats, and do diver testing.</figcaption></figure><figure class="kg-card kg-image-card kg-width-full kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2021/12/DSC01111.JPG" class="kg-image" alt="Field Trip to St. John, US Virgin Islands"><figcaption>WARPAUV exploring a coral reef</figcaption></figure><figure class="kg-card kg-gallery-card kg-width-wide"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="http://warp.whoi.edu/content/images/2021/12/DSC01062.JPG" width="3888" height="2592" alt="Field Trip to St. John, US Virgin Islands"></div><div class="kg-gallery-image"><img src="http://warp.whoi.edu/content/images/2021/12/DSC01094.JPG" width="3888" height="2592" alt="Field Trip to St. John, US Virgin Islands"></div><div class="kg-gallery-image"><img src="http://warp.whoi.edu/content/images/2021/12/DSC01517.JPG" width="3888" height="2592" alt="Field Trip to St. John, US Virgin Islands"></div></div></div></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2021/12/DSC01431.JPG" class="kg-image" alt="Field Trip to St. John, US Virgin Islands"><figcaption>We deployed a hydrophone array and camera system to take long term acoustic and visual observations.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2021/12/DSC_4743.jpg" class="kg-image" alt="Field Trip to St. John, US Virgin Islands"><figcaption>Jessica and Levi running robot ops from a small boat.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2021/12/DSC_4788.jpg" class="kg-image" alt="Field Trip to St. John, US Virgin Islands"><figcaption>Dan, Yogi and Aran getting ready for a dive.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2021/12/DSC_5271.jpg" class="kg-image" alt="Field Trip to St. John, US Virgin Islands"><figcaption>Nathan and Yogi deploying the robot from a boat, with John in the water.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2021/12/20211023_134850.jpg" class="kg-image" alt="Field Trip to St. John, US Virgin Islands"><figcaption>Nathan deploying the robot off the dock.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2021/12/20211023_131925.jpg" class="kg-image" alt="Field Trip to St. John, US Virgin Islands"><figcaption>John and Seth ballasting the VASE sensor.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2021/12/DSC_5091-1.jpg" class="kg-image" alt="Field Trip to St. John, US Virgin Islands"><figcaption>Stewart and Nathan planning their WARPAUV and WARPASV deployment.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2021/12/DSC01501.JPG" class="kg-image" alt="Field Trip to St. 
John, US Virgin Islands"><figcaption>WARPLab team</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2021/12/DSC01576.JPG" class="kg-image" alt="Field Trip to St. John, US Virgin Islands"><figcaption>The full team: (top) Nathan McGuire, John San Soucie, Stewart Jamieson, Levi Cai. (botton) Dan Yang, Nadege Aoki, Cynthia Becker, Jessica Todd, Aran Mooney, Yogi Girdhar, Justin Ossolinski</figcaption></figure>]]></content:encoded></item><item><title><![CDATA[Marine Animal Tracking Results]]></title><description><![CDATA[<figure class="kg-card kg-image-card"><img src="http://warp.whoi.edu/content/images/2021/12/image.png" class="kg-image"></figure><p>We presented a novel marine animal tracking dataset with evaluations of semi-supervised trackers as a poster, "Evaluation of Semi-supervised Methods for In-situ, Visual Tracking of Marine Animals", at the CV4Animals Workshop at CVPR 2021. </p><p>The workshop details are here: <a href="https://www.cv4animals.com/paper">https://www.cv4animals.com/paper</a></p><p>The full poster is here: <a href="https://drive.google.com/file/d/1lXYhvzBBV5cjCabB9OGMUzece_GnE0Hp/view">https:</a></p>]]></description><link>https://warp.whoi.edu/marine-animal-tracking-results/</link><guid isPermaLink="false">60d214f68fcddf0786de3b22</guid><dc:creator><![CDATA[Levi Cai]]></dc:creator><pubDate>Tue, 22 Jun 2021 17:58:14 GMT</pubDate><media:content url="http://warp.whoi.edu/content/images/2021/12/cv4animals_poster_2021.PNG" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="http://warp.whoi.edu/content/images/2021/12/image.png" class="kg-image" alt="Marine Animal Tracking Results"></figure><img src="http://warp.whoi.edu/content/images/2021/12/cv4animals_poster_2021.PNG" alt="Marine Animal Tracking Results"><p>We presented a novel marine animal tracking dataset with evaluations of semi-supervised trackers as a poster, "Evaluation of Semi-supervised Methods for In-situ, Visual Tracking of Marine Animals", at the CV4Animals Workshop at CVPR 2021. </p><p>The workshop details are here: <a href="https://www.cv4animals.com/paper">https://www.cv4animals.com/paper</a></p><p>The full poster is here: <a href="https://drive.google.com/file/d/1lXYhvzBBV5cjCabB9OGMUzece_GnE0Hp/view">https://drive.google.com/file/d/1lXYhvzBBV5cjCabB9OGMUzece_GnE0Hp/view</a></p><p>Results of the trackers can be found here: <a href="https://drive.google.com/drive/folders/1Or2B4Yv6uwMgAFzvOHCA9giwTodXM977">https://drive.google.com/drive/folders/1Or2B4Yv6uwMgAFzvOHCA9giwTodXM977</a></p>]]></content:encoded></item><item><title><![CDATA[Gaussian-Dirichlet Random Fields for Inference over High Dimensional Categorical Observations]]></title><description><![CDATA[<figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2020/03/gdrf_overview.png" class="kg-image"><figcaption>Categorical observations, such as observations of phytoplankton taxa, are factored into the product of a community model and spatiotemporal distributions for each community. 
The community model, which is the distribution of taxa in each community, is modeled with a Dirichlet prior, and the spatial distribution of each community is modeled</figcaption></figure>]]></description><link>https://warp.whoi.edu/gaussian-dirichlet-random-fields/</link><guid isPermaLink="false">5e79e826bcddc10754c56a0a</guid><category><![CDATA[Co-Robotic Exploration]]></category><category><![CDATA[papers]]></category><category><![CDATA[ecology]]></category><category><![CDATA[topic model]]></category><dc:creator><![CDATA[Yogi Girdhar]]></dc:creator><pubDate>Tue, 24 Mar 2020 11:13:14 GMT</pubDate><media:content url="http://warp.whoi.edu/content/images/2020/03/gdrf_overview-1.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2020/03/gdrf_overview.png" class="kg-image" alt="Gaussian-Dirichlet Random Fields for Inference over High Dimensional Categorical Observations"><figcaption>Categorical observations, such as observations of phytoplankton taxa, are factored into the product of a community model and spatiotemporal distributions for each community. The community model, which is the distribution of taxa in each community, is modeled with a Dirichlet prior, and the spatial distribution of each community is modeled using a Gaussian process.</figcaption></figure><img src="http://warp.whoi.edu/content/images/2020/03/gdrf_overview-1.png" alt="Gaussian-Dirichlet Random Fields for Inference over High Dimensional Categorical Observations"><p>We propose a generative model for the spatio-temporal distribution of high dimensional categorical observations. These are commonly produced by robots equipped with an imaging sensor such as a camera, paired with an image classifier, potentially producing observations over thousands of categories. The proposed approach combines the use of Dirichlet distributions to model sparse co-occurrence relations between the observed categories using a latent variable, and Gaussian processes to model the latent variable's spatio-temporal distribution. Experiments in this paper show that the resulting model is able to efficiently and accurately approximate the temporal distribution of high dimensional categorical measurements such as taxonomic observations of microscopic organisms in the ocean, even in unobserved (held out) locations, far from other samples. This work's primary motivation is to enable deployment of informative path planning techniques over high dimensional categorical fields, which until now have been limited to scalar or low dimensional vector observations.</p><p>San Soucie, J. E., Sosik, H. M., &amp; Girdhar, Y. (2020). Gaussian-Dirichlet Random Fields for Inference over High Dimensional Categorical Observations. 
[To appear in] <em>International Conference on Robotics and Automation (ICRA)</em>.<br>[<a href="https://arxiv.org/abs/2003.12120">ArXiv Preprint</a>]</p>]]></content:encoded></item><item><title><![CDATA[Active Reward Learning for Co-Robotic Vision-Based Exploration in Bandwidth Limited Environments]]></title><description><![CDATA[<figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2020/03/overview_v3.png" class="kg-image"><figcaption>Proposed approach to co-robotic exploration that models the interest of the operator over a low bandwidth communication channel and uses the learned reward model to plan the most rewarding (in terms of interest) robot paths.</figcaption></figure><p>We present a novel POMDP problem formulation for a robot that must autonomously decide where</p>]]></description><link>https://warp.whoi.edu/active-reward-learning-for-co-robotic-vision-based-exploration-in-bandwidth-limited-environments/</link><guid isPermaLink="false">5e79ddfcbcddc10754c569da</guid><category><![CDATA[Co-Robotic Exploration]]></category><category><![CDATA[papers]]></category><dc:creator><![CDATA[Stewart Jamieson]]></dc:creator><pubDate>Tue, 24 Mar 2020 10:52:36 GMT</pubDate><media:content url="http://warp.whoi.edu/content/images/2020/03/overview_v4.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://warp.whoi.edu/content/images/2020/03/overview_v3.png" class="kg-image" alt="Active Reward Learning for Co-Robotic Vision-Based Exploration in Bandwidth Limited Environments"><figcaption>Proposed approach to co-robotic exploration that models the interest of the operator over a low bandwidth communication channel and uses the learned reward model to plan the most rewarding (in terms of interest) robot paths.</figcaption></figure><img src="http://warp.whoi.edu/content/images/2020/03/overview_v4.png" alt="Active Reward Learning for Co-Robotic Vision-Based Exploration in Bandwidth Limited Environments"><p>We present a novel POMDP problem formulation for a robot that must autonomously decide where to go to collect new and scientifically relevant images given a limited ability to communicate with its human operator. From this formulation we derive constraints and design principles for the observation model, reward model, and communication strategy of such a robot, exploring techniques to deal with the very high-dimensional observation space and scarcity of relevant training data. We introduce a novel active reward learning strategy based on making queries to help the robot minimize path ``regret'' online, and evaluate it for suitability in autonomous visual exploration through simulations. We demonstrate that, in some bandwidth-limited environments, this novel regret-based criterion enables the robotic explorer to collect up to 17% more reward per mission than the next-best criterion.</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/NH1G8u2hbEU?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>Jamieson, S., How, J. P., &amp; Girdhar, Y. (2020). Active Reward Learning for Co-Robotic Vision Based Exploration in Bandwidth Limited Environments. 
[To appear in] IEEE International Conference on Robotics and Automation.<br>[<a href="https://arxiv.org/pdf/2003.05016.pdf">PDF</a>]</p>]]></content:encoded></item><item><title><![CDATA[Information-Guided Robotic Maximum Seek-and-Sample in Partially Observable Continuous Environments]]></title><description><![CDATA[<figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/XVUyL3biX-0?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>We present Plume Localization under Uncertainty using Maximum-ValuE information and Search (PLUMES), a planner for localizing and collecting samples at the global maximum of an a priori unknown and partially observable continuous environment. This “maximum seek-and-sample” (MSS) problem is pervasive in the environmental and earth sciences. Experts want to collect</p>]]></description><link>https://warp.whoi.edu/plumes/</link><guid isPermaLink="false">5e0a725fbcddc10754c56969</guid><category><![CDATA[Co-Robotic Exploration]]></category><category><![CDATA[papers]]></category><dc:creator><![CDATA[Yogi Girdhar]]></dc:creator><pubDate>Mon, 30 Dec 2019 22:05:01 GMT</pubDate><media:content url="http://warp.whoi.edu/content/images/2019/12/ipp-asv.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/XVUyL3biX-0?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><img src="http://warp.whoi.edu/content/images/2019/12/ipp-asv.png" alt="Information-Guided Robotic Maximum Seek-and-Sample in Partially Observable Continuous Environments"><p>We present Plume Localization under Uncertainty using Maximum-ValuE information and Search (PLUMES), a planner for localizing and collecting samples at the global maximum of an a priori unknown and partially observable continuous environment. This “maximum seek-and-sample” (MSS) problem is pervasive in the environmental and earth sciences. Experts want to collect scientifically valuable samples at an environmental maximum (e.g., an oil-spill source), but do not have prior knowledge about the phenomenon's distribution. We formulate the MSS problem as a partially-observable Markov decision process (POMDP) with continuous state and observation spaces, and a sparse reward signal. To solve the MSS POMDP, PLUMES uses an information-theoretic reward heuristic with continuous-observation Monte Carlo Tree Search to efficiently localize and sample from the global maximum. In simulation and field experiments, PLUMES collects more scientifically valuable samples than state-of-the-art planners in a diverse set of environments, with various platforms, sensors, and challenging real-world conditions.</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/z9au5VEKS-I?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p><a href="https://doi.org/10.1109/LRA.2019.2929997">https://doi.org/10.1109/LRA.2019.2929997</a></p><!--kg-card-begin: markdown--><p>Bibtex:</p>
<pre><code>@article{Flaspohler2019,
author = {Flaspohler, Genevieve and Preston, Victoria and Michel, Anna Pauline Miranda and Girdhar, Yogesh and Roy, Nicholas},
doi = {10.1109/LRA.2019.2929997},
issn = {2377-3766},
journal = {IEEE Robotics and Automation Letters},
month = {oct},
number = {4},
pages = {3782--3789},
title = {{Information-Guided Robotic Maximum Seek-and-Sample in Partially Observable Continuous Environments}},
url = {https://ieeexplore.ieee.org/document/8767964/},
volume = {4},
year = {2019}}
</code></pre>
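<p>For intuition about the planner's objective, here is a toy, heavily simplified stand-in for the maximum seek-and-sample reward: maintain a Gaussian process belief over the unknown field and favor candidate locations that most often attain the maximum across posterior draws. This is only a crude proxy for the maximum-value information heuristic; PLUMES itself embeds such a reward inside continuous-observation Monte Carlo Tree Search, and every number and name below is illustrative rather than taken from the paper.</p>
<pre><code># Toy maximum seek-and-sample scoring: sample GP posterior draws and count
# how often each candidate location attains the maximum. Requires scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X_obs = rng.uniform(0.0, 10.0, size=(8, 1))      # past sample locations (1D)
y_obs = np.exp(-((X_obs - 7.0) ** 2)).ravel()    # hidden "plume", peak near 7
gp = GaussianProcessRegressor(normalize_y=True).fit(X_obs, y_obs)

candidates = np.linspace(0.0, 10.0, 101).reshape(-1, 1)
draws = gp.sample_y(candidates, n_samples=200, random_state=1)  # (101, 200)
argmax_counts = np.bincount(draws.argmax(axis=0), minlength=101)
best = candidates[argmax_counts.argmax(), 0]     # likeliest location of the max
print(f"next sample location: {best:.2f}")
</code></pre>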
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[ICRA2019: Streaming Scene Maps for Co-Robotic Exploration in Bandwidth Limited Environments]]></title><description><![CDATA[<figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/cBoMlSzsY2Q?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><!--kg-card-begin: markdown--><h3 id="abstract">Abstract</h3>
<p>This work proposes a bandwidth tunable technique for real-time probabilistic scene modeling and mapping to enable co-robotic exploration in communication constrained environments such as the deep sea. The parameters of the system enable the user to characterize the scene complexity represented by the map, which in turn determines the</p>]]></description><link>https://warp.whoi.edu/streaming-scene-maps-for-co-robotic-exploration/</link><guid isPermaLink="false">5dc0712ba7560c6c943dc78c</guid><category><![CDATA[Co-Robotic Exploration]]></category><category><![CDATA[papers]]></category><category><![CDATA[video]]></category><dc:creator><![CDATA[Yogi Girdhar]]></dc:creator><pubDate>Mon, 03 Jun 2019 19:57:38 GMT</pubDate><media:content url="http://warp.whoi.edu/content/images/2019/06/tank_map-1.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/cBoMlSzsY2Q?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><!--kg-card-begin: markdown--><h3 id="abstract">Abstract</h3>
<img src="http://warp.whoi.edu/content/images/2019/06/tank_map-1.png" alt="ICRA2019: Streaming Scene Maps for Co-Robotic Exploration in Bandwidth Limited Environments"><p>This work proposes a bandwidth tunable technique for real-time probabilistic scene modeling and mapping to enable co-robotic exploration in communication constrained environments such as the deep sea. The parameters of the system enable the user to characterize the scene complexity represented by the map, which in turn determines the bandwidth requirements. The approach is demonstrated using an underwater robot that learns an unsupervised scene model of the environment and then uses this scene model to communicate the spatial distribution of various high-level semantic scene constructs to a human operator. Preliminary experiments in an artificially constructed tank environment, as well as simulated missions over a 10m×10m coral reef using real data, show the tunability of the maps to different bandwidth constraints and science interests. To our knowledge, this is the first paper to quantify how the free parameters of the unsupervised scene model impact both the scientific utility of and bandwidth required to communicate the resulting scene model.</p>
<!--kg-card-end: markdown--><p><a href="https://doi.org/10.1109/ICRA.2019.8794132">https://doi.org/10.1109/ICRA.2019.8794132</a></p><p><a href="https://arxiv.org/pdf/1903.03214.pdf">https://arxiv.org/pdf/1903.03214.pdf</a></p>]]></content:encoded></item><item><title><![CDATA[Barbados 2019 Field Trials]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><img src="http://warp.whoi.edu/content/images/2019/01/bellairs_group.JPG" alt="Barbados Sea Trials 2019 Group Photo"></p>
<p>We just successfully completed our 2019 robot field trials at the <a href="https://www.mcgill.ca/bellairs/">Bellairs Research Institute</a> in Holetown, Barbados. The main goals of the trials were to test our new co-operating ASV and AUV robot system and use it to demonstrate and evaluate our scene characterization and hotspot detection</p>]]></description><link>https://warp.whoi.edu/barbados-2019/</link><guid isPermaLink="false">5dc0712ba7560c6c943dc78a</guid><category><![CDATA[Co-Robotic Exploration]]></category><dc:creator><![CDATA[Yogi Girdhar]]></dc:creator><pubDate>Mon, 21 Jan 2019 20:28:49 GMT</pubDate><media:content url="http://warp.whoi.edu/content/images/2019/01/RedBrov-barbados.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="http://warp.whoi.edu/content/images/2019/01/RedBrov-barbados.jpg" alt="Barbados 2019 Field Trials"><p><img src="http://warp.whoi.edu/content/images/2019/01/bellairs_group.JPG" alt="Barbados 2019 Field Trials"></p>
<p>We just successfully completed our 2019 robot field trials at the <a href="https://www.mcgill.ca/bellairs/">Bellairs Research Institute</a> in Holetown, Barbados. The main goals of the trials were to test our new co-operating ASV and AUV robot system and use it to demonstrate and evaluate our scene characterization and hotspot detection algorithms.</p>
<p>Here are a few select photos from the trip:</p>
<p><a href="https://photos.app.goo.gl/f75Ss9WFzNzvMzNt8">https://photos.app.goo.gl/f75Ss9WFzNzvMzNt8</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[IROS2018: Approximate Distributed Spatiotemporal Topic Models for Multi-Robot Terrain Characterization.]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Our <a href="http://web.mit.edu/kdoherty/www/docs/kdoherty_iros2018.pdf">paper</a> on enabling distributed learning in bandwidth limited environments was one of the finalists<br>
for the best paper award at IROS 2018 (6 finalists among 1,254 accepted papers).</p>
<p>Abstract:<br>
Unsupervised learning techniques, such as Bayesian topic models, are capable of discovering latent structure directly from raw data. These unsupervised</p>]]></description><link>https://warp.whoi.edu/iros2018/</link><guid isPermaLink="false">5dc0712ba7560c6c943dc78d</guid><category><![CDATA[Co-Robotic Exploration]]></category><category><![CDATA[papers]]></category><dc:creator><![CDATA[Yogi Girdhar]]></dc:creator><pubDate>Sat, 03 Nov 2018 20:21:00 GMT</pubDate><media:content url="http://warp.whoi.edu/content/images/2019/06/co-multi-robots-2.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="http://warp.whoi.edu/content/images/2019/06/co-multi-robots-2.png" alt="IROS2018: Approximate Distributed Spatiotemporal Topic Models for Multi-Robot Terrain Characterization."><p>Our <a href="http://web.mit.edu/kdoherty/www/docs/kdoherty_iros2018.pdf">paper</a> on enabling distributed learning in bandwidth limited environments was one of the finalists<br>
for the best paper award at IROS 2018 (6 finalists among 1,254 accepted papers).</p>
<p>Abstract:<br>
Unsupervised learning techniques, such as Bayesian topic models, are capable of discovering latent structure directly from raw data. These unsupervised models can endow robots with the ability to learn from their observations without human supervision, and then use the learned models for tasks such as autonomous exploration, adaptive sampling, or surveillance. This paper extends single-robot topic models to the domain of multiple robots. The main difficulty of this extension lies in achieving and maintaining global consensus among the unsupervised models learned locally by each robot. This is especially challenging for multi-robot teams operating in communication-constrained environments, such as those encountered by marine robots.</p>
<p>This paper presents a novel approach for multi-robot distributed learning in which each robot maintains a local topic model to categorize its observations and model parameters are shared to achieve global consensus. We apply a combinatorial optimization procedure that combines local robot topic distributions into a globally consistent model based on topic similarity, which we find mitigates topic drift when compared to a baseline approach that matches topics naively. We evaluate our methods experimentally by demonstrating multi-robot underwater terrain characterization using simulated missions on real seabed imagery. Our proposed method achieves similar model quality under bandwidth constraints to that achieved by models that continuously communicate, despite requiring less than one percent of the data transmission needed for continuous communication.</p>
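<p>The consensus step can be pictured as a small matching problem: each robot summarizes a topic as a distribution over words (visual features), and topics from different robots are aligned by similarity before merging, rather than matched naively by index. The sketch below uses Hellinger distance and the Hungarian algorithm as one plausible instantiation of such similarity-based matching; the paper's exact similarity measure and optimization may differ.</p>
<pre><code># Align robot B's topics to robot A's by similarity of their word
# distributions, instead of matching topic i to topic i naively.
import numpy as np
from scipy.optimize import linear_sum_assignment

def hellinger(p, q):
    # distance between two discrete distributions, 0 when identical
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def match_topics(phi_a, phi_b):
    # phi_a, phi_b: (K, V) arrays; each row is a per-topic word distribution
    K = phi_a.shape[0]
    cost = np.array([[hellinger(phi_a[i], phi_b[j]) for j in range(K)]
                     for i in range(K)])
    rows, cols = linear_sum_assignment(cost)  # min-cost one-to-one matching
    return cols  # cols[i] is robot B's topic matched to robot A's topic i

# Tiny demo: robot B has the same topics as robot A but in permuted order.
phi_a = np.random.dirichlet(np.ones(50), size=4)  # 4 topics over 50 words
phi_b = phi_a[[2, 0, 3, 1]]
print(match_topics(phi_a, phi_b))                 # recovers [2 0 3 1]
</code></pre>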
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>