
  Active Perception based Formation Control for Multiple Aerial Vehicles

Tallamraju, R., Price, E., Ludwig, R., Karlapalem, K., Bülthoff, H. H., Black, M. J., & Ahmad, A. (2019). Active Perception based Formation Control for Multiple Aerial Vehicles. IEEE Robotics and Automation Letters, 4(4), 4491-4498. doi:10.1109/LRA.2019.2932570.

Item Permalink: http://hdl.handle.net/21.11116/0000-0002-E14B-C Version Permalink: http://hdl.handle.net/21.11116/0000-0005-D4E4-A
Genre: Journal Article

Files

Locators
Description: -

Creators

Creators:
Tallamraju, R. (1), Author
Price, E. (1), Author
Ludwig, R. (1), Author
Karlapalem, K., Author
Bülthoff, H. H. (2, 3), Author
Black, M. J. (1), Author
Ahmad, A. (1), Author
Affiliations:
(1) Dept. Perceiving Systems, Max Planck Institute for Intelligent Systems, Max Planck Society, ou_1497642
(2) Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797
(3) Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Free keywords: -
Abstract: Autonomous motion capture (mocap) systems for outdoor scenarios involving flying or mobile cameras rely on i) a robotic front-end to track and follow a human subject in real time while they perform physical activities, and ii) an algorithmic back-end that estimates full-body human pose and shape from the saved videos. In this paper we present a novel front-end for our aerial mocap system that consists of multiple micro aerial vehicles (MAVs) with only on-board cameras and computation. In previous work, we presented an approach for cooperative detection and tracking (CDT) of a subject using multiple MAVs. However, it did not ensure optimal view-point configurations of the MAVs to minimize the uncertainty in the person's cooperatively tracked 3D position estimate. In this article we introduce an active approach for CDT. In contrast to cooperatively tracking only the 3D positions of the person, the MAVs can now actively compute optimal local motion plans, resulting in optimal view-point configurations, which minimize the uncertainty in the tracked estimate. We achieve this by decoupling the goal of active tracking into a convex quadratic objective and non-convex constraints corresponding to angular configurations of the MAVs w.r.t. the person. We derive it using Gaussian observation model assumptions within the CDT algorithm. We also show how we embed all the non-convex constraints, including those for dynamic and static obstacle avoidance, as external control inputs in the MPC dynamics. Multiple real robot experiments and comparisons involving 3 MAVs in several challenging scenarios are presented (video link: this https URL). Extensive simulation results demonstrate the scalability and robustness of our approach. ROS-based source code is also provided.
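The decoupling described in the abstract (a convex quadratic tracking objective, with non-convex terms such as obstacle avoidance entering the dynamics as external control inputs) can be illustrated with a toy sketch. This is not the paper's implementation: the single-MAV setup, the gradient-style controller standing in for the MPC solve, and names like `repulsive_input` and `mpc_step` are all assumptions made for illustration.

```python
import numpy as np

def repulsive_input(pos, obstacle, radius=1.0, gain=2.0):
    """Non-convex avoidance term, applied as an external input (assumed form):
    a repulsive push that is active only inside the obstacle's radius."""
    d = pos - obstacle
    dist = np.linalg.norm(d)
    if dist >= radius or dist == 0.0:
        return np.zeros(2)
    return gain * (radius - dist) * d / dist

def mpc_step(pos, target, obstacle, dt=0.1):
    """One receding-horizon-style step: the minimizing direction of the convex
    quadratic cost ||pos - target||^2, plus the external avoidance input added
    directly into the (single-integrator) dynamics."""
    u_track = target - pos               # gradient of the quadratic objective
    u_ext = repulsive_input(pos, obstacle)
    return pos + dt * (u_track + u_ext)

pos = np.array([0.0, 0.0])
target = np.array([5.0, 0.0])
obstacle = np.array([2.5, 0.05])
for _ in range(200):
    pos = mpc_step(pos, target, obstacle)
print(np.round(pos, 2))  # final position near the target, having skirted the obstacle
```

The point of the split is that the convex part stays cheap to optimize, while the non-convex avoidance term never enters the objective; it only perturbs the dynamics, which mirrors (in a much simplified form) how the abstract describes embedding such constraints as external control inputs.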

Details

Language(s):
Dates: 2019-01 / 2019-10
 Publication Status: Published in print
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Method: -
 Identifiers: DOI: 10.1109/LRA.2019.2932570
 Degree: -

Source 1

Title: IEEE Robotics and Automation Letters
Source Genre: Journal
 Creator(s):
Affiliations:
Publ. Info: New York, NY : IEEE
Pages: -
Volume / Issue: 4 (4)
Sequence Number: -
Start / End Page: 4491 - 4498
Identifier: ISSN: 2377-3766
CoNE: https://pure.mpg.de/cone/journals/resource/23773766