Distributed Modeling and Anomaly Detection in Mobile and Sensor Networks
In this project, we developed algorithms for distributed sensor data
modeling and anomaly detection in systems of sensor networks and autonomous mobile
robots, using machine learning techniques. Data modeling in distributed systems is
critically important for determining the "normal" operation mode of the system. Being
able to model the expected sensor signatures for typical operations greatly simplifies the
human designer's job in the practical application of new sensor and mobile robot
systems; by enabling the system to autonomously characterize the expected sensor data
streams, the system can learn the important features of its environment to monitor. This,
in turn, allows the system to perform autonomous anomaly detection to recognize when
unexpected sensor signals are detected. This type of distributed sensor modeling and
anomaly detection can be used in a wide variety of sensor network and mobile robot
applications, such as detecting the presence of intruders, detecting sensor failures,
detecting anomalous human motion patterns, detecting unexpected chemical signatures,
and so forth. The advantage of this approach is that the human designer does not have to
characterize the anomalous signatures in advance. Instead, the system of sensor nodes
and mobile robots can learn this characterization autonomously, for quick application to
new domains. Our work focuses on the research aspect of these issues.
The techniques being developed employ
distributed statistical machine learning approaches for sensor data modeling and anomaly
detection. The algorithms are being designed, developed, evaluated, and validated
experimentally in laboratory application scenarios, such as
intruder detection, or environmental coverage and
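The core idea of learning the "normal" operation mode and flagging deviations can be illustrated with a minimal sketch. This is not the project's algorithm, just an assumed single-sensor Gaussian model (Welford's online update) with a standard-deviation threshold; the class and threshold are hypothetical.

```python
import math

class SensorModel:
    """Running Gaussian model of one sensor's "normal" readings (illustrative sketch)."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations (Welford's online algorithm)

    def update(self, x):
        # Incrementally learn the normal operating distribution from the data stream.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x, threshold=3.0):
        # Flag readings more than `threshold` standard deviations from the learned mean.
        if self.n < 2:
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(x - self.mean) / std > threshold

model = SensorModel()
for reading in [20.1, 19.8, 20.3, 20.0, 19.9, 20.2]:
    model.update(reading)

print(model.is_anomalous(20.1))  # a typical reading
print(model.is_anomalous(35.0))  # an unexpected signature
```

In a distributed setting, each node would maintain such a model for its own sensors and exchange only summary statistics, which is what keeps the approach lightweight.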
Adaptive Causal Models for Fault Diagnosis and Recovery in Multi-Robot Teams (LeaF)
Because all possible faults cannot be anticipated in advance, an adaptive method is needed to enable the robot
team to use its experience to update and extend its causal model, so that the
team, over time, can better recover from faults when they occur.
We are developing a case-based learning approach, called LeaF (for Learning-based
Fault diagnosis), that enables robot team members to
adapt their causal models, learning from faults encountered during
the mission and improving their ability to diagnose and recover from those
faults over time.
As part of this project, we also explored the development of metrics for measuring
fault tolerance in multi-robot teams.
We developed learning techniques for automatically adapting causal models
for fault diagnosis and recovery
in complex multi-robot teams. We believe that a causal model approach is effective
for anticipating and recovering from many types of robot team errors.
We have implemented one of the first full applications of causal models to
a complex multi-robot team. However,
because of the significant number of possible failure modes in a complex
multi-robot application, and the difficulty in anticipating all possible
failures in advance, our empirical results show that
one cannot guarantee the generation of a complete a priori
causal model that identifies and specifies all faults that may occur in the system.
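The case-based idea can be sketched in a few lines: match observed fault symptoms against a library of past cases, and when no case matches, store the new fault so it can be diagnosed next time. All names, the Jaccard similarity measure, and the 0.5 threshold are hypothetical illustrations, not the published LeaF formulation.

```python
class FaultCase:
    """One remembered fault: its symptoms, diagnosis, and recovery action."""

    def __init__(self, symptoms, diagnosis, recovery):
        self.symptoms = set(symptoms)
        self.diagnosis = diagnosis
        self.recovery = recovery

class CaseLibrary:
    """Case-based adaptation sketch: unanticipated faults become new cases."""

    def __init__(self):
        self.cases = []

    def diagnose(self, observed):
        # Return the stored case most similar to the observed symptoms, if any.
        observed = set(observed)
        best, best_score = None, 0.0
        for case in self.cases:
            # Jaccard similarity between observed and stored symptom sets.
            score = len(observed & case.symptoms) / len(observed | case.symptoms)
            if score > best_score:
                best, best_score = case, score
        return best if best_score >= 0.5 else None

    def learn(self, observed, diagnosis, recovery):
        # Extend the model with a fault encountered during the mission.
        self.cases.append(FaultCase(observed, diagnosis, recovery))

lib = CaseLibrary()
print(lib.diagnose(["no_odometry", "motor_stall"]))  # unknown fault at first
lib.learn(["no_odometry", "motor_stall"], "wheel jam", "reverse_and_retry")
match = lib.diagnose(["no_odometry", "motor_stall", "high_current"])
```

The second `diagnose` call succeeds even though the symptom set only partially overlaps the stored case, which is the sense in which experience improves diagnosis over time.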
Constructivist Learning using ASyMTRe in Multi-Robot Teams
Constructivist learning is the process of actively building new skills on top of previously acquired ones.
This research project extends the state of the art for constructivist learning in
multi-robot teams by developing new, computationally efficient techniques that allow robot team
members to continually improve their skills over time. Current approaches to constructivist
learning in robotics find correlations between existing low-level robot actions and a desired
behavior. However, because these existing approaches begin with such a low level of action
abstraction, they are extremely computationally intensive. Our new constructivist learning
approach begins at a higher level of abstraction -- the sensori-motor schema -- which should
enable much more computationally efficient learning. Our approach builds upon our prior work,
called ASyMTRe, that forms multi-robot coalitions by automatically combining sensori-motor
schema building blocks to solve the task at hand. This work adds an important
learning component allowing robot team members to continually improve their skills by
"chunking" existing low-level schema building blocks into efficient higher-level task solutions.
This new approach will provide important new lifelong learning capabilities to multi-robot
teams, thus significantly facilitating their use in real-world applications, such as search and
rescue, security, mining, hazardous waste cleanup, industrial and household maintenance,
automated manufacturing, and construction. We also intend to show that the proposed
techniques are applicable to other types of robotic systems, including humanoid and service
robots, and thus have a broader impact on the robotics field as a whole.
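The "chunking" operation described above can be sketched as composing schemas so that internally satisfied information flows disappear, leaving a single higher-level building block. The `Schema` representation and the schema names below are hypothetical illustrations, not the ASyMTRe formulation itself.

```python
class Schema:
    """A sensori-motor schema: a named building block with typed inputs and outputs."""

    def __init__(self, name, inputs, outputs):
        self.name = name
        self.inputs = set(inputs)
        self.outputs = set(outputs)

def chunk(schemas, name):
    """Compose an ordered sequence of schemas into one higher-level schema.

    Inputs produced earlier in the sequence are satisfied internally, so the
    chunk exposes only the external sensing it still needs (illustrative
    sketch under assumed semantics).
    """
    inputs, provided = set(), set()
    for s in schemas:
        inputs |= (s.inputs - provided)  # external inputs still required
        provided |= s.outputs
    return Schema(name, inputs, provided)

detect = Schema("detect_goal", {"camera"}, {"goal_bearing"})
steer = Schema("steer", {"goal_bearing", "odometry"}, {"motor_cmd"})
go_to_goal = chunk([detect, steer], "go_to_goal")
print(sorted(go_to_goal.inputs))  # external sensing the chunk still needs
```

Because later planning can treat `go_to_goal` as a single unit rather than re-deriving the `detect_goal`/`steer` connection, search over chunks is cheaper than search over low-level actions, which is the computational argument made above.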
Coalescent Multi-Robot Teaming through Automated Task Solution Synthesis (ASyMTRe)
This project dealt with the development of a methodology for automatically synthesizing task
solutions for heterogeneous multi-robot teams.
In contrast to prior approaches that require a manual pre-definition of
how the robot team will accomplish its task (while perhaps automating
who performs which task), our approach automates
both the how and the who to generate task solution approaches
that were not explicitly defined by the designer a priori.
Our approach, which we call ASyMTRe,
is inspired by the theory of information invariants. ASyMTRe is
based on mapping environmental, perceptual, and motor control
schemas to the required flow of information
through the multi-robot system, automatically
reconfiguring the connections of schemas within and across robots to synthesize valid and
efficient multi-robot behaviors for accomplishing the team objectives.
We have developed a centralized anytime ASyMTRe configuration algorithm, proving
that the configuration algorithm is correct, and formally addressing issues of completeness
and optimality. Additionally, we have developed a distributed version of ASyMTRe, called
ASyMTRe-D, which uses multi-robot communication to enable distributed teaming.
We have validated the centralized approach by
applying the ASyMTRe methodology to two different teaming scenarios:
multi-robot transportation, and
multi-robot box pushing. We are validating the distributed ASyMTRe-D
implementation in the multi-robot
transportation task, showing how this version trades
solution quality against robustness compared to the centralized approach.
The advantages of this new approach to coalescent teaming
are that it: (1) enables the robot team to
synthesize new task solutions that use fundamentally different combinations
of robot behaviors for different team compositions, and
(2) provides a general mechanism for sharing sensory information across
networked robots, so that more capable
robots can assist less capable robots in accomplishing their objectives.
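The flow-of-information idea can be sketched as a greedy configuration search: activate any schema whose required information is already available somewhere in the team, and repeat until the goal information is produced. This is a hypothetical simplification for illustration, not the published anytime algorithm, and the robot and schema names are invented.

```python
def configure(robots, goal_info):
    """Greedy sketch of schema configuration in the spirit of ASyMTRe.

    `robots` maps robot -> {schema: (needs, provides)} over information types.
    Information produced anywhere is assumed shareable across networked robots.
    Returns the activated (robot, schema) pairs, or None if the goal
    information cannot be produced by this team.
    """
    available, plan = set(), []
    progress = True
    while not goal_info <= available and progress:
        progress = False
        for robot, schemas in robots.items():
            for name, (needs, provides) in schemas.items():
                if (robot, name) not in plan and set(needs) <= available:
                    plan.append((robot, name))
                    available |= set(provides)
                    progress = True
    return plan if goal_info <= available else None

# Hypothetical team: a sensor-rich leader localizes a sensor-limited robot.
robots = {
    "leader": {
        "laser_localize": ([], ["leader_pose"]),
        "observe_teammate": (["leader_pose"], ["simple_pose"]),
    },
    "simple": {
        "move_to_goal": (["simple_pose"], ["simple_at_goal"]),
    },
}
plan = configure(robots, {"simple_at_goal"})
print(plan)
```

Note that the solution found here was never written down by a designer: the leader's perception is wired to the simple robot's motion schema purely because the information flow requires it, which is the sense in which both the "how" and the "who" are automated.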
Autonomous Assistive Navigation in Heterogeneous Robot Teams
This research is aimed at the development of autonomous behaviors
for tightly-coupled cooperation in heterogeneous robot teams, specifically
for the task of navigation assistance. These cooperative behaviors
enable capable, sensor-rich "leader"
robots to assist in the navigation of sensor-limited (``simple'')
robots that have no onboard capabilities for obstacle avoidance or
localization, and only minimal capabilities for kin recognition.
Because of the navigation limitations of the resource-bounded robots,
they are unable to autonomously disperse themselves or
move to planned sensor deployment positions independently.
To address this challenge, we have developed
cooperative behaviors for heterogeneous robots that enable the
successful deployment of sensor-limited robots by assistance from
more capable leader robots. These heterogeneous
cooperative behaviors are quite complex, and
involve the combination of
several behavior components,
including vision-based marker detection,
autonomous teleoperation, color marker following in robot chains, laser-based
localization, map-based path planning, and ad hoc mobile networking.
We have validated our approach in extensive testing on the physical robots.
To our knowledge, this is the most complex heterogeneous robot team cooperative
task ever attempted on physical robots.
We consider it a significant
success to have achieved such a high degree of system effectiveness, given the
complexity of the overall heterogeneous system.
Distributed Mobile Acoustic Sensor Networks
This research is focused on the development and deployment
of distributed mobile acoustic sensor networks.
The approach assumes the indoor environment has been previously mapped and that
the sensor nodes know their position in the map. The targets are localized in
the sensor network based upon local maxima of the acoustic volume. The
localization information is reported to an Interceptor robot, which
utilizes a dual wavefront path planner to move from its current location to a
location that is within visibility range of a target. This approach
has been rigorously tested on physical robots.
To our knowledge, this is the first implementation of a
multi-robot system that combines the use of an acoustic sensor net
for target detection with an
Interceptor robot that can efficiently
reach the moving position of the detected target in indoor environments.
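The local-maxima localization step described above can be sketched as follows: each node reports its acoustic volume, and a target is declared near any node louder than all of its neighbors within some radius. The data layout, node names, and radius are hypothetical illustrations.

```python
def local_maxima(nodes, radius):
    """Estimate target positions as nodes whose acoustic volume is a local
    maximum among neighbors within `radius` (illustrative sketch).

    `nodes` maps node id -> ((x, y), volume).
    """
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    targets = []
    for nid, (pos, vol) in nodes.items():
        neighbor_vols = [v for other, (p, v) in nodes.items()
                         if other != nid and dist2(pos, p) <= radius ** 2]
        if neighbor_vols and vol > max(neighbor_vols):
            targets.append(pos)  # loudest node in its neighborhood
    return targets

nodes = {
    "a": ((0.0, 0.0), 0.2),
    "b": ((1.0, 0.0), 0.9),  # loudest in its neighborhood: target nearby
    "c": ((2.0, 0.0), 0.4),
}
print(local_maxima(nodes, radius=1.5))
```

In the full system, each reported position would then be handed to the Interceptor robot's path planner; the sketch covers only the detection side.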
Heterogeneous Swarm Robotics for Search Applications
The goal of this DARPA
project was to demonstrate large numbers (100+) of physical heterogeneous robots cooperating
to solve indoor search applications. This project was a joint effort between
Science Applications International
Corporation (SAIC), The University of Tennessee,
Telcordia Technologies, and the
University of Southern California. The project
developed and utilized a number of novel collaborative control algorithms to enable this
robot team to explore an unknown building (one floor), find objects of interest, and
"protect" those objects over a 24-hour period, autonomously returning for
battery recharging when necessary. All robot actions were highly autonomous and
were monitored by a human operator using a sophisticated user interface at the building entrance.
Last updated: November 2010