IEEE Robotics and Automation Society

Texas A&M Robotics Symposium


Program

Talks will be held in room 2005 of the Emerging Technologies Building (ETB).


Wednesday, Jan 21

4:10-5:10 Cyber-Physical Systems with Human in the Loop
Ruzena Bajcsy, University of California, Berkeley
5:15-6:00 Reception, ETB Atrium

Thursday, Jan 22

8:30 Registration & Coffee
8:55-9:00 Welcome and Opening Remarks
Nancy Amato, Texas A&M University
9:00-10:20 Session 1, Multi-Agent Systems
Chair: Sam Rodriguez, Texas A&M University
New challenges in multi-robot task allocation
Maria Gini, University of Minnesota
Multi-robot Cooperative Tracking of Dynamic Ocean Plume
Yi Guo, Stevens Institute of Technology
Taming the Swarm
Radhika Nagpal, Harvard University
Human Supervisory Control of Robotic Swarms
Katia Sycara, Carnegie Mellon University
10:20-10:40 Break
10:40-12:00 Session 2, Medical Robotics
Chair: Shawna Thomas, Texas A&M University
Medical robotics, dealing with human-robot interaction
Alicia Casals, Technical University of Catalonia
Robot-Assisted Therapy for Children with Cerebral Palsy
Ayanna Howard, Georgia Institute of Technology
Simulation-based Joint Estimation of Body Deformation and Elasticity Parameters for Healthcare Robotics
Ming Lin, University of North Carolina at Chapel Hill
iDental: A Simulator for Dental Skill Training
Yuru Zhang, Beihang University
12:00-1:30 Lunch
1:30-2:30 Session 3, Robotics Today
Chair: Dezhen Song, Texas A&M University
Haptics: Engineering Touch
Allison Okamura, Stanford University
Improving the performance of teleoperated ground robots with communication delays
Dawn Tilbury, University of Michigan
Autonomous Robotic Manipulation
Jing Xiao, University of North Carolina at Charlotte
2:30-2:50 Break
2:50-3:50 Session 4, Robots in the Real World
Chair: Marco Morales, Instituto Tecnológico Autónomo de México (ITAM)
Innovations in Human-Technology Interaction for Blind and Visually Impaired People
Bernardine Dias, Carnegie Mellon University
Semantic Parsing for Robot Perception
Jana Kosecka, George Mason University
Metrics for Unmanned Ground and Aerial Vehicles in Unstructured Environments
Robin Murphy, Texas A&M University
3:50-4:10 Break
4:10-5:10 Session 5, Human Robot Cooperation
Chair: Dylan Shell, Texas A&M University
Towards Peer-to-Peer Human-Robot Teaming
Lynne Parker, University of Tennessee
Teaching robots to help humans with clothing
Carme Torras, Institut de Robòtica i Informàtica Industrial (CSIC-UPC)
Symbiotic Autonomous Mobile Service Robots
Manuela Veloso, Carnegie Mellon University

Abstracts and Biographies

Ruzena Bajcsy

University of California, Berkeley
Cyber-Physical Systems with Human in the Loop

Abstract: We are interested in the dynamic interaction of physical systems and humans. Our approach is to model the kinematics and dynamics of human activity as people interact with semi-autonomous systems. We ask when the human should be in control and when it is appropriate to leave control to the autonomous system; in other words, how should they cooperate? We use motion capture, stereo vision and body sensors (accelerometers, EMG) as observables for modeling human activity. Since the human is a complex kinematic system, we identify the most informative joints for a given activity. In addition to these joints, we consider the velocity vector at each joint, as well as acceleration, muscle strength and torque, to identify the most informative moving joints. This methodology enables us to segment human activity in a natural, non-ad-hoc way and to model the resulting movement primitives as linear continuous systems (modes), which are then connected by switching mechanisms into a hybrid system for a given activity and/or interaction.
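
In generic terms (our illustrative notation, not taken from the talk), a hybrid model of this kind can be written as a switched linear system,

    \dot{x}(t) = A_{\sigma(t)} x(t) + B_{\sigma(t)} u(t),  \sigma(t) \in \{1, \dots, M\},

where x(t) stacks joint positions and velocities, each mode (A_i, B_i) corresponds to one movement primitive, and the switching signal \sigma selects the active primitive over the course of the activity or interaction.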

The theoretical findings will be documented by two applications: one modeling the driver and the car and the other observing the elderly exercising and the coach (automated or human) intervening as needed.

Biography: Dr. Ruzena Bajcsy (“buy chee”) was appointed Director of CITRIS and professor in the EECS department at the University of California, Berkeley on November 1, 2001. Prior to coming to Berkeley, she was Assistant Director of the Computer Information Science and Engineering Directorate (CISE) between December 1, 1998 and September 1, 2001. As head of the National Science Foundation’s CISE directorate, Dr. Bajcsy managed a $500 million annual budget. She came to the NSF from the University of Pennsylvania, where she had been a professor of computer science and engineering since 1972. In 2004 she became CITRIS director emeritus, and she is now a full-time NEC Distinguished Professor of EECS.

Dr. Bajcsy is a pioneering researcher in machine perception, robotics and artificial intelligence. She is an NEC Distinguished Professor in the Electrical Engineering and Computer Science Department at Berkeley. She was also Director of the University of Pennsylvania’s General Robotics and Active Sensory Perception Laboratory, which she founded in 1978.

Dr. Bajcsy has done seminal research in the areas of human-centered computer control, cognitive science, robotics, computerized radiological/medical image processing and artificial vision. She is highly regarded not only for her significant research contributions, but also for her leadership in the creation of a world-class robotics laboratory, recognized worldwide as a premier research center. She is a member of the National Academy of Engineering, as well as the Institute of Medicine. She received the Franklin Medal in 2009 and has been a member of the American Philosophical Society, established by Benjamin Franklin, since 2005. She has also been a member of the American Academy of Arts and Sciences since 1998. She is especially known for her wide-ranging, broad outlook in the field and her cross-disciplinary talent and leadership in successfully bridging such diverse areas as robotics and artificial intelligence, engineering and cognitive science.

Dr. Bajcsy received her master’s and Ph.D. degrees in electrical engineering from Slovak Technical University in 1957 and 1967, respectively. She received a Ph.D. in computer science in 1972 from Stanford University, and subsequently taught and did research in Penn’s Department of Computer and Information Science. She began as an assistant professor and within 13 years became chair of the department. Prior to her work at the University of Pennsylvania, she taught during the 1950s and 1960s as an instructor and assistant professor in the Department of Mathematics and Department of Computer Science at Slovak Technical University in Bratislava. She has served as advisor to more than 50 Ph.D. recipients. In 2001 she received honorary doctorates from the University of Ljubljana in Slovenia and from Lehigh University, and she received the ACM Allen Newell Award. In 2012 she received honorary degrees from the University of Pennsylvania in Philadelphia and from the Royal Institute of Technology (KTH) in Stockholm.



Alicia Casals

Technical University of Catalonia
Medical robotics, dealing with human-robot interaction

Abstract: One of the challenges in medical robotics in general, and in rehabilitation and surgery in particular, is the need to continuously adapt to the user. This adaptation is not limited to adjusting a position in space, as in industrial robotics, but extends to adjusting position, shape and time, and even adapting to the patient’s state. Adaptation in position is necessary because the user’s position, or that of a body part, cannot be predefined with enough precision to explicitly preprogram a trajectory. The requirement of adapting to the user’s shape is due to differences in anatomy, although these can be known in advance via 3D imaging; and time adjustments are required since anatomic forms are not static and can change temporarily due to volitional, reflex or physiological movements. The robotic system should also adapt to the physiological state of the user. In this context, our research is mainly oriented to surgery and rehabilitation. In surgery, we provide assistance to teleoperated systems that allow robotizing minimally invasive surgical procedures which, due to their complexity, still cannot be performed by autonomous robots. One such aiding system is the visualization of the anatomic working space within the robot working space, which allows foreseeing and avoiding possible dysfunctions that could occur during an intervention; a second aid is the compensation of the beating-heart movement from image analysis. In rehabilitation, we work on the continuous adaptation of the joint impedances as a function of the state of the user and the sequence of the movement.
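
As a minimal sketch of what adapting joint impedances can mean (a textbook impedance-control form we assume for illustration, not necessarily the controller used in this work), the commanded joint torque can take the form

    \tau = K(s)(q_d - q) + D(s)(\dot{q}_d - \dot{q}),

where q_d and q are the desired and measured joint positions, and the stiffness K(s) and damping D(s) are scheduled on the user's state s and the phase of the movement.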

Biography: Alicia Casals has a background in Electrical and Electronic Engineering and a PhD in Computer Vision. She is a professor at the Technical University of Catalonia (UPC), in the Automatic Control and Computer Engineering Department. She is currently leading the research group on Robotics of the Institute for Biomedical Engineering of Catalonia (IBEC). Her research field is robotic systems and control strategies for rehabilitation, assistance and surgical applications. Among other responsibilities, she has been coordinator of the Education and Training key area within EURON, the European Robotics Network, and Vice President for Membership of IEEE-RAS, and she is at present Vice President of the EMBS Technical Committee on Biorobotics. Since 2007, Prof. Casals has been a member of the Institut d’Estudis Catalans, the Academy of Catalonia.



Bernardine Dias

Carnegie Mellon University
Innovations in Human-Technology Interaction for Blind and Visually Impaired People

Abstract: Since 2004 the TechBridgeWorld research group has been exploring different avenues through which technology can be made more relevant and accessible to underserved communities around the world. In this talk I will share a progression of our work on human-technology interaction for blind and visually impaired people. Our focus in this context has been on technology that enhances education and urban navigation for visually impaired people, and our work in this area has spanned both the developing and developed world.

Among the projects I will discuss are our Braille Writing Tutor project and our NavPal project. The Braille Writing Tutor project explores opportunities for computer technology to enhance the process of learning to write braille with a slate and stylus, the common methodology in the developing world. We have been exploring a variety of avenues to make this technology both accessible and relevant to blind students learning to write braille with a slate and stylus. Outcomes of this project have been field tested in partnership with visually impaired communities in the USA, India, Bangladesh, Tanzania, and Zambia. The NavPal project explores opportunities for technology to enhance the independence and safety of visually impaired adults as they navigate urban environments in the USA. Through this work we have been investigating the different ways in which computing can play a role in assisting blind travelers with pre-planning trips, familiarizing themselves with environments, and providing dynamic guidance through a variety of transit modes. I will also briefly summarize our ongoing and future work on assistive robots for blind travelers.

Biography: M. Bernardine Dias is an Associate Research Professor at the Robotics Institute at Carnegie Mellon University, affiliated with the Field Robotics Center. Her research focuses on culturally appropriate technology that is accessible and relevant to underserved communities. Towards this end she founded and directs the TechBridgeWorld research group to enable technology research in partnership with underserved communities throughout the world. Primarily, TechBridgeWorld focuses on assistive technology for visually impaired people, and educational technology for low-literacy communities. Within this context, Dias’ work has recently focused on effective human-technology interaction methods for people who are blind or visually impaired.

Dias earned her B.A. from Hamilton College, Clinton NY, with a dual concentration in Physics and Computer Science and a minor in Women’s Studies in 1998, followed by a M.S. (2000) and Ph.D. (2004) in Robotics from Carnegie Mellon University. As a result of her dissertation work in market-based coordination of robot teams, Dias also continues to be a recognized researcher in autonomous team coordination. In addition to her research activities, Dias actively encourages women in science and technology, and is a founding member of the women@SCS group at Carnegie Mellon University.



Maria Gini

University of Minnesota
New challenges in multi-robot task allocation

Abstract: In this talk we will focus on new aspects of the ubiquitous task allocation problem for multiple robots. Specifically: (1) allocation of tasks that have temporal constraints, expressed as time windows within which a task must be executed. Temporal constraints create dependencies among tasks, adding complexity to the allocation. We propose distributed methods for both offline and online allocation. (2) allocation of tasks whose cost grows over time. An example is the growth of fires. By modeling the growth of the costs over time as a recurrence relation, we can estimate the effect of the agents' work on that growth and decide where agents should be allocated to minimize the damage. We address the problem both with a static allocation at the start and with a dynamic allocation that changes during execution.
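
As a toy illustration of the cost-growth idea (our own sketch with assumed growth and work-rate parameters, not the speaker's algorithm), a per-task recurrence c' = c + g*c - w*a makes the allocation trade-off concrete:

    # Toy model: each task's cost grows at rate g per step and is reduced
    # by the agents assigned to it (w units of work per agent per step).
    def simulate(costs, growth, assignment, w=1.0, steps=10):
        for _ in range(steps):
            costs = [max(0.0, c + g * c - w * a)
                     for c, g, a in zip(costs, growth, assignment)]
        return costs

    # Two fires, three agents: concentrating agents on the faster-growing
    # fire (task 1) limits the total damage.
    print(simulate(costs=[5.0, 5.0], growth=[0.1, 0.4], assignment=[1, 2]))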

Biography: Maria Gini is a Professor in the Department of Computer Science and Engineering at the University of Minnesota. She specializes in robotics and Artificial Intelligence. Specifically, she studies decision making for autonomous agents in a variety of applications and contexts, including distributed methods for task allocation, robot exploration, and teamwork. She also works on agent-based economic predictions for supply-chain management, for which she won the 2012 INFORMS Design Science Award with her Ph.D. student Wolf Ketter and colleagues. She is a Fellow of AAAI, a Distinguished Professor of the College of Science and Engineering at the University of Minnesota, and the winner of numerous University awards.



Yi Guo

Stevens Institute of Technology
Multi-robot Cooperative Tracking of Dynamic Ocean Plume

Abstract: The recent Deepwater Horizon oil spill has posed great challenges to both the robotics and ocean engineering communities. These challenges motivate us to consider utilizing advanced robotic techniques to monitor and track the propagation of oil plumes. I will present our recent research advances in distributed tracking of a dynamic ocean plume by multi-robot systems. Different from existing work on static level-curve tracking, which relies purely on gradient information, the transport model of the pollution source is explicitly considered using the advection-diffusion model. An estimation and control framework is proposed, and the robots communicate in a nearest-neighbor topology to cooperatively track and patrol along the propagating plume front. Simulation results based on a robotic simulator, the Field Robotics Lab Vehicle Software, will be shown, which uses Lightweight Communication and Marshalling for easy transition to field testing.
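
For reference, the advection-diffusion transport model has the standard form (a generic statement; the talk's exact formulation may differ):

    \partial c / \partial t + v \cdot \nabla c = D \nabla^2 c + S,

where c(x, t) is the pollutant concentration, v the ocean current (advection) field, D the diffusion coefficient, and S the source term; the robots then track a moving level set {x : c(x, t) = c_0} of this evolving field rather than a static level curve.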

Biography: Dr. Yi Guo is an Associate Professor in the Department of Electrical and Computer Engineering at Stevens Institute of Technology, which she joined in 2005 as an Assistant Professor. She obtained her Ph.D. degree in Electrical and Information Engineering from the University of Sydney, Australia, in 1999. She was a postdoctoral research fellow at Oak Ridge National Laboratory from 2000 to 2002, and a visiting Assistant Professor at the University of Central Florida from 2002 to 2005. Her research interests are mainly in autonomous mobile robotics, nonlinear systems and control, and control of multi-scale complex systems. Dr. Guo directs the Robotics and Automation Laboratory at Stevens, and her research has been supported by the US National Science Foundation, the US Army, and the US DoD. She is a member of the editorial board of the IEEE Robotics and Automation Magazine. Dr. Guo is a Senior Member of the IEEE.



Ayanna Howard

Georgia Institute of Technology
Robot-Assisted Therapy for Children with Cerebral Palsy

Abstract: Robots for therapy applications can increase the quality of life for children who experience disabling circumstances by, for example, becoming therapeutic playmates for children with neurological disorders. There are numerous challenges, though, that must be addressed: determining the roles and responsibilities of clinician, child, and robot; developing interfaces for clinicians to interact with robots that do not require extensive training; and developing methods to allow the robot to learn from its child counterpart. Applying such human-interaction methodologies enables a new era of progress in robot-assisted therapy applications for children with disabilities. In this talk, I will discuss the domain of intelligent robots and supporting assistive technologies for therapy applications. I will present our approaches in which these technologies can address real-life needs, both improving quality of life and tackling rehabilitation and therapy objectives for children with Cerebral Palsy.

Biography: Ayanna Howard is the Motorola Foundation Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. She received her B.S. in Engineering from Brown University, her M.S.E.E. from the University of Southern California, and her Ph.D. in Electrical Engineering from the University of Southern California in 1999. Her research is centered around the concept of humanized intelligence, the process of embedding human cognitive capability into the control path of autonomous systems. This work, which addresses issues of autonomous control as well as aspects of interaction with humans and the surrounding environment, has resulted in over 180 peer-reviewed publications across a number of projects, from scientific rover navigation in glacier environments to assistive robots for the home. To date, her accomplishments have been highlighted through a number of awards and articles, including highlights in USA Today, Upscale, and TIME Magazine, as well as being named an MIT Technology Review top young innovator of 2003, being recognized as NSBE Educator of the Year in 2009, and receiving the Georgia Tech Outstanding Interdisciplinary Activities Award in 2013. In 2013, she also founded Zyrobotics, which is currently licensing technology derived from her research lab and has released its first suite of educational technology products. From 1993 to 2005, Dr. Howard was at NASA's Jet Propulsion Laboratory, California Institute of Technology. She joined Georgia Tech in July 2005 and founded the Human-Automation Systems Lab. She is currently the Associate Director of Research for the Georgia Tech Institute for Robotics and Intelligent Machines; prior to that, she served as Chair of the multidisciplinary Robotics Ph.D. program at Georgia Tech from 2010 to 2013.



Jana Kosecka

George Mason University
Semantic Parsing for Robot Perception

Abstract: Advancements in robotic navigation, mapping, object search and recognition rest to a large extent on robust, efficient and scalable semantic understanding of the surrounding environment. Since the choice of the most relevant semantic information depends on the task, it is desirable to develop approaches that can be adapted to different tasks at hand and that separate aspects related to the surroundings from object entities. I will present an efficient approach for predicting the locations of generic objects in indoor and outdoor environments from videos acquired by a moving vehicle, using a Conditional Random Field framework to infer the semantic categories of ground, structure, furniture and props indoors, or ground, sky, building, vegetation and objects outdoors. The proposed approach naturally lends itself to on-line recursive belief updates and can handle multiple sensing modalities (range and images) with possibly non-overlapping fields of view. Given the obtained coarse semantic understanding of the scene, I will then discuss how to refine the coarse semantic categories by learning a set of efficient binary segmentations for finer categorization of the classes of interest. Evaluation on publicly available benchmark datasets will be presented.
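
In generic Conditional Random Field terms (a standard formulation we assume for illustration, not necessarily the exact model of the talk), labeling amounts to minimizing an energy

    E(y | x) = \sum_i \psi_i(y_i, x) + \sum_{(i,j) \in \mathcal{E}} \psi_{ij}(y_i, y_j, x),

where y_i is the semantic label of segment i (e.g., ground, structure, furniture, props), the unary potentials \psi_i score local appearance and geometry features, and the pairwise potentials \psi_{ij} encourage consistent labels between neighboring segments.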

Biography: Jana Kosecka is an Associate Professor in the Department of Computer Science, George Mason University. She obtained her M.S.E. in Electrical Engineering and Computer Science from Slovak Technical University and her Ph.D. in Computer Science from the University of Pennsylvania. She was a postdoctoral fellow in the EECS Department at the University of California, Berkeley. She is a recipient of the David Marr Prize (with Y. Ma, S. Soatto and S. Sastry) and received the National Science Foundation CAREER Award. Jana is a former Associate Editor of the IEEE Transactions on Robotics and a Member of the Editorial Board of the International Journal of Computer Vision. Her general research interests are in Robotics and Computer Vision. In particular, she is interested in 'seeing' systems engaged in autonomous tasks, acquisition of static and dynamic models of environments by means of visual sensing, and human-computer interaction.



Ming Lin

University of North Carolina at Chapel Hill
Simulation-based Joint Estimation of Body Deformation and Elasticity Parameters for Healthcare Robotics

Abstract: Material properties are of great importance in healthcare robotics and surgical simulation. Elasticity parameters, such as the Young's modulus of human soft tissue, are important for characterizing the tissue deformation of each patient. These parameters can help surgeons perform better pre-op surgical planning and enable medical robots to carry out personalized surgical procedures or image-guided biopsy, as well as proactive health monitoring of at-risk patients.

Previous elasticity parameter estimation methods rely largely on high-resolution elastography medical images and the application of known external forces, or they are primarily limited to recovering one elasticity parameter of one type of tissue at a time. In this talk, I present a biomechanically accurate algorithm to determine patient-specific tissue elasticity parameters using physics-based simulation and image analysis. I will describe a novel method to simultaneously estimate tissue elasticity parameters and deformation fields based on pairs of images, using a finite-element based biomechanical model derived from an initial set of images, local displacements implied by image cues, and an optimization-based framework.
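
Schematically, and with notation we introduce only for illustration, this joint estimation can be read as a constrained inverse problem:

    \min_{\theta, u} || \Phi(u) - d ||^2   subject to   K(\theta) u = f,

where \theta collects the elasticity parameters (e.g., Young's moduli), u is the finite-element deformation field, K(\theta) u = f enforces mechanical equilibrium under loads f, and \Phi(u) - d measures the mismatch between the simulated deformation and the displacements implied by the image cues.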

To accelerate the optimization process, I also introduce a dimension-reduction technique to allow a trade-off between the computational efficiency and desired accuracy. The reduced model is constructed using statistical training with a set of example deformations. We show the computational framework applied to computer animations of elastic bodies and 3D elastography. Our study also suggests that tissue (i.e. prostate) elasticity is correlated with the aggressiveness of prostate cancer, useful for health monitoring.

Biography: Ming C. Lin is currently John R. & Louise S. Parker Distinguished Professor of Computer Science at the University of North Carolina (UNC), Chapel Hill and an Honorary Visiting Professor at Tsinghua University in Beijing, China. She obtained her B.S., M.S., and Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley. She received several honors and awards, including the NSF Young Faculty Career Award in 1995, Honda Research Initiation Award in 1997, UNC/IBM Junior Faculty Development Award in 1999, UNC Hettleman Award for Scholarly Achievements in 2003, Beverly W. Long Distinguished Professorship 2007-2010, Carolina Women’s Center Faculty Scholar in 2008, UNC WOWS Scholar 2009-2011, IEEE VGTC Virtual Reality Technical Achievement Award in 2010, and eight best paper awards at international conferences. She is a Fellow of ACM and IEEE.

Her research interests include physically-based modeling, robotics, virtual environments, sound rendering, haptics, and geometric computing. She has (co-) authored more than 250 refereed publications in these areas and co-edited/authored four books. She has served on over 130 program committees of leading conferences and co-chaired dozens of international conferences and workshops. She is the former Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics, a member of 6 editorial boards, and a guest editor for over a dozen of scientific journals and technical magazines. She also has served on several steering committees and advisory boards of international conferences, as well as government and industrial technical advisory committees.



Robin Murphy

Texas A&M University
Metrics for Unmanned Ground and Aerial Vehicles in Unstructured Environments

Abstract: Unmanned ground and aerial vehicles have been successfully used in over forty disasters worldwide. Disasters, especially damaged buildings such as at the 9/11 World Trade Center and the Fukushima Daiichi nuclear accident, present environmental impediments to navigation and general mobility. However, because these environments are unstructured, they are difficult to formally characterize, and thus it is difficult to compare performance between platforms or to simulate meaningful test configurations. This talk will present a set of environmental metrics for the scale and traversability of the region of interest. Two novel dimensionless numbers that are independent of the size of the vehicle or the interior will be described. The relative scale of the deconstruction, conceptually similar to metrics for fluid flow, captures the intrinsic maneuverability of the vehicle for a region. Tortuosity, from fractal theory and behavioral science, rates the frequency and location of obstacles in terms of their impact on the navigational path of a platform. Examples of these metrics will be drawn from disasters and testbeds at Disaster City®.
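
As one common definition (our illustrative assumption; the talk's formal metric may differ in detail), the tortuosity of a traversed path is the dimensionless ratio

    \tau = L_{path} / L_{chord},

the actual path length divided by the straight-line distance between its endpoints, so \tau = 1 for an unobstructed straight run and grows as obstacles force detours.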

Biography: Robin Roberson Murphy (IEEE Fellow) is the Raytheon Professor of Computer Science and Engineering at Texas A&M, and Director of the Center for Robot-Assisted Search and Rescue and the Center for Emergency Informatics. She received a B.M.E. in mechanical engineering and an M.S. and a Ph.D. in computer science in 1980, 1989, and 1992, respectively, from the Georgia Institute of Technology. She has over 150 publications on artificial intelligence, human-robot interaction, and robotics, including the books Introduction to AI Robotics and Disaster Robotics. Her insertion of tactical ground, air, and marine robots at 16 disasters includes the 9/11 World Trade Center disaster, Hurricane Katrina, Fukushima, and the Washington mudslides. Dr. Murphy has served on the IEEE Robotics and Automation Society Administrative and Executive Committees and co-founded the Technical Committee on Safety, Security, and Rescue Robotics and its annual conference. She serves on several government and professional boards, most recently the Defense Science Board.



Radhika Nagpal

Harvard University
Taming the Swarm

Abstract: Biological systems, from cells to social insects, get tremendous mileage from the cooperation of vast numbers of cheap, unreliable, and limited individuals. What would it take to create our own artificial collectives of the scale and complexity that nature achieves? In this talk, I will discuss one of our recent and ongoing endeavors - the Kilobot project - a 1024 ("kilo") robot swarm testbed for studying collective intelligence.

Biography: Radhika Nagpal is the Kavli Professor of Computer Science at Harvard University and a core faculty member of the Wyss Institute for Biologically Inspired Engineering. She leads the Self-organizing Systems Research group and her research interests span computer science, robotics, and biology.



Allison Okamura

Stanford University
Haptics: Engineering Touch

Abstract: The sense of touch is essential for humans to control their bodies and interact with the surrounding world. Yet there are many scenarios in which the sense of touch is typically lost: when a surgeon teleoperates a robot to perform minimally invasive surgery, when an amputee uses a prosthetic arm, and when a student performs virtual laboratory exercises in an online class. Haptic technology combines robotics, design, psychology, and neuroscience to artificially generate touch sensations in humans. This talk will describe how haptic technology works and how it is being applied to improve human health, education, and quality of life.

Biography: Allison Okamura is an associate professor of mechanical engineering (and of computer science, by courtesy) at Stanford University. She develops haptic (sense of touch) technology for use in novel applications such as robot-assisted surgery, prosthetics, rehabilitation, teleoperation, and education. She is committed to sharing her passion for research and discovery, using robotics and haptics in innovative outreach programs to groups underrepresented in engineering. Her awards include the National Science Foundation CAREER Award, the Robotics and Automation Society Early Academic Career Award, and the Technical Committee on Haptics Early Career Award. She is an IEEE Fellow.



Lynne Parker

University of Tennessee
Towards Peer-to-Peer Human-Robot Teaming

Abstract: This talk will describe our research in the creation of peer-to-peer teams of humans and robots. In these teams, humans and robots operate side-by-side in the same physical space, with each human and robot performing physical actions based upon their own skills and capabilities. The intent is to generate an interaction style that is not based on direct commands and controls from humans to robots, but rather on the idea that robots can implicitly infer the intent of human teammates through passive observation, and then take appropriate actions in the current context. In this interaction, humans perform tasks in a very natural manner, as they would when working with a human teammate, thus bypassing the difficulty of cognitive overload that occurs when humans are required to explicitly supervise the actions of several robot team members. This research focuses on two key challenges: (1) how robots can determine humans’ current goals, intents, and activities via sensor observation only, and (2) how robots can respond appropriately to help humans with the ongoing task, consistent with the inferred human intent. This talk will describe our progress to date in achieving peer-to-peer teaming.
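
A minimal sketch of such intent inference (a generic Bayesian formulation we assume for illustration, not the speakers' specific model): given the observation history o_{1:t}, the robot maintains a posterior over candidate goals g,

    P(g | o_{1:t}) \propto P(o_{1:t} | g) P(g),

and takes the assisting action appropriate to the most probable goal once its confidence exceeds a threshold.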

Biography: In January 2015, Dr. Lynne Parker began serving as the Division Director for Information and Intelligent Systems in the Computer and Information Science and Engineering Directorate at the National Science Foundation. She joins NSF from the University of Tennessee, Knoxville (UTK), where she is a Professor in the Department of Electrical Engineering and Computer Science, and a former Associate Head. While at NSF, she continues her research part time at UTK, in the areas of distributed robotics, human-robot interaction, sensor networks, and machine learning. She also previously worked for several years as a Distinguished Research and Development Staff Member at Oak Ridge National Laboratory. She is serving as the General Chair for the 2015 IEEE International Conference on Robotics and Automation, and has served as the Editor-in-Chief of the IEEE Robotics and Automation Society Conference Editorial Board, as an Administrative Committee Member of the IEEE Robotics and Automation Society, and as Editor of IEEE Transactions on Robotics. She is committed to mentoring female computer scientists and engineers, and was the founding advisor of the "Systers: Women in EECS" student group at UTK. Prior to her NSF appointment, she regularly taught graduate and undergraduate classes in robotics, artificial intelligence, machine learning, and advanced algorithms. Dr. Parker received her Ph.D. degree from the Massachusetts Institute of Technology. She was awarded the PECASE (U.S. Presidential Early Career Award for Scientists and Engineers), and is a Fellow of IEEE.



Katia Sycara

Carnegie Mellon University
Human Supervisory Control of Robotic Swarms

Abstract: Multiple unmanned vehicles (UVs) can be coordinated effectively by centralized mechanisms only in static, known environments. Where environments are unknown and unpredictable, preplanned coordination breaks down, and UVs must coordinate autonomously and locally if they are to behave in concert. Swarm algorithms engender emergent global behaviors through coordinating interactions among UVs following simple control laws. This talk presents experiments in Human-Swarm Interaction comparing methods for influencing robotic swarms and propagating human influence, as well as the best timing of human input.
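
As a concrete example of the kind of simple local control law such swarms build on (an illustrative Vicsek-style sketch of our own; the control laws and influence methods studied in the talk may differ), each robot averages its neighbors' headings, and a human operator can bias selected robots:

    import math

    def step_headings(headings, neighbors, operator=None, gain=0.5):
        """One heading-consensus update. headings[i] is robot i's heading
        (radians); neighbors[i] lists robots within i's sensing range;
        operator maps selected robot indices to commanded headings."""
        new = []
        for i, h in enumerate(headings):
            hs = [h] + [headings[j] for j in neighbors[i]]
            # circular mean of own and neighboring headings
            avg = math.atan2(sum(math.sin(x) for x in hs),
                             sum(math.cos(x) for x in hs))
            if operator and i in operator:
                # human influence: blend the operator's heading in
                avg += gain * (operator[i] - avg)
            new.append(avg)
        return new

    # The operator steers robot 0; its influence propagates to the rest
    # through the local neighbor interactions on subsequent steps.
    print(step_headings([0.0, 0.5, -0.3], [[1], [0, 2], [1]], {0: 1.57}))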

Biography: Katia Sycara is a Research Professor in the Robotics Institute at Carnegie Mellon University. She holds a B.S. in Applied Mathematics from Brown University, an M.S. in Electrical Engineering from the University of Wisconsin, and a Ph.D. in Computer Science from the Georgia Institute of Technology. She holds an Honorary Doctorate from the University of the Aegean. Prof. Sycara is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), a Fellow of the American Association for Artificial Intelligence (AAAI), and the recipient of the 2002 ACM/SIGART Agents Research Award. She is a past member of the Scientific Advisory Board of France Telecom and of the Scientific Advisory Board of the Greek National Center of Scientific Research "Demokritos" Information Technology Division. She has given numerous invited talks, and has authored or co-authored more than 550 technical papers dealing with Multiagent Systems. She has led multimillion-dollar research efforts funded by DARPA, NASA, AFOSR, ONR, ARO, AFRL, NSF and industry. She co-founded the journal "Autonomous Agents and Multiagent Systems", the premier journal in this area of computer science, and served as its initial Editor-in-Chief. She is on the editorial boards of 5 additional journals.



Dawn Tilbury

University of Michigan
Improving the performance of teleoperated ground robots with communication delays

Abstract: Over the past few years, we have been working to characterize and improve the performance of small, teleoperated ground robots with communication delays between the operator and the remote environment. Through user studies with a simulated robot, we have developed a user model to characterize the behavior of the human operator. This user model can be used in simulation studies to investigate new situations. We have demonstrated how certain stochastic delay distributions can be approximated by a constant delay (in terms of user performance). Recognizing the limitations of pure teleoperation, we have investigated semi-autonomous behaviors such as obstacle avoidance and rollover prevention that can be included on the remote robot to augment the overall performance while maintaining user control.

Biography: Dawn M. Tilbury is currently the Associate Dean for Research in the College of Engineering, University of Michigan. She received the B.S. degree in Electrical Engineering from the University of Minnesota in 1989, and the M.S. and Ph.D. degrees in Electrical Engineering and Computer Sciences from the University of California, Berkeley, in 1992 and 1994, respectively. In 1995, she joined the Mechanical Engineering Department at the University of Michigan, Ann Arbor, where she is currently Professor, with a joint appointment as Professor of EECS. Her research interests include distributed control of mechanical systems with network communication, logic control of manufacturing systems, reliability of ground robotics, and dynamic systems modeling of physiological systems. She was elected Fellow of the IEEE in 2008 and Fellow of the ASME in 2012, and is a Life Member of SWE.



Carme Torras

Institut de Robòtica i Informàtica Industrial (CSIC-UPC)
Teaching robots to help humans with clothing

Abstract: Textile objects pervade human environments, and their versatile manipulation by robots would open up a whole range of possibilities, from increasing the autonomy of elderly and disabled people, to housekeeping and hospital logistics, to novel automation in the Internet clothing business. At the Perception and Manipulation Lab of IRI (CSIC-UPC), we are addressing several of the learning challenges arising in this context, such as visual garment recognition and pose estimation using appearance and depth data, learning human-robot collaboration from demonstrations, safe physical human-robot interaction, reinforcement tuning of skills, and symbolic learning to plan and act. The most representative of these works, together with their application to grasping garments in a task-suitable way, cleaning surfaces with a cloth, and helping people to dress, will be showcased throughout the presentation.

Biography: Carme Torras (http://www.iri.upc.edu/people/torras) is Research Professor at the Spanish Scientific Research Council (CSIC). She received M.Sc. degrees in Mathematics and Computer Science from the Universitat de Barcelona and the University of Massachusetts, Amherst, respectively, and a Ph.D. degree in Computer Science from the Technical University of Catalonia (UPC). Prof. Torras has published five books and about two hundred papers in the areas of robot kinematics, computer vision, geometric reasoning, machine learning and manipulation planning. She has been local project leader of several European projects in the frontier between AI and Robotics, among which the FP6 IP project “Perception, Action and COgnition through Learning of Object-Action Complexes” (PACO-PLUS), and the FP7 STREP projects “GARdeNIng with a Cognitive System” (GARNICS) and “Intelligent observation and execution of Actions and manipulations” (IntellAct). She was awarded the Narcís Monturiol Medal of the Generalitat de Catalunya in 2000, and she became ECCAI Fellow in 2007, member of Academia Europaea in 2010, and member of the Royal Academy of Sciences and Arts of Barcelona in 2013. Prof. Torras is Editor of the IEEE Transactions on Robotics.



Manuela Veloso

Carnegie Mellon University
Symbiotic Autonomous Mobile Service Robots

Abstract: We conduct research on autonomous mobile robots that coexist and interact with humans while performing tasks. I will present symbiotic robot autonomy, in which robots are robustly autonomous in their localization and navigation, and handle their limitations by proactively asking for help from humans, accessing the web for missing knowledge, and coordinating with other robots. Such symbiotic autonomy has enabled our CoBot robots to move in our multi-floor buildings performing a variety of service tasks, including escorting visitors and transporting packages between locations. The work is joint with my research group, in particular with Joydeep Biswas, Stephanie Rosenthal, Brian Coltin, and Vittorio Perera.

Biography: Manuela M. Veloso is the Herbert A. Simon University Professor in the Computer Science Department at Carnegie Mellon University, with courtesy appointments in the Robotics Institute, Machine Learning, Electrical and Computer Engineering, and Mechanical Engineering Departments. Her research is in Artificial Intelligence and Robotics. She founded and directs the CORAL research laboratory, for the study of autonomous agents that Collaborate, Observe, Reason, Act, and Learn, www.cs.cmu.edu/~coral. Professor Veloso is an IEEE Fellow, AAAS Fellow, and AAAI Fellow, and the past President of AAAI and RoboCup. Professor Veloso and her students have worked with a variety of autonomous robots, including mobile service robots and soccer robots. The CoBot service robots have autonomously navigated for more than 1,000 km in multi-floor office buildings. See www.cs.cmu.edu/~mmv for further information, including publications.



Jing Xiao

University of North Carolina at Charlotte
Autonomous Robotic Manipulation

Abstract: Autonomous manipulation remains one of the most challenging tasks for robots, especially in cluttered environments with uncertainty. In this talk, I’ll introduce our related research work in robotic manipulation, and in particular, manipulation using a flexible arm such as an elephant trunk robot, robotic assembly, and appearance-based 3-D object recognition and pose estimation in cluttered environments.

Biography: Jing Xiao received her Ph.D. degree in Computer, Information, and Control Engineering from the University of Michigan, Ann Arbor, Michigan, USA. She is a Professor of Computer Science in the College of Computing and Informatics (CCI), University of North Carolina at Charlotte, USA. She is also the Site Director of the U.S. National Science Foundation (NSF) Industry/University Cooperative Research Center (I/UCRC) on Robots and Sensors for the Human Well-being. She served as the Associate Dean for Research and Graduate Programs of CCI for five years (1/2008-12/2012), and as the Program Director of the Robotics and Human Augmentation Program at the U.S. National Science Foundation for two and a half years (8/1998-12/2000). Jing Xiao’s research spans robotics, haptics, and intelligent systems. She has recently co-authored a monograph, Haptic Rendering for Simulation of Fine Manipulation (Springer), has over 130 publications in major robotics conferences, journals, and books, and holds one patent. Jing Xiao is an IEEE Fellow. She currently serves as the Vice President for Member Activities of the IEEE Robotics and Automation Society.



Yuru Zhang

Beihang University
iDental: A Simulator for Dental Skill Training

Abstract: Virtual reality based surgical training is an emerging area of research interest. We have developed iDental, a simulator with haptic-visual-audio feedback for dental skill training. The simulator aims to train dental students in the early stage of learning to acquire basic operational skills. Based on our unique haptic rendering methods, iDental achieves some important features, including 6 degree-of-freedom haptic feedback, deformable object simulation, bi-manual coordination, and the simulation of fine manipulation in a narrow oral cavity. In this talk, I will briefly introduce the functions and features of iDental, discuss challenging problems in haptic rendering, present preliminary user evaluation results, and highlight some future research and development topics.

Biography: Yuru Zhang is a professor in the School of Mechanical Engineering and Automation at Beihang University in Beijing, where she served as the associate dean of the school and the associate director of the Robotics Institute. Currently she is the associate director of the State Key Laboratory of Virtual Reality Technology and Systems. Her primary research interest is haptic human-machine interaction, including haptic user interfaces, teletraining and neuro-haptics. She has published over 150 technical papers and holds 20 issued patents. She co-authored two books, "Robotic Dexterous Hands", funded by the National Science Foundation of China, and "Haptic Rendering for Simulation of Fine Manipulation", published by Springer. Professor Zhang is a senior member of IEEE and a member of ASME. She is on the Advisory Board for Teaching of the Ministry of Education, China. She was named an Outstanding Professional for the 21st Century by the Ministry of Education and received the Excellent Investigator Award from the former Ministry of Aeronautics Industry in China.



Nancy Amato

Texas A&M University

Biography: Nancy M. Amato is Unocal Professor in the Department of Computer Science and Engineering at Texas A&M University, where she co-directs the Parasol Lab. She received undergraduate degrees in Mathematical Sciences and Economics from Stanford University, and M.S. and Ph.D. degrees in Computer Science from UC Berkeley and the University of Illinois at Urbana-Champaign. Her main research areas are motion planning and robotics, computational biology and geometry, and parallel and distributed computing. She was Editor-in-Chief of the IEEE/RSJ IROS Conference Paper Review Board from 2011-2013, and has served on the editorial boards of the IEEE Transactions on Robotics and Automation and the IEEE Transactions on Parallel and Distributed Systems. She is co-Chair of the Computing Research Association's Committee on the Status of Women in Computing Research (CRA-W) and was co-Chair of the NCWIT Academic Alliance (2009-2011). She was an AT&T Bell Laboratories PhD Scholar, received an NSF CAREER Award, is a Distinguished Speaker for the ACM Distinguished Speakers Program, and was a Distinguished Lecturer for the IEEE Robotics and Automation Society. She received the 2014 CRA A. Nico Habermann Award, the inaugural 2014 NCWIT Harrold and Notkin Research and Graduate Mentoring Award, the 2013 IEEE Hewlett-Packard/Harriet B. Rigas Award, a University-level teaching award, and the Betty M. Unterberger Award for Outstanding Service to Honors Education at Texas A&M. She is an AAAS Fellow and an IEEE Fellow.



Marco Morales

Instituto Tecnológico Autónomo de México (ITAM)

Biography: Marco Morales is an Assistant Professor in the Department of Digital Systems at the Instituto Tecnológico Autónomo de México (ITAM), where he leads the Robotics Laboratory, and he is currently a Visiting Professor in the Department of Computer Science and Engineering at Texas A&M University. His main research interests are in motion planning and control in robotics. Morales received a Ph.D. in Computer Science from Texas A&M University, and an M.S. in Electrical Engineering and a B.S. in Computer Engineering from the Universidad Nacional Autónoma de México (UNAM). He received a Fulbright/García Robles scholarship to pursue his Ph.D., a CONACYT scholarship to pursue his Masters, and was a SuperComputing Scholar at UNAM. He has been a member of the National System of Researchers in Mexico. He has served as an Associate Editor for IEEE ICRA since 2011 and for IEEE/RSJ IROS since 2008. Morales was one of the chairs of the Eighth International Workshop on the Algorithmic Foundations of Robotics, held in Guanajuato, México, in 2008. He is a founding member of the Mexican Federation of Robotics, which promotes robotics through events such as the Mexican Tournament of Robotics (of which he was chair in 2011), the Mexican School of Robotics, and RoboCup, which was brought to Mexico City in 2012.



Sam Rodriguez

Texas A&M University

Biography: Sam Rodriguez is a Postdoctoral Research Associate in the Parasol Lab, Dept. of Computer Science and Engineering at Texas A&M University, working with Dr. Nancy Amato. His work focuses on improving group behaviors in multi-agent systems by incorporating motion planning techniques into an agent-based approach. In his system, agents can have a range of behaviors, capabilities, levels of communication, knowledge of the environment, and coordination. He investigates how agents can work cooperatively to perform tasks, plan paths in dynamic environments, and influence other groups of agents in an environment. The specific applications he has worked on are pursuit-evasion scenarios, evacuation behaviors, and shepherding techniques.



Dylan Shell

Texas A&M University

Biography: Dylan Shell is an assistant professor of computer science and engineering at Texas A&M University in College Station, Texas. He received his BSc degree in computational & applied mathematics and computer science from the University of the Witwatersrand, South Africa, and his M.S. and Ph.D. in Computer Science from the University of Southern California. His research aims to synthesize and analyze complex, intelligent behavior in distributed systems that exploit their physical embedding to interact with the physical world.

He has published papers on multi-robot task allocation, robotics for emergency scenarios, biologically inspired multiple robot systems, multi-robot routing, estimation of group-level swarm properties, statistical mechanics for robot swarms, minimalist manipulation, wireless communication models for robot systems, interpolation for adaptive robotic sampling, rigid-body simulation and contact models, human-robot interaction, and robotic theatre.



Dezhen Song

Texas A&M University

Biography: Dezhen Song is an Associate Professor at Texas A&M University, College Station, TX, USA. Song received his Ph.D. in 2004 from the University of California, Berkeley, and his B.S. and M.S. from Zhejiang University in 1995 and 1998, respectively. Song's primary research areas are networked robotics, distributed sensing, computer vision, surveillance, and stochastic modeling. Dr. Song received the Kayamori Best Paper Award at the 2005 IEEE International Conference on Robotics and Automation (with J. Yi and S. Ding). He received the NSF Faculty Early Career Development (CAREER) Award in 2007. From 2008 to 2012, Song was an associate editor of the IEEE Transactions on Robotics. Song is currently an Associate Editor of the IEEE Transactions on Automation Science and Engineering.



Shawna Thomas

Texas A&M University

Biography: Shawna Thomas is a Postdoctoral Research Associate in the Department of Computer Science and Engineering at Texas A&M University, working with Dr. Nancy M. Amato in the Parasol Lab. She received her Ph.D. in Computer Science in 2010 from Texas A&M University. Her research focuses on randomized motion planning algorithms and their application to problems in computational biology. She is also interested in the supporting areas of scientific visualization, physically-based modeling, and parallel computing. She is currently a co-PI on an NSF grant (2014-2017). Her graduate work was supported in part by an NSF Graduate Research Fellowship (2002-2005), a P.E.O. Scholarship (2005-2006), a DOE GAANN Fellowship (2006-2007), and an IBM Ph.D. Fellowship (2007-2009). More information about Shawna Thomas' research and publications can be found at http://parasol.tamu.edu/~sthomas.



For More Information

Contact the symposium organizers with any questions.