This study enables mobile grasping for commercial robots, using self-supervised learning to adjust velocity and grasping motions according to object shape. The task is simplified into three action primitives, which reduces data sparsity. Three fully convolutional networks (FCNs) predict grasp actions and correct motion errors. A two-stage learning approach improves accuracy, and randomized simulations enhance generalization across various objects and environments.
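A minimal, hypothetical sketch of the per-pixel primitive prediction idea (not the project's code): a small fully convolutional network scores three action primitives at every pixel of a heightmap, and the best primitive-pixel pair is selected for execution. The layer sizes and names are illustrative assumptions.

```python
# Illustrative sketch only: an FCN scoring three hypothetical action
# primitives per pixel of a heightmap observation.
import torch
import torch.nn as nn

class PrimitiveFCN(nn.Module):
    def __init__(self, num_primitives: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_primitives, kernel_size=1),  # per-pixel primitive scores
        )

    def forward(self, heightmap: torch.Tensor) -> torch.Tensor:
        return self.net(heightmap)  # shape: (B, num_primitives, H, W)

heightmap = torch.rand(1, 1, 64, 64)            # dummy heightmap input
scores = PrimitiveFCN()(heightmap)
flat = scores.flatten(1).argmax(dim=1).item()   # best (primitive, pixel) pair
primitive, pixel = divmod(flat, 64 * 64)
print(primitive, divmod(pixel, 64))             # chosen primitive and pixel (row, col)
```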
This study proposes a pick-and-toss (PT) method as an efficient alternative to pick-and-place (PP), extending the robot's reachable range. While PT enhances object arrangement efficiency, placement conditions affect toss accuracy. To optimize this, we suggest selecting PP or PT based on the task difficulty derived from the environment. Our method combines self-supervised learning for the toss motion with a brute-force search for task determination. Simulations and real-world tests on arranging rectangular objects validate this approach.
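As a rough sketch of the selection idea (the candidate parameter grid, threshold, and placeholder success predictor below are all hypothetical, not the paper's implementation): PT is chosen only when the best toss candidate found by brute-force search is predicted to succeed reliably; otherwise the robot falls back to PP.

```python
# Sketch under stated assumptions; `predict_toss_success` is a placeholder
# standing in for a learned model.
import itertools
import random

def predict_toss_success(release_angle: float, release_speed: float) -> float:
    random.seed(hash((release_angle, release_speed)) % 2**32)  # deterministic dummy score
    return random.random()

def select_action(success_threshold: float = 0.8):
    angles = [20, 30, 40, 50]     # candidate release angles [deg] (hypothetical grid)
    speeds = [1.0, 1.5, 2.0]      # candidate release speeds [m/s]
    best, best_score = None, -1.0
    for angle, speed in itertools.product(angles, speeds):   # brute-force search
        score = predict_toss_success(angle, speed)
        if score > best_score:
            best, best_score = (angle, speed), score
    if best_score >= success_threshold:
        return ("pick_and_toss", best)
    return ("pick_and_place", None)   # task judged too hard for tossing

print(select_action())
```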
We tackle multi-objective optimization for planning uncertainty-aware sequences and motions when assembling complex mechanical products with many contact points. The planner takes CAD models as input, simulates the assembly, and employs an NSGA-III-inspired algorithm to optimize the assembly order, part placements, transitions, grasps, and trajectories for robot execution. The approach integrates ConCERRT to handle uncertainties and achieves a high success rate in assembly planning for a chainsaw while reducing simulation uncertainty.
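The following illustrates only the multi-objective selection step (Pareto-dominance filtering, a core ingredient of NSGA-style algorithms); the objective names and values are hypothetical, and this is not the planner itself.

```python
# Pareto-dominance filtering of candidate assembly plans (illustrative data).
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o["objectives"], c["objectives"])
                       for o in candidates if o is not c)]

plans = [  # (assembly time [s], pose uncertainty, regrasp count) - all lower is better
    {"name": "plan_A", "objectives": (42.0, 0.12, 3)},
    {"name": "plan_B", "objectives": (55.0, 0.05, 2)},
    {"name": "plan_C", "objectives": (60.0, 0.20, 4)},
]
print([p["name"] for p in pareto_front(plans)])  # plan_C is dominated by plan_A
```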
The perceptive soft jig extended in this study is equipped with a hydraulic drive system and fixes parts by creating a jammed state while maintaining optical transparency, which allows the jig's membrane to be observed by camera sensors embedded in the jig. Furthermore, we propose a sensing method that estimates the pose of the fixed object from the behavior of markers on the jig's inner surface.
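As a loose sketch of how a pose could be read out from marker motion (an assumption for illustration, not the proposed sensing method): given matched 2D marker positions before and after fixing, a rigid transform estimated with the Kabsch algorithm approximates the in-plane component of the object pose.

```python
# Kabsch-style rigid 2D fit from marker displacements (synthetic data).
import numpy as np

def estimate_rigid_2d(src: np.ndarray, dst: np.ndarray):
    """Return R, t such that dst ~ src @ R.T + t (src, dst: Nx2 marker positions)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t

markers_before = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
markers_after = markers_before @ R_true.T + np.array([0.05, -0.02])
R, t = estimate_rigid_2d(markers_before, markers_after)
print(round(float(np.rad2deg(np.arctan2(R[1, 0], R[0, 0]))), 2), np.round(t, 3))
```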
We introduce a new perspective on similarity matching between novel objects and known objects in a database, based on category association, to achieve pick-and-place tasks with high accuracy and stability. We calculate category-name similarity using word embeddings to quantify the semantic similarity between the categories of the known objects and the real-world target objects. Using the similar known model identified by a similarity prediction function, we preplan a series of robust grasps and imitate them to plan new grasps for the real-world target objects.
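A minimal sketch of the category-name matching step (the embedding vectors below are dummy placeholders, not a real word-embedding model): cosine similarity picks the most semantically similar known category for a novel object.

```python
# Cosine similarity over (dummy) category-name embeddings.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in vectors; in practice these would come from a word-embedding model.
embeddings = {
    "mug":    np.array([0.8, 0.1, 0.3]),
    "cup":    np.array([0.7, 0.2, 0.35]),
    "hammer": np.array([0.1, 0.9, 0.2]),
}

def most_similar_known(target: str, known: list) -> str:
    return max(known, key=lambda k: cosine(embeddings[target], embeddings[k]))

print(most_similar_known("mug", ["cup", "hammer"]))  # -> "cup"
```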
To let robots take over the human role of robustly detecting and agilely manipulating waste items, we propose three methods: a graspless manipulation method for agile waste sorting, an automatic object-image dataset collection method, and a method to mitigate differences in the appearance of target objects between the dataset-collection and waste-sorting scenes. Such differences can degrade the performance of a trained waste detector; we address differences in illumination and background using several computer vision techniques.
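The snippet below sketches only the general appearance-gap idea (synthetic data, not the proposed pipeline): an object image is pasted onto a new background with a random illumination change, one simple way to reduce the gap between collection and sorting scenes.

```python
# Synthetic example: brightness jitter + background replacement via object mask.
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray, mask: np.ndarray, background: np.ndarray) -> np.ndarray:
    """image, background: HxWx3 uint8 arrays; mask: HxW bool (True = object pixels)."""
    gain = rng.uniform(0.6, 1.4)                                   # random illumination change
    bright = np.clip(image.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    out = background.copy()
    out[mask] = bright[mask]                                       # paste object onto new background
    return out

image = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True                                          # dummy object region
background = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
print(augment(image, mask, background).shape)
```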
Aiming to generate easy-to-handle assembly sequences for robotic assembly, we propose a multi-objective genetic algorithm that balances several objectives to generate constraint-satisfying and preferable assembly sequences. Furthermore, we developed a method for extracting part relation matrices from 3D computer-aided design (CAD) models.
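As an illustration of what a part relation matrix can encode (the parts and precedence values here are made up, not extracted from CAD): a precedence matrix lets a candidate assembly sequence be checked for constraint satisfaction before it is scored by the genetic algorithm.

```python
# Hypothetical precedence matrix and a feasibility check for a candidate sequence.
import numpy as np

parts = ["base", "shaft", "gear", "cover"]
# precedence[i, j] == 1 means part i must be assembled before part j (dummy values).
precedence = np.array([
    [0, 1, 1, 1],   # base before everything else
    [0, 0, 1, 0],   # shaft before gear
    [0, 0, 0, 1],   # gear before cover
    [0, 0, 0, 0],
])

def is_feasible(sequence):
    placed = set()
    for p in sequence:
        required = {q for q in range(len(parts)) if precedence[q, p]}
        if not required <= placed:   # a prerequisite part is still missing
            return False
        placed.add(p)
    return True

print(is_feasible([0, 1, 2, 3]))  # True
print(is_feasible([0, 2, 1, 3]))  # False: gear inserted before shaft
```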
To design a flexible assembly system that can handle objects of various shapes, we propose a jamming-gripper-inspired soft jig that deforms according to the shape of the assembly parts. The soft jig has a flexible silicone membrane with high friction and high elongation and contraction rates to keep parts firmly fixed. The inside of the membrane is filled with glass beads to achieve the jamming transition.
Training deep-learning-based vision systems requires time-consuming and laborious manual annotation. To automate annotation, we associate one visual marker with one object and capture both in the same image. However, if images showing the marker are used for training, the neural network tends to learn the marker itself as a feature of the object. By hiding the marker with a noise mask, we succeeded in reducing this erroneous learning.
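A minimal sketch of the masking step (the bounding box and image are synthetic, and this is not the exact implementation): the detected marker region is overwritten with random noise before the image is used for training.

```python
# Replace a (hypothetical) marker bounding box with random noise.
import numpy as np

rng = np.random.default_rng(0)

def mask_marker(image: np.ndarray, bbox) -> np.ndarray:
    """bbox = (x, y, w, h) of the detected marker in pixel coordinates."""
    x, y, w, h = bbox
    out = image.copy()
    out[y:y + h, x:x + w] = rng.integers(0, 256, (h, w, image.shape[2]), dtype=image.dtype)
    return out

image = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)
masked = mask_marker(image, (100, 120, 60, 60))   # dummy marker location
print(masked.shape)
```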
Assistant Professor (2023-present)
Reconfigurable Manipulation Robots
Visiting Researcher (2023-2024) - Institute of Robotics and Mechatronics, German Aerospace Center (DLR)
CAD-Informed Uncertainty-Aware Robotic Assembly
Specially-Appointed Assistant Professor (2021-2023)
Novel Object Manipulation
Specially-Appointed Assistant Professor (2022-2023) - Robot Learning Laboratory at NAIST
Soft Robotic Assembly
Specially-Appointed Assistant Professor (2021-2022) - Human Robotics Laboratory at NAIST
Quickly Deployable Waste Sorter
Ph.D. Student (2018-2021)
Agile Reconfigurable Robotic Assembly System
Research Intern (2018-2020) - Microsoft Development Applied Robotics Research Team
Interactive-Learning-from-Observation (contributed to LabanotationSuite)
Super Creator (certified by METI & IPA) - MITOU program 2018
Quickly Deployable Image Recognition AI
Master's Course Student (2016-2018)
Tactile-Based Pouring Motion Inspired by Human Skill
Advanced Course Student (2014-2016)
Modeling Forearm Pro-supination for Baseball Pitching Analysis
Technical College Student (2009-2014)
Artificial-Muscle-Driven Robotic Arm with Biarticular Muscles