Training deep-learning-based vision systems requires time-consuming and laborious manual annotation. To automate annotation, we associate one visual marker with each object and capture both in the same image. However, if images showing the marker are used for training, the neural network tends to learn the marker itself as a feature of the object. By hiding the marker with a noise mask, we succeeded in reducing this erroneous learning.
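A minimal sketch of the masking idea, assuming the marker's pixel region in each training image is already known (e.g., from a fiducial detector); the function name and box format are illustrative, not the actual implementation:

```python
import numpy as np

def mask_marker_with_noise(image, marker_box, rng=None):
    """Hide a visual marker by overwriting its region with random noise.

    image:      HxWx3 uint8 training image that contains the marker.
    marker_box: (x, y, w, h) pixel region of the detected marker (assumed known).
    """
    rng = rng or np.random.default_rng()
    x, y, w, h = marker_box
    masked = image.copy()
    # Fill the marker region with uniform noise so no marker texture
    # survives for the network to learn as an object feature.
    masked[y:y + h, x:x + w] = rng.integers(0, 256, size=(h, w, 3), dtype=np.uint8)
    return masked
```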
To design a flexible assembly system that can handle objects of various shapes, we propose a jamming-gripper-inspired soft jig that deforms according to the shape of the assembly parts. The soft jig has a flexible silicone membrane with high friction and high elongation and contraction rates to keep parts firmly fixed. The inside of the membrane is filled with glass beads to achieve the jamming transition.
Aiming to generate easy-to-handle assembly sequences for robotic assembly, we propose a multiobjective genetic algorithm that balances several objectives to generate constraint-satisfying and preferable assembly sequences. Furthermore, we developed a method for extracting part relation matrices from 3D computer-aided design (CAD) models.
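The sketch below is a toy, single-file illustration of the overall idea only: a permutation-encoded genetic algorithm that rejects precedence-violating sequences and scores the rest on two objectives. The parts, precedence relations, objectives, and the weighted-sum fitness are all hypothetical stand-ins; the actual method uses multiobjective (Pareto-based) selection and relation matrices extracted from CAD models.

```python
import random

# Hypothetical precedence constraints: part -> parts that must already be assembled.
PRECEDENCE = {"C": {"A"}, "D": {"B", "C"}}
PARTS = ["A", "B", "C", "D"]

def feasible(seq):
    placed = set()
    for p in seq:
        if not PRECEDENCE.get(p, set()) <= placed:
            return False
        placed.add(p)
    return True

def objectives(seq):
    # Two illustrative objectives: assembly-direction changes and tool changes.
    directions = {"A": "z", "B": "z", "C": "x", "D": "z"}
    tools = {"A": 1, "B": 1, "C": 2, "D": 1}
    dir_changes = sum(directions[a] != directions[b] for a, b in zip(seq, seq[1:]))
    tool_changes = sum(tools[a] != tools[b] for a, b in zip(seq, seq[1:]))
    return dir_changes, tool_changes

def fitness(seq, w=(0.5, 0.5)):
    if not feasible(seq):
        return float("inf")          # discard constraint-violating sequences
    d, t = objectives(seq)
    return w[0] * d + w[1] * t       # weighted sum stands in for Pareto ranking

def evolve(pop_size=30, generations=50):
    pop = [random.sample(PARTS, len(PARTS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            child = random.choice(survivors)[:]
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]   # swap mutation keeps a permutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

print(evolve())
```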
To have robots take over the human role of robustly detecting and agilely manipulating waste items, we propose three methods: a graspless manipulation method for agile waste sorting, an automatic object-image dataset collection method, and a method to mitigate differences in the appearance of target objects between the dataset-collection and waste-sorting scenes. If such differences exist, the performance of a trained waste detector can degrade. We address differences in illumination and background using several computer vision techniques.
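As a rough illustration of the kinds of image-level operations involved, the sketch below jitters illumination and composites the object onto a different background; these particular functions and parameters are assumptions for illustration, not the specific techniques used in the work.

```python
import numpy as np

def randomize_illumination(image, rng=None, gain_range=(0.6, 1.4)):
    """Simulate illumination differences by applying a random brightness gain."""
    rng = rng or np.random.default_rng()
    gain = rng.uniform(*gain_range)
    out = np.clip(image.astype(np.float32) * gain, 0, 255)
    return out.astype(np.uint8)

def replace_background(image, mask, background):
    """Composite the foreground object onto a background resembling the sorting scene.

    mask: HxW boolean/0-1 array marking the object pixels.
    """
    mask3 = mask[..., None].astype(bool)
    return np.where(mask3, image, background)
```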
We introduce a new perspective on similarity matching between novel objects and a known database, based on category association, to achieve pick-and-place tasks with high accuracy and stability. We compute category-name similarity using word embeddings to quantify the semantic similarity between the categories of known models and of the target real-world objects. Using a similar model identified by a similarity prediction function, we preplan a series of robust grasps and imitate them to plan new grasps for the real-world target objects.
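A minimal sketch of the category-name similarity step, assuming embedding vectors for category names are available from some pretrained word-embedding model; the full similarity prediction function in the work may combine further cues, and the toy vectors below are made up:

```python
import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def most_similar_category(target, known_categories, embed):
    """Return the known category whose name embedding is closest to the target's.

    embed: callable mapping a category name to its word-embedding vector.
    """
    scores = {c: cosine_similarity(embed(target), embed(c)) for c in known_categories}
    return max(scores, key=scores.get), scores

# Toy usage with made-up 3-D "embeddings"; real vectors would come from a
# pretrained word-embedding model.
toy = {"mug": np.array([0.9, 0.1, 0.0]),
       "cup": np.array([0.8, 0.2, 0.1]),
       "scissors": np.array([0.0, 0.9, 0.4])}
best, scores = most_similar_category("mug", ["cup", "scissors"], toy.__getitem__)
```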
The perceptive soft jig extended in this study is equipped with a hydraulic drive system and enables part fixing by creating a jammed state while maintaining optical transparency, thereby allowing visual sensing of the jig's membrane from camera sensors embedded in the jig. Furthermore, we propose a sensing method that estimates the pose of the fixed object from the behavior of markers created on the jig's inner surface.
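To make the marker-based estimation concrete, here is a generic sketch of one way such a step could look: recovering a rigid transform from corresponding marker positions before and after fixing, via the standard Kabsch/SVD solution. This is an assumption for illustration; the estimator actually proposed in the study may differ.

```python
import numpy as np

def rigid_transform_from_markers(p_before, p_after):
    """Estimate a rigid transform (R, t) aligning marker points before/after fixing.

    p_before, p_after: (N, 3) arrays of corresponding marker positions on the
    membrane, e.g. observed by the cameras embedded in the jig.
    """
    mu_b, mu_a = p_before.mean(axis=0), p_after.mean(axis=0)
    H = (p_before - mu_b).T @ (p_after - mu_a)      # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_a - R @ mu_b
    return R, t
```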
Assistant Professor (2023-present)
Research on robot working intelligence
Specially-Appointed Assistant Professor (2021-2023)
Research on robot manipulation planning
Specially-Appointed Assistant Professor (2022-2023) - Robot Learning Laboratory at NAIST
Research on robotic assembly
Specially-Appointed Assistant Professor (2021-2022) - Human Robotics Laboratory at NAIST
Research on robotic waste sorting
Ph.D. Student (2018-2021)
Agile reconfigurable robotic assembly system
Research Intern (2018-2020) - Microsoft Development, Applied Robotics Research Team
Research on interactive-learning-from-observation (contributed to LabanotationSuite)
Super Creator (certified by METI & IPA) - MITOU Program 2018
A framework for quickly deploying image recognition AI
Master's Course Student (2016-2018)
Tactile-based pouring motion inspired by human skill
Advanced Course Student (2014-2016)
Modeling forearm pronation-supination to analyze baseball pitching
Technical College Student (2009-2014)
Artificial muscle-driven robotic arm with biarticular muscles