Frontiers of Mechanical Engineering

Front. Mech. Eng.    2021, Vol. 16 Issue (3) : 528-545    https://doi.org/10.1007/s11465-021-0638-2
RESEARCH ARTICLE
A novel task-oriented framework for dual-arm robotic assembly task
Zhengwei WANG1, Yahui GAN1, Xianzhong DAI1
1. School of Automation, Southeast University, Nanjing 210096, China
2. Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education, Nanjing 210096, China
Abstract

In industrial manufacturing, the deployment of dual-arm robots in assembly tasks has become a trend. However, making dual-arm robots more intelligent in such applications remains an open and challenging issue. This paper proposes a novel framework that combines task-oriented motion planning with visual perception to facilitate robot deployment from perception to execution and to solve assembly problems with dual-arm robots. In this framework, visual perception is first employed to track the effects of robot behaviors and to observe the states of the workpieces, so that task performance can be abstracted into high-level states for intelligent reasoning. The assembly task and the manipulation sequence are then obtained by analyzing and reasoning over the state transition trajectory of the environment and the workpieces. Next, the corresponding assembly manipulations are generated and parameterized according to the differences between adjacent states, combined with prebuilt knowledge of the scenario. Experiments are set up with a dual-arm robotic system (an ABB YuMi robot and an RGB-D camera) to validate the proposed framework. Experimental results demonstrate the effectiveness of the framework and its promising value for practical application.
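As a reading aid, the sketch below illustrates the core idea summarized in the abstract: abstracted world states are compared pairwise along the state transition trajectory, and the differences between adjacent states are mapped to parameterized operations (the operation names follow Tab. 1). This is a minimal sketch, not the paper's implementation; the names WorldState, diff_states, and plan_from_trajectory are hypothetical.

```python
# Minimal illustrative sketch (not the paper's implementation) of turning a
# trajectory of abstracted world states into a parameterized operation sequence.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass(frozen=True)
class WorldState:
    """High-level snapshot: (workpiece name, (pose label, assembled partner or None))."""
    workpieces: Tuple[Tuple[str, Tuple[str, Optional[str]]], ...]

    def as_dict(self) -> Dict[str, Tuple[str, Optional[str]]]:
        return dict(self.workpieces)

def diff_states(prev: WorldState, curr: WorldState) -> List[Tuple[str, str, dict]]:
    """Compare two adjacent states and emit (operation, workpiece, parameters) triples."""
    ops = []
    p, c = prev.as_dict(), curr.as_dict()
    for name, (curr_pose, curr_partner) in c.items():
        prev_pose, prev_partner = p.get(name, ("unknown", None))
        if curr_partner is not None and curr_partner != prev_partner:
            # A new assembly relation appeared: compute the pose difference, then mate.
            ops.append(("pose", name, {"partner": curr_partner}))
            ops.append(("assemb", name, {"partner": curr_partner}))
        elif curr_pose != prev_pose:
            # The workpiece was relocated: realize the change as pick followed by place.
            ops.append(("pick", name, {"from": prev_pose}))
            ops.append(("place", name, {"to": curr_pose}))
    return ops

def plan_from_trajectory(trajectory: List[WorldState]) -> List[Tuple[str, str, dict]]:
    """Walk the state transition trajectory and concatenate the per-step operations."""
    plan = [("sense", "scene", {})]  # always start by perceiving the scene
    for prev, curr in zip(trajectory, trajectory[1:]):
        plan.extend(diff_states(prev, curr))
    return plan
```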

Keywords: dual-arm assembly, AI reasoning, intelligent system, task-oriented motion planning, visual perception
Corresponding Author(s): Yahui GAN   
Just Accepted Date: 02 August 2021   Online First Date: 31 August 2021    Issue Date: 24 September 2021
 Cite this article:   
Zhengwei WANG, Yahui GAN, Xianzhong DAI. A novel task-oriented framework for dual-arm robotic assembly task[J]. Front. Mech. Eng., 2021, 16(3): 528-545.
 URL:  
https://academic.hep.com.cn/fme/EN/10.1007/s11465-021-0638-2
https://academic.hep.com.cn/fme/EN/Y2021/V16/I3/528
Fig.1  Sketch of dual-arm assembly scenario.
Fig.2  Proposed deep perception pipeline for assembly scenario visual processing.
Fig.3  Models with prior knowledge: (a) a pair of assemblable models characterized by a cylinder; (b) a pair of assemblable models characterized by a quadrangular prism; (c) a pair of assemblable models characterized by a hexagonal prism; and (d) a pair of assemblable models characterized by a triangular prism. A model marked with left/right can be manipulated only by the left/right arm of the robot.
Fig.4  Illustration of the graph structure of G k.
Fig.5  
Fig.6  Diagram of an example of dual-arm collaborative assembly scenario.
Fig.7  State transition diagram for the global and local state changes.
Seq Operation Description
1 sense Assembly environment perception
2 pick Object grabbing action
3 place Object placement action
4 pose Detect and calculate pose differences
5 assemb Assembly posture alignment
Tab.1  Operations of state transition mechanism
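The five operations in Tab. 1 form a small vocabulary of parameterized primitives. The snippet below is an illustrative sketch only, assuming a simple enum-plus-dispatcher design; the handler functions are stubs and do not correspond to the paper's implementation.

```python
# Illustrative only: the five operations of Tab. 1 modeled as an enumeration with
# a simple dispatcher. Handler names and signatures are assumptions, not the paper's API.
from enum import Enum
from typing import Callable, Dict

class Operation(Enum):
    SENSE = "sense"    # assembly environment perception
    PICK = "pick"      # object grabbing action
    PLACE = "place"    # object placement action
    POSE = "pose"      # detect and calculate pose differences
    ASSEMB = "assemb"  # assembly posture alignment

def execute(op: Operation, handlers: Dict[Operation, Callable[..., None]], **params) -> None:
    """Look up the handler registered for an operation and run it with its parameters."""
    handlers[op](**params)

# Example usage with stub handlers (real handlers would call the motion planner):
handlers = {
    Operation.SENSE: lambda **p: print("perceive scene", p),
    Operation.PICK: lambda **p: print("pick", p),
    Operation.PLACE: lambda **p: print("place", p),
    Operation.POSE: lambda **p: print("compute pose difference", p),
    Operation.ASSEMB: lambda **p: print("align and mate", p),
}
execute(Operation.PICK, handlers, workpiece="peg_left", arm="left")
```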
Fig.8  Causalities among states and operations. The workpieces' states are labeled in round boxes, the solid directed arcs denote operations, the red dashed lines represent the coordinate relations, and the green dashed lines denote the results of the sense operation.
Fig.9  
Fig.10  
Fig.11  Simulation platform for dual-arm coordinated assembly.
Fig.12  Schematic diagram of three-layer structure from objects to graphs. The bottom layer is the workpieces, the middle layer shows the identified point cloud models of the corresponding workpiece, and the top layer is the abstracted graph representation of the workpieces in the scene.
Fig.13  Snapshots of dual-arm cooperative assembly. The time below each snapshot is the total time taken by the robot system up to the current action.
Fig.14  Local process of state transition of workpieces in the dual-arm collaborative assembly.
Fig.15  Motion curves during the assembly: (a) left manipulator, and (b) right manipulator.
Fig.16  Dual-arm robotic system for assembly tasks in the real world.
Fig.17  Initial point cloud scene of the workbench used for the dual-arm assembly task.
Fig.18  Synchronized data of the color image and point cloud data.
Fig.19  Clustered data of point cloud.
Fig.20  Recognition results with estimated pose of each workpiece.
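Figs. 17-20 trace the perception steps from the raw workbench point cloud to recognized workpieces with estimated poses. The snippet below sketches one way such a pipeline could look, using Open3D as a stand-in library (the paper does not state which point cloud library it uses); the file names, clustering parameters, and ICP threshold are placeholders.

```python
# A hedged sketch of the kind of processing behind Figs. 17-20: cluster the
# workbench point cloud, then estimate each workpiece's pose against a known model.
import numpy as np
import open3d as o3d

scene = o3d.io.read_point_cloud("workbench_scene.pcd")    # placeholder path
model = o3d.io.read_point_cloud("workpiece_model.pcd")    # placeholder workpiece model

# Segment the scene into candidate workpieces (cf. Fig. 19).
labels = np.array(scene.cluster_dbscan(eps=0.01, min_points=50))
clusters = [scene.select_by_index(np.where(labels == k)[0].tolist())
            for k in range(labels.max() + 1)]

# Estimate the pose of the model in each cluster with ICP (cf. Fig. 20).
for cluster in clusters:
    result = o3d.pipelines.registration.registration_icp(
        model, cluster, max_correspondence_distance=0.01, init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    print("fitness:", result.fitness)
    print("estimated pose:\n", result.transformation)
```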
Fig.21  Snapshots of physical dual-arm cooperative assembly: (a) initial state of the world, (b) preparation for grasping targets, (c) grabbing target objects, (d) retracting to the prepare position, (e) adjusting the assembly axis, (f) assembly precession, (g) object release, (h) assembly adjustment operation, and (i) assembly completed.
Variables
a_i  Action name
A  Set of actions
A_seq  A sequence of actions
B  Basic information about the instance
D  Type identifier
D_m  Domain-specific model
E_k  Edges of a graph
G  Target world state specified by the task
G_k  Graph representation of the world
G_N^k  Graph representation of a target and its obstacles
L  Limitation conditions
M  Set of pairs of actions and their corresponding parameters
N_k  Nodes of a graph
O  General representation of the workpieces
O_k^i (i ∈ ℕ)  Instance of a type of workpiece
paras_i  Parameter list
s  World state of interest at a time point
S  State representation for workpieces
W  World physical properties
z  World state of interest
z_des  Destination of the world state
z_ini  Initial world state
z_trj  A series of transitions of the world state
Σ  Mechanisms that drive the world changes
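To make the notation above concrete, the sketch below shows one possible instantiation of the graph representation G_k = (N_k, E_k), with workpiece instances O_k^i as nodes and assemblability or obstruction relations as edges. It uses networkx purely for illustration; the node names, poses, and relation labels are invented, and this is not the paper's data structure.

```python
# Illustrative sketch only: the graph representation G_k = (N_k, E_k) of the world,
# where nodes are workpiece instances O_k^i and edges carry pairwise relations.
import networkx as nx

G_k = nx.Graph()

# N_k: workpiece instances observed at time step k (names and poses are made up).
G_k.add_node("peg_cylinder_left", type="cylinder", pose=(0.30, 0.10, 0.05))
G_k.add_node("hole_cylinder_right", type="cylinder", pose=(0.30, -0.12, 0.05))
G_k.add_node("obstacle_block", type="prism", pose=(0.25, 0.00, 0.05))

# E_k: relations between instances, e.g., which pairs can be assembled and which
# objects obstruct a target (cf. the subgraph G_N^k of a target and its obstacles).
G_k.add_edge("peg_cylinder_left", "hole_cylinder_right", relation="assemblable")
G_k.add_edge("peg_cylinder_left", "obstacle_block", relation="obstructed_by")

# The subgraph of a target and its obstacles can then be extracted directly.
target = "peg_cylinder_left"
G_N_k = G_k.subgraph([target] + list(G_k.neighbors(target)))
print(sorted(G_N_k.nodes))
```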
Abbreviations
AERM Assembly environment representation model
AI Artificial intelligence
FSA Functional sequence of actions
STRIPS Stanford Research Institute Problem Solver
  