
Assembly Planning by Recognizing a Graphical Instruction Manual

 Added by Zhenting Wang
 Publication date 2021
 Language: English





This paper proposes a robot assembly planning method that automatically reads graphical instruction manuals designed for humans. The method generates an Assembly Task Sequence Graph (ATSG) by recognizing the manual: a graph describing the assembly procedure, built by detecting the types of parts in the instruction images, completing missing information automatically, and correcting detection errors automatically. To build an ATSG, the proposed method first extracts the information of the parts contained in each image of the graphical instruction manual. Using the extracted part information, it then estimates the appropriate work motions and tools for the assembly task. Next, the method builds the ATSG by considering the relationships between consecutive images, which makes it possible to estimate parts left undetected due to occlusion from the information of the entire image series. Finally, by collating the total count of each part with the generated ATSG, the excess or deficiency of parts is investigated, and task procedures are removed or added accordingly. In the experiments, we build an ATSG from the graphical instruction manual of a chair and demonstrate that the action sequences found in the ATSG can be executed by a dual-arm robot. The results show that the proposed method is effective and simplifies robot teaching for automatic assembly.
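The pipeline described above — per-image part detection, diffing consecutive images to recover assembly steps, and reconciling the result against total part counts — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the Counter-based representation, and the coarse excess-part correction are all assumptions.

```python
from collections import Counter

def build_atsg(detections, bill_of_materials):
    """Build a simple assembly-step sequence from per-image part detections.

    detections: list of Counters, one per manual image (parts visible).
    bill_of_materials: Counter of true total part counts, used to correct
    detection errors (occluded or spuriously detected parts).
    """
    steps = []
    assembled = Counter()
    for i, seen in enumerate(detections):
        # Parts newly visible relative to what is already assembled are
        # assumed to be attached in this step.
        added = seen - assembled
        if added:
            steps.append((f"step_{i}", dict(added)))
            assembled += added
    # Reconcile with the bill of materials: append a step for parts the
    # detector never saw (e.g. occluded throughout the image series).
    missing = bill_of_materials - assembled
    if missing:
        steps.append(("step_missing", dict(missing)))
    # Drop any part type detected in excess of the bill of materials
    # (a coarse stand-in for the paper's error-correction step).
    excess = assembled - bill_of_materials
    cleaned = []
    for name, parts in steps:
        kept = {p: n for p, n in parts.items() if p not in excess}
        if kept:
            cleaned.append((name, kept))
    return cleaned
```

Counter subtraction keeps only positive counts, which is what makes the image-to-image diff and the missing/excess reconciliation one-liners here.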




Related research

The authors propose a hybrid Fuzzy Logic-Genetic Algorithm (FL-GA) methodology to plan the automatic assembly and disassembly sequence of products. The GA-Fuzzy Logic approach is implemented on two levels. The first level of hybridization consists of the development of a fuzzy controller for the parameters of an assembly or disassembly planner based on GAs. This controller acts on the mutation probability and crossover rate in order to adapt their values dynamically while the algorithm runs. The second level consists of the identification of the optimal assembly or disassembly sequence by a fuzzy function, in order to obtain closer control over the technological knowledge of the assembly/disassembly process. Two case studies were analyzed to test the efficiency of the Fuzzy-GA methodologies.
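The first hybridization level — a controller that retunes mutation probability and crossover rate while the GA runs — might look like the crisp approximation below, with triangular memberships standing in for a full fuzzy rule base. The function name, inputs, and parameter ranges are assumptions, not the paper's design.

```python
def fuzzy_adapt(diversity, stagnation):
    """Adapt GA parameters from two normalized inputs in [0, 1].

    diversity: population diversity (0 = converged, 1 = fully diverse).
    stagnation: generations without improvement, normalized.
    Low diversity or high stagnation pushes mutation up and crossover
    down, helping the search escape premature convergence.
    """
    low_div = max(0.0, 1.0 - 2.0 * diversity)   # triangular membership
    high_stag = max(0.0, 2.0 * stagnation - 1.0)
    pressure = max(low_div, high_stag)          # fuzzy OR (max norm)
    mutation = 0.01 + 0.29 * pressure           # 0.01 .. 0.30
    crossover = 0.90 - 0.40 * pressure          # 0.90 .. 0.50
    return mutation, crossover
```

A real fuzzy controller would defuzzify over several overlapping rules; the max-norm shortcut here keeps the same qualitative behavior in a few lines.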
In automated manufacturing, robots must reliably assemble parts of various geometries and low tolerances. Ideally, they plan the required motions autonomously. This poses a substantial challenge due to high-dimensional state spaces and non-linear contact dynamics. Furthermore, object poses and model parameters, such as friction, are not exactly known and are a source of uncertainty. The method proposed in this paper models the task of parts assembly as a belief space planning problem over an underlying impedance-controlled, compliant system. To solve this planning problem, we introduce an asymptotically optimal belief space planner by extending an optimal, randomized, kinodynamic motion planner to non-deterministic domains. Under an expansiveness assumption, we establish probabilistic completeness and asymptotic optimality. We validate our approach in thorough simulated and real-world experiments on multiple assembly tasks. The experiments demonstrate our planner's ability to reliably assemble objects, based solely on CAD models as input.
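The core idea of belief space planning — evaluating actions against a distribution over states and uncertain model parameters such as friction, rather than a single state — can be illustrated with a one-dimensional particle belief. The dynamics, noise model, and greedy one-step choice below are toy assumptions for illustration, not the paper's asymptotically optimal planner.

```python
import random

def propagate(belief, action, friction_samples, rng):
    """Propagate a particle belief through uncertain 1-D dynamics.

    Each particle is a position; friction (a model parameter) is not
    exactly known, so each particle draws its own friction value.
    """
    return [x + action * (1.0 - f) + rng.gauss(0.0, 0.01)
            for x, f in zip(belief, friction_samples)]

def choose_action(belief, goal, actions, rng):
    """Greedy one-step belief-space choice: minimize expected squared
    distance to the goal over the particle set. Actions whose outcome
    spreads more under friction uncertainty are penalized implicitly."""
    best, best_cost = None, float("inf")
    for a in actions:
        frictions = [rng.uniform(0.1, 0.3) for _ in belief]
        nxt = propagate(belief, a, frictions, rng)
        cost = sum((x - goal) ** 2 for x in nxt) / len(nxt)
        if cost < best_cost:
            best, best_cost = a, cost
    return best
```

Because friction scales the effective displacement, an action that overshoots in expectation also has a wider outcome spread, so planning over the belief trades off both effects at once.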
Robotic assembly planning has the potential to profoundly change how buildings can be designed and created. It enables architects to explicitly account for the assembly process already during the design phase, and enables efficient building methods that profit from the robots' different capabilities. Previous work has addressed planning of robot assembly sequences and identifying the feasibility of architectural designs. This paper extends previous work by enabling assembly planning with large, heterogeneous teams of robots. We present a scalable planning system which enables parallelization of complex task and motion planning problems by iteratively solving smaller sub-problems. Combining optimization methods to solve for manipulation constraints with a sampling-based bi-directional space-time path planner enables us to plan cooperative multi-robot manipulation with unknown arrival times. Thus, our solver allows for completing sub-problems and tasks with differing timescales and synchronizes them effectively. We demonstrate the approach on multiple case studies and on two long-horizon building assembly scenarios to show the robustness and scalability of our algorithm.
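The decomposition idea — solving a large allocation problem as a sequence of smaller sub-problems while keeping robots with differing task durations synchronized — can be caricatured with a greedy list scheduler. This sketch assumes acyclic dependencies and ignores motion planning entirely; all names are illustrative.

```python
def schedule(tasks, durations, deps, n_robots):
    """Greedy parallel schedule: repeatedly assign ready tasks (all
    dependencies already scheduled) to the earliest-free robot.

    Returns the makespan and per-task finish times. Assumes the
    dependency graph is acyclic.
    """
    robots = [0.0] * n_robots            # time each robot becomes free
    done_at = {}                         # task -> finish time
    remaining = set(tasks)
    while remaining:
        # Sub-problem: only tasks whose dependencies are all scheduled.
        ready = [t for t in remaining
                 if all(d in done_at for d in deps.get(t, []))]
        for t in sorted(ready):
            i = min(range(n_robots), key=lambda r: robots[r])
            # Synchronize: wait for both the robot and the dependencies.
            start = max(robots[i],
                        max((done_at[d] for d in deps.get(t, [])),
                            default=0.0))
            done_at[t] = start + durations[t]
            robots[i] = done_at[t]
            remaining.remove(t)
    return max(done_at.values()), done_at
```

The `start = max(robot free, last dependency)` line is the synchronization point: a fast robot simply idles until the slower sub-problem it depends on completes.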
As the number of robots in our daily surroundings like homes, offices, restaurants, factory floors, etc. increases rapidly, the development of natural human-robot interaction mechanisms becomes more vital, as it dictates the usability and acceptability of the robots. One of the valued features of such a cohabitant robot is that it performs tasks instructed in natural language. However, it is not trivial to execute the human-intended tasks, as natural language expressions can have large linguistic variations. Existing works assume either that a single task instruction is given to the robot at a time or that an instruction contains multiple independent tasks. However, complex task instructions composed of multiple inter-dependent tasks are not handled efficiently in the literature. There can be ordering dependency among the tasks, i.e., the tasks have to be executed in a certain order, or there can be execution dependency, i.e., an input parameter or the execution of a task depends on the outcome of another task. Understanding such dependencies in a complex instruction is not trivial if unconstrained natural language is allowed. In this work, we propose a method to find the intended order of execution of multiple inter-dependent tasks given in a natural language instruction. Based on our experiment, we show that our system is very accurate in generating a viable execution plan from a complex instruction.
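Once ordering and execution dependencies have been extracted from an instruction, a viable execution plan is any topological order of the task graph. A minimal sketch using the standard-library `graphlib` (the task names are illustrative, not from the paper):

```python
from graphlib import TopologicalSorter

def execution_order(dependencies):
    """Order inter-dependent tasks extracted from an instruction.

    dependencies maps each task to the set of tasks it depends on
    (ordering and execution dependencies alike); a topological sort
    yields one viable execution plan, with every task scheduled after
    everything it depends on.
    """
    return list(TopologicalSorter(dependencies).static_order())
```

For "bring me the cup, fill it, and hand it over", the extracted graph might be `{"hand over cup": {"fill cup"}, "fill cup": {"fetch cup"}, "fetch cup": set()}`, and the sort recovers fetch-then-fill-then-hand-over; the hard part the paper addresses is extracting that graph from unconstrained language in the first place.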
Manipulations of a constrained object often use a non-rigid grasp that allows the object to rotate relative to the end effector. This orientation slip strategy is often present in natural human demonstrations, yet it is generally overlooked in methods to identify constraints from such demonstrations. In this paper, we present a method to model and recognize prehensile orientation slip in human demonstrations of constrained interactions. Using only observations of an end effector, we can detect the type of constraint, parameters of the constraint, and orientation slip properties. Our method uses a novel hierarchical model selection method that is informed by multiple origins of physics-based evidence. A study with eight participants shows that orientation slip occurs in natural demonstrations and confirms that it can be detected by our method.
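A much-simplified version of the detection problem: with a rigid grasp the relative orientation between end effector and object stays constant, while under orientation slip it drifts. The threshold test below is a crisp stand-in for the paper's hierarchical, physics-informed model selection; the function name and tolerance are assumptions.

```python
from statistics import pstdev

def detect_orientation_slip(ee_yaw, obj_yaw, tol=0.05):
    """Classify a grasp as rigid or slipping from orientation traces.

    ee_yaw, obj_yaw: end-effector and object yaw angles (radians)
    sampled over time. With a rigid grasp the relative angle is
    constant; if its spread exceeds tol, we label the grasp as
    exhibiting orientation slip.
    """
    relative = [e - o for e, o in zip(ee_yaw, obj_yaw)]
    return "slip" if pstdev(relative) > tol else "rigid"
```

Observing only the end effector, as in the paper, would additionally require estimating the object's motion from the constraint model itself, which is where the evidence-based model selection comes in.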
