
Vision-based kinematic analysis of the Delta robot for object catching

Published online by Cambridge University Press:  19 October 2021

Sachin Kansal*
Affiliation:
Thapar Institute of Engineering and Technology, Patiala, 147004, India
Sudipto Mukherjee
Affiliation:
Indian Institute of Technology Delhi, New Delhi, 110016, India
*
*Corresponding author. E-mail: sachinkansal87@gmail.com

Summary

This paper proposes a vision-based kinematic analysis and kinematic parameter identification of the proposed architecture, designed to perform object catching in real time. Performing the inverse kinematics requires precise estimates of the link lengths and other parameters. Kinematic identification of the Delta robot, based on the Model10 implicit model with ten parameters, is implemented using an iterative least-squares method, and the implicit loop-closure equations are modelled. A vision-based kinematic analysis of the Delta robots for catching is discussed. A predefined ArUco library is used to obtain a unique kinematic solution for the moving platform with respect to the fixed base. The re-projection error obtained while calibrating the vision-sensor module is 0.10 pixels. The proposed architecture is interfaced with the hardware using a PID controller. Quadrature encoders with a resolution of 0.15 degrees are embedded in the experimental setup to close the loop, acting as the feedback unit.
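The inverse kinematics referred to above can be sketched with the standard closed-form solution for a Delta robot (one rotary actuator per arm, arms spaced 120 degrees apart). The geometry parameters below (`f`, `e`, `rf`, `re`) are illustrative placeholders, not the identified values from the paper:

```python
import math

def delta_ik(x, y, z, f=400.0, e=100.0, rf=200.0, re=400.0):
    """Closed-form inverse kinematics of a 3-DOF Delta robot.

    f  -- side length of the fixed (base) equilateral triangle
    e  -- side length of the moving (end-effector) triangle
    rf -- upper arm (drive link) length
    re -- forearm (parallelogram link) length
    All values are illustrative, not the paper's identified parameters.
    Returns the three drive angles in radians, or raises ValueError
    when (x, y, z) lies outside the workspace.
    """
    def arm_angle(x0, y0, z0):
        # Solve one arm in its own vertical plane: intersect the circle
        # swept by the upper arm with the sphere reachable by the forearm.
        y1 = -f / (2.0 * math.sqrt(3.0))   # drive joint offset along -y
        y0 -= e / (2.0 * math.sqrt(3.0))   # shift target to the wrist joint
        a = (x0 * x0 + y0 * y0 + z0 * z0
             + rf * rf - re * re - y1 * y1) / (2.0 * z0)
        b = (y1 - y0) / z0
        d = -(a + b * y1) ** 2 + rf * (b * b * rf + rf)  # discriminant
        if d < 0:
            raise ValueError("target outside workspace")
        yj = (y1 - a * b - math.sqrt(d)) / (b * b + 1.0)  # outward elbow
        zj = a + b * yj
        return math.atan2(-zj, y1 - yj)

    thetas = []
    for k in range(3):
        phi = 2.0 * math.pi * k / 3.0      # arms 120 degrees apart
        xr = x * math.cos(phi) + y * math.sin(phi)   # rotate target into arm frame
        yr = -x * math.sin(phi) + y * math.cos(phi)
        thetas.append(arm_angle(xr, yr, z))
    return thetas
```

For a target on the central axis all three angles coincide, as the symmetry of the mechanism requires.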

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press

Figure 1. Vision-based kinematic identification experimental setup [39–41].
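The identification pipeline of Fig. 1 rests on an iterative least-squares update. As a toy stand-in for the paper's ten-parameter Model10 identification, the sketch below recovers a single link length with a Gauss-Newton iteration; the one-parameter model and the synthetic data are assumptions for illustration only:

```python
def identify_length(xs, ys, L0, iters=30):
    """Gauss-Newton (iterative least-squares) estimate of a link length L
    from measurements of the toy model y = sqrt(L^2 - x^2).
    A one-parameter stand-in for Model10's ten-parameter identification."""
    L = L0
    for _ in range(iters):
        # Residuals r_i = y_i - sqrt(L^2 - x_i^2) and Jacobian dr_i/dL
        r = [y - (L * L - x * x) ** 0.5 for x, y in zip(xs, ys)]
        J = [-L / (L * L - x * x) ** 0.5 for x in xs]
        # One-parameter normal equations: dL = -(J . r) / (J . J)
        L -= sum(j * ri for j, ri in zip(J, r)) / sum(j * j for j in J)
    return L

# Synthetic noise-free data generated with a true length of 200 mm;
# starting the iteration from 150 mm recovers the true value.
xs = [10.0, 50.0, 90.0, 130.0]
ys = [(200.0 ** 2 - x * x) ** 0.5 for x in xs]
L_hat = identify_length(xs, ys, 150.0)
```

The same normal-equation step generalizes to a parameter vector by solving J^T J Δp = -J^T r, which is the form a ten-parameter implicit model would use.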


Algorithm 1 Extracting the pose of the manipulated object (a cube), that is, three translations and three rotations (6-DOF), with respect to the fixed base coordinate system using the camera as a vision sensor. The inverse kinematics of all three Delta robots is then performed, estimating the angles [$\theta$1 to $\theta$9] across the three robots, and the calculated angles are fed to the controller of the manipulating system. Feedback is obtained from the encoders, and a PID control scheme is implemented; the PID gains are tuned at every cycle using an optimization technique. The kinematic analysis and identification results are validated against the experimental setup.


Figure 2. Three Delta combined system architecture.


Figure 3. Mechanical structure of individual manipulator.


Figure 4. Schematic diagram of parallel mechanism.


Figure 5. Delta robot kinematic representation.


Figure 6. Delta robot forward position kinematics diagram.


Figure 7. Experimental setup for pose estimation.


Figure 8. Closer view of experimental setup for pose estimation.


Figure 9. Validation of actual and estimated Angle Theta1 (in degrees).


Figure 10. Validation of actual and estimated Angle Theta2 (in degrees).


Figure 11. Validation of actual and estimated Angle Theta3 (in degrees).


Figure 12. Validation of actual and estimated end-effector position (X-axis) with respect to the Delta frame.


Figure 13. Desired and measured end-effector (platform) position.


Figure 14. Error in measured and desired end-effector (platform) position.


Figure 15. (a). A planar circular path is provided to the Delta robot. (b). Desired and measured end-effector (circle trajectory) position. (d). Measured angle for Leg1, Leg2 and Leg3 in circle profile. (e). Error estimation of the angles of Leg1, Leg2 and Leg3 for Delta1.


Figure 16. (a). A helical trajectory path is provided to the Delta robot. (b). Desired and measured end-effector (helical trajectory) position. (c). Desired angle for Leg1, Leg2 and Leg3 in helical profile. (d). Measured angle for Leg1, Leg2 and Leg3 in helical profile. (e). Error estimation of the angles of Leg1, Leg2 and Leg3 for Delta1.


Figure 17. (a). Freefall catching of a cube in space (virtual). (b). Freefall catching of a cube in space (real time). (c). Downward fall trajectory of a cube.


Figure 18. Uncertainty in back-projected world coordinates.
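The back-projection uncertainty of Fig. 18 can be illustrated with the pinhole camera model: a reprojection error on the order of the 0.10 pixels quoted in the Summary maps to a world-coordinate uncertainty that grows linearly with depth. The intrinsics below are illustrative placeholders, not the calibrated values from the paper:

```python
def back_project(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) at a known depth (metres)
    into camera-frame coordinates (x, y, z). Intrinsics fx, fy (focal
    lengths in pixels) and cx, cy (principal point) are illustrative."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def pixel_uncertainty_to_world(err_px, depth, f):
    """First-order propagation of a pixel error through the pinhole model:
    a lateral error of err_px pixels maps to err_px * depth / f metres."""
    return err_px * depth / f

# With an assumed 600 px focal length, a 0.10 px reprojection error at
# 1.2 m depth corresponds to a sub-millimetre lateral uncertainty.
sigma_world = pixel_uncertainty_to_world(0.10, 1.2, 600.0)
```

This linear growth with depth is why the back-projected uncertainty in Fig. 18 widens as the object moves away from the camera.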