
A 94.1 g scissors-type dual-arm cooperative manipulator for plant sampling by an ornithopter using a vision detection system

Published online by Cambridge University Press:  26 June 2023

Saeed Rafee Nekoo
Affiliation:
GRVC Robotics Laboratory, Departamento de Ingeniería de Sistemas y Automática, Escuela Técnica Superior de Ingeniería, Universidad de Sevilla, Seville, Spain
Daniel Feliu-Talegon
Affiliation:
GRVC Robotics Laboratory, Departamento de Ingeniería de Sistemas y Automática, Escuela Técnica Superior de Ingeniería, Universidad de Sevilla, Seville, Spain
Raul Tapia
Affiliation:
GRVC Robotics Laboratory, Departamento de Ingeniería de Sistemas y Automática, Escuela Técnica Superior de Ingeniería, Universidad de Sevilla, Seville, Spain
Alvaro C. Satue
Affiliation:
GRVC Robotics Laboratory, Departamento de Ingeniería de Sistemas y Automática, Escuela Técnica Superior de Ingeniería, Universidad de Sevilla, Seville, Spain
Jose Ramiro Martínez-de Dios
Affiliation:
GRVC Robotics Laboratory, Departamento de Ingeniería de Sistemas y Automática, Escuela Técnica Superior de Ingeniería, Universidad de Sevilla, Seville, Spain
Anibal Ollero*
Affiliation:
GRVC Robotics Laboratory, Departamento de Ingeniería de Sistemas y Automática, Escuela Técnica Superior de Ingeniería, Universidad de Sevilla, Seville, Spain
Corresponding author: Anibal Ollero; Email: aollero@us.es

Abstract

The sampling and monitoring of nature have become important subjects due to the rapid loss of green areas. This work proposes a leaf-sampling method using an ornithopter robot equipped with an onboard 94.1 g dual-arm cooperative manipulator. One arm of the robot ends in scissors and the other in a gripper, so that the collection roughly resembles an operation by human fingers. As a step toward autonomy, a stereo camera has been added to the ornithopter to provide visual feedback on the stem, reporting the position for cutting and grasping. The position of the stem is detected by a stereo vision processing system, and the inverse kinematics of the dual arm commands both the gripper and the scissors to the correct position. The commanded trajectories are smooth and avoid damaging the actuators. The vision algorithm executes in real time on the lightweight main processor of the ornithopter, which sends the estimated stem localization to a microcontroller board that controls the arms. Experimental results both indoors and outdoors confirm the feasibility of this sampling method. The dual-arm manipulator operates after the system has perched on a branch; perching has been presented in previous works, and here we focus on the sampling procedure and the vision/manipulator design. Flight experiments also validate that the weight of the dual-arm system is suitable for installation on the flapping-wing flying robot.

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1. Introduction

The loss of green space on the planet motivates researchers to focus on equipment for monitoring and sampling trees, farms, and other vegetation. Robotics offers a variety of reliable solutions for sampling and monitoring [1]. Monitoring can be categorized into two approaches: (1) vision-based remote sensing and (2) intervention. The first category uses computer vision algorithms to detect plants, trees, etc., in order to classify them or alert the farmer to plant defects, for example, leaf and plant segmentation in large-scale fields for agricultural purposes [2]. Bellocchio et al. used an unmanned aerial vehicle (UAV) and vision cameras to estimate the yield index of a tree in agriculture [3]. An urban farming system also accelerated the growth of lettuce using precision irrigation, fertigation, and weed control [4]. For the second category, an adaptable solution using ground and aerial robotics was proposed to monitor and sample crops, among others [5].

Figure 1. The integrated system: the manipulator and camera attached to the front of the flapping-wing flying robot, with the Arduino board in communication with the NanoPI main processor. The leg holds the bird steadily on a branch for sampling.

Figure 2. The global picture of the flapping-wing robot operation for perching and manipulation. The launching, controlled flight, and perching phases are presented in ref. [8], and stabilization after perching is covered in ref. [11].

The focus of this work is an aerial robot capable of intervention for sampling leaves or parts of a plant (see Fig. 1). Multirotor UAVs have been employed for this task in the literature; however, there is no report of sampling or intervention by ornithopters so far except ref. [6]. The global picture of the flapping-wing robot operation includes several phases: launching, controlled flight, perching, stabilization after perching, manipulation, and finally, take-off again (see Fig. 2). Manipulation is investigated in detail in the current work, and proof of flight capability is presented briefly since that is a critical point. Launch and controlled flight have been studied in previous works [6, 7] and perching in ref. [8]. Stabilization after perching was also covered for this particular concept in the literature [9, 10]. A special case is the motion of the flapping-wing robot on a branch after perching, which requires modifying the body posture of the robot to gain a better position for subsequent manipulation [11]. Luque et al. presented body control after perching for robotic birds using closed-loop feedback controllers and optimal feedback linearization to control the servo-actuated leg; the feedback of the control loop was provided by a motion capture system acting as an external tracking system. Moreover, ref. [12] studies the stabilization control problem of flapping-wing robots just before take-off from a branch. At this stage, the claw of the robot grasps the branch with enough friction to hold the system steady in a stationary condition while performing manipulation. Before take-off, the claw opens and the friction between the claw and the branch vanishes; therefore, a precise control system is needed to keep the equilibrium and perform the take-off in a proper posture. To keep the current work focused on dual-arm manipulation with a flapping wing, perching and stabilization are not considered here; however, to show that the robot is capable of flying with the designed manipulator, the controlled flight is briefly studied. An example of intervention by multirotor UAVs is the fumigation of a field through four nozzles installed under an aerial system [13]. A water sampling device was also investigated and tested using aerial robotics [14]. Multirotor UAVs have a limited payload, and all the add-ons for sampling or monitoring applications must be lightweight. The same constraint applies to flapping-wing flying robots designed for sampling purposes.

Fruit harvesting by robotic technology is a topic similar to the sampling procedure presented in this work. An interesting cutting tool with a rotary blade and gripper was designed and built for tomato harvesting using an industrial manipulator [15]. The manipulator was mounted on a wheeled mobile robot to provide access to the harvesting field; since ground robots performed the task, the weight limit was not critical for the design [16]. To avoid damaging the tomato, the gripper-cutter design was replaced with a soft robotic end-effector, although the cutting mechanism was also reported to be successful [17]. One motor and a mechanical mechanism were designed to guide the tomato into the cutting area of the scissors. Delicate scissors and grippers were also presented for robotic surgery and medical intervention, though they were installed on stationary setups without weight limitations [18–20]. Surgical applications force the scissors to be significantly small; however, precise force generation and measurement increase the weight of the system [21]. Lee et al. presented a hydraulic cutting end-effector for large industrial manipulators that worked similarly to a scissors mechanism [22]. Considering the literature across different domains of robotics, to the best of the authors' knowledge, a scissors-type dual-arm manipulator has not been reported.

Lightweight robotic manipulator design was a strict requirement in this work. This has been a hot topic in the literature; however, the term "lightweight" is interpreted differently at different scales. A list of lightweight designs at different scales is given in refs. [23–25]. Bellicoso et al. designed and developed a five-degree-of-freedom (DoF) aerial manipulator with a 250 g weight and 200 g payload capacity [26]. The arm was installed under a multirotor UAV for manipulation tasks. Barrett et al. designed a 10 kg lightweight manipulator (including the gripper) for a mobile robot [27]. Imanberdiyev et al. proposed a 12-DoF dual-arm manipulator with a mass of 2.5 kg and a payload capacity of 1 kg for aerial manipulation [28]. Suarez et al. developed a 3.4 kg long-reach manipulator with a payload capacity of 2.5 kg for inspection tasks [29].

The weight and payload capacity of ornithopters are far below those of multirotor systems. Flapping-wing technologies are also less developed than conventional UAVs. Most parts of an ornithopter must be built in the laboratory, which makes maintenance, repair, design, and manufacturing more difficult. The weight and payload of the E-Flap robot were reported as 520 g and 500 g, respectively [7]. Hence, only part of the 500 g payload can be devoted to the manipulator, far less than on multirotor UAVs. Soft robotics also offers very lightweight systems, such as a 340 g gripper for handling objects [30]. In the current work, the design limit for the mass of the manipulator to be installed on the E-Flap is 100 g. This 100 g limit follows from one of the objectives of the GRIFFIN Advanced Grant project: to develop a system for monitoring and safe interaction with the environment. The remaining 400 g of payload will later be used by an onboard event camera, an RGB camera for recording high-quality images, and onboard computers for image processing, which leaves 100 g for the manipulator. It must be clarified that the detection camera and its holder are not part of the manipulator in terms of weight distribution. Consequently, the proposed solution is a dual-arm cooperative manipulator, one arm carrying scissors and the other a gripper, altogether weighing less than 100 g.

Ornithopters’ payload constraints are also critical for the onboard processing hardware and, hence, its computational capacity. Multicopters can be equipped with many different sensors and can execute processing-intensive perception methods onboard to enable advanced autonomous functionalities such as GNSS-denied navigation, robust and accurate localization, or perception-based manipulation, see, for example, refs. [31, 32]. Conversely, ornithopters’ strict payload and energy limitations severely constrain the sensors and computing hardware that can be installed on board. This has been addressed in the proposed sampling robot in two ways. First, it includes a lightweight stereo vision system as the main sensor; outdoor stereo systems provide significantly higher robustness to lighting conditions than RGB-D cameras. Second, the algorithms for processing the stereo images have been carefully designed to enable real-time execution on resource-constrained hardware. They run onboard on a low-cost lightweight single-board computer, which was already integrated into the ornithopter, hence involving no additional weight.

Very few computer vision algorithms executed onboard ornithopters have been reported. An obstacle avoidance method for flapping-wing micro aerial vehicles using stereo vision was reported in ref. [33]. In this paper, no significant vibration level is assumed, since sampling is performed once the robot has perched. The stereo vision system is used to detect the 3D position of a stem. Then, the cutting point is computed and used as a reference to automate the manipulation and sampling tasks.

The work presented in this paper is inspired by ref. [6], which described a sampling scheme with a single arm and a gripper (79.7 g) without vision detection; hence, the position of the leaf had to be provided by the user. In this work, we concentrated on usability and added two main novelties to automate sampling and improve performance: (1) a dual-arm manipulator with a scissors-type arm and a gripper and (2) a vision detection system that provides the position of the stem to the arm for sampling (Fig. 1). The dual-arm system proposed in this paper has a mass of 94.1 g (the mass of the dual-arm manipulator and its structure; the 30 g camera and its holder are not part of the dual-arm weight distribution, see Fig. A.1 in the Appendix for more details). In terms of weight, moving from one arm to two and adding a scissors-type manipulator added only 14.4 g to the system in comparison with ref. [6].

The rest of the paper is organized as follows. Section 2 describes the mechanical design of the manipulator, the kinematics and configuration of the system, and the electronics. Section 3 presents the vision system and processing to automate the sampling process. The integrated system is described in Section 4, and the experimental results are reported in Section 5. Concluding remarks are presented in Section 6, and future lines of work are outlined in Section 7.

2. System description

2.1. Mechanical design

The design delivers a dual-arm manipulator, each arm endowed with two DoF in a planar configuration. The weight of any part added to the ornithopter must be kept at a minimum to preserve the flight capability of the robot. The E-Flap mass is 520 g with a payload capacity of almost its own weight [7]. A limit of 100 g was set for the manipulator design; the remaining 400 g of payload will be used for adding an event camera or RGB cameras and an onboard computer for image and video processing in future studies. Based on the weight limit, the preferred material for the linkages and the structure is carbon fiber plate. Servomotors were selected as the actuators of the system: 7.8 g MKS HV75K units with an operating voltage between 3.7 and 8.2 V. This type is a coreless motor with a metal gearbox. In this work, a 5 V power supply is used, at which the servomotors produce 1.9 kg·cm of torque. The left arm carries the end-effector scissors and the right arm a gripper (see Fig. 3). The design is almost symmetric, except for the second links of the left and right arms. The gripper is a 3D-printed part, and the scissors comprise a carbon fiber fixed jaw and a blade. To avoid collisions between the two arms, the blade was placed in a plane 5 mm below the gripper; in this way, when the scissors cut a sample, the gripper holds it so that it does not fall. The overall weight of the system, including the arms, servomotors, Arduino board, and voltage regulators, is 94.1 g (see Appendix). The fixed jaw of the cutter holds the stem against the blade during the cutting process. It should be a relatively hard material, and in this case a carbon fiber plate was chosen; further reasons for selecting carbon fiber are its low weight and the possibility of CNC-cutting it into the desired shape. Putting a blade on the opposite side could also be an option to generate cutting force from both sides, though it would increase the weight of the system. In the case of insufficient cutting force, that design could be considered as well, but for this work and the selected stems, the scissors manipulator handled the cutting process successfully.

Figure 3. The CAD design of the system, that is, ML1 shows motor left 1 (94.1 g is the mass of the dual-arm manipulator and its structure; the camera, 30 g, and its holder are not a part of dual-arm weight distribution, see Fig. A.1 in Appendix for more details).

2.2. Kinematics and configuration

This section presents the kinematics equations for the developed dual-arm cooperative manipulator. Let us use the sub-index $k=\textrm{r, l}$ to denote the right arm $(k=\textrm{r})$ and the left arm $(k=\textrm{l})$ . Let $\theta _{k,1}$ and $\theta _{k,2}$ (rad) be the angular positions of the first and second joints of arm $k$ (see Fig. 4). The lengths of the links are $l_{k,1}$ , $l_{k,2}$ (m). The relation between the base of each arm and its end-effector can be described as:

Figure 4. The kinematics and axes definition of the dual-arm manipulator.

(1) \begin{equation} \left [ \begin{array}{cllll} \Delta X_{k,\textrm{t}} \\[5pt] \Delta Y_{k,\textrm{t}} \end{array} \right ] = \left [ \begin{array}{cllll} l_{k,1} \cos{\!(\theta _{k,1})} + l_{k,2} \cos{\!(\theta _{k,1}+\theta _{k,2})} \\[5pt] l_{k,1} \sin{\!(\theta _{k,1})} + l_{k,2} \sin{\!(\theta _{k,1}+\theta _{k,2})} \end{array} \right ]. \end{equation}
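For illustration, a minimal Python sketch of Eq. (1) follows; the link lengths and joint angles used in the example are placeholder values, not the dimensions of the prototype.

```python
import numpy as np

def forward_kinematics(theta1, theta2, l1, l2):
    """Eq. (1): planar two-DoF forward kinematics, mapping the joint
    angles (rad) of one arm to the end-effector offset from its base."""
    dx = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    dy = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return dx, dy

# Placeholder link lengths (m) and joint angles (rad):
print(forward_kinematics(np.pi / 4, -np.pi / 3, 0.06, 0.05))
```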

The inverse kinematics of the system can be obtained from the desired Cartesian positions. The inverse kinematics of a two-link arm has two solutions for reaching the desired point, which leads to two different configurations: (1) elbow-up and (2) elbow-down. We use the elbow-up configuration for the left arm and the elbow-down configuration for the right one (see Fig. 5). The choice of a different configuration for each arm avoids collisions between the arms, shortens the trajectories (reducing the time interval), and satisfies the kinematic constraints.

Figure 5. Different configuration of the manipulators.

By squaring and adding the components of (1), we get the coordinate $\theta _{k,2}$ for both configurations:

(2) \begin{equation} \theta _{k,2} = \pm \arccos\!{\left (\frac{\Delta X_{k,\textrm{t}}^2+\Delta Y_{k,\textrm{t}}^2-l_{k,1}^2-l_{k,2}^2}{2l_{k,1}l_{k,2}}\right )}. \end{equation}

The coordinate $\theta _{k,1}$ can be expressed as $\theta _{k,1}=\gamma \mp \beta$ , and using trigonometric relations, it yields:

(3) \begin{equation} \theta _{k,1} = \arctan\!{\left (\frac{\Delta Y_{k,\textrm{t}}}{\Delta X_{k,\textrm{t}}}\right )} \mp \arctan\!{\left (\frac{l_{k,2} \sin{\!(\theta _{k,2})}}{l_{k,1}+l_{k,2}\cos{\!(\theta _{k,2})}}\right )}. \end{equation}

The main advantage of this approach is that it recaptures both the elbow-up and elbow-down solutions by choosing the negative and positive signs in Eqs. (2) and (3), respectively.

Let us consider the base of the right arm as the reference frame for the desired coordinates ( $x_{\textrm{t}}$ , $y_{\textrm{t}}$ ). Then, we see that to reach that point with both arms, we have $x_{\textrm{t}}=\Delta X_{\textrm{l},\textrm{t}}=\Delta X_{\textrm{r},\textrm{t}}+b$ and $y_{\textrm{t}}=\Delta Y_{\textrm{l},\textrm{t}}=\Delta Y_{\textrm{r},\textrm{t}}$ .
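The following minimal sketch implements Eqs. (2)–(3) together with the base-offset relation above; the mapping between the sign of $\theta _{k,2}$ and the elbow-up/elbow-down configurations, as well as the numerical geometry, are illustrative assumptions.

```python
import numpy as np

def inverse_kinematics(dx, dy, l1, l2, elbow_up=True):
    """Eqs. (2)-(3): planar two-DoF inverse kinematics. The sign of
    theta2 selects the configuration; which sign is 'elbow-up' depends
    on the axis convention, so the flag here is an assumption."""
    c2 = (dx**2 + dy**2 - l1**2 - l2**2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target outside the workspace")
    theta2 = -np.arccos(c2) if elbow_up else np.arccos(c2)
    # theta1 = gamma - beta; evaluating beta with the signed theta2
    # recovers both sign choices of Eq. (3), since beta is odd in theta2.
    theta1 = np.arctan2(dy, dx) - np.arctan2(
        l2 * np.sin(theta2), l1 + l2 * np.cos(theta2))
    return theta1, theta2

# Target (x_t, y_t) in the right-arm base frame; b separates the two bases.
x_t, y_t, b = 0.03, 0.08, 0.05     # placeholder geometry (m)
l1, l2 = 0.06, 0.05                # placeholder link lengths (m)
th_left = inverse_kinematics(x_t, y_t, l1, l2, elbow_up=True)
th_right = inverse_kinematics(x_t - b, y_t, l1, l2, elbow_up=False)
```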

The control of the dual-arm system is simplified by the use of servomotors, which have internal stable proportional integral derivative (PID) controllers; therefore, the dynamics of the cooperative system did not need to be considered in this work.

2.3. Electronics and communication

The electronic system consists of an eCapture G53 stereo camera, a NanoPI NEO with a quad-core A7 processor, an Arduino Nano (ATmega328P) microcontroller, and the dual-arm manipulator. The stereo camera was chosen for its small size $(50\times 14.9\times 20\,\textrm{mm})$ and weight (30 g). It provides a pair of $640\times 400$ images at up to 30 frames per second (fps). No scaling was performed during image processing. The camera was mounted such that the workspace of the dual-arm manipulator is within the field of view of each camera of the stereo system, so that 3D measurements of the objects and samples being manipulated can be obtained. The NanoPI NEO is a lightweight compact computer that was already integrated into the E-Flap ornithopter [7] as the main computational unit; hence, its use involves no additional weight. In the proposed sampling system, it executes the processing of the stereo images (see Section 3) and sends the 3D coordinates of the stem cutting point to the Arduino Nano through an Inter-Integrated Circuit (I2C) bus. The Arduino Nano was chosen to implement the inverse kinematics of the dual-arm manipulator and because it can generate six independent pulse width modulation (PWM) signals for direct control of the dual-arm digital servos. At a rate of 15 fps, the vision system processes the stereo images, computes the 3D coordinates of the stem cutting point, and sends them to the Arduino Nano, which computes and outputs the PWM signals for the servos.
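To make the NanoPI-to-Arduino link concrete, here is a minimal Python sketch for the NanoPI side using the smbus2 package; the slave address, register, and millimeter-scaled 16-bit encoding are illustrative assumptions, not the actual protocol of the system.

```python
import struct
from smbus2 import SMBus

ARDUINO_ADDR = 0x08    # hypothetical I2C slave address of the Arduino Nano

def send_cutting_point(bus, x_t, y_t, z_t):
    """Pack the cutting-point coordinates (m) as signed 16-bit
    millimeter values and write them to the Arduino over I2C."""
    payload = struct.pack('<hhh',
                          int(x_t * 1000), int(y_t * 1000), int(z_t * 1000))
    bus.write_i2c_block_data(ARDUINO_ADDR, 0x00, list(payload))

with SMBus(1) as bus:  # the bus index depends on the NanoPI wiring
    send_cutting_point(bus, 0.03, 0.08, 0.0)
```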

3. Onboard vision system

The onboard vision system automatically detects the plant stem, estimates its 3D position, and determines $p_{\textrm{t}}$ , the 3D target point at which the stem is to be cut for sampling. The perception system uses a lightweight stereo camera. In contrast to RGB-D cameras, stereo cameras provide significant robustness against the highly diverse and widely changing illumination conditions of the envisioned outdoor scenarios. Besides robustness, accuracy in the estimation of the 3D position of the stem and of the cutting point is a critical requirement. Efficient computation is another critical constraint: the strict ornithopter payload constraints impose limitations on the onboard vision sensor and the processing hardware and, hence, on the computational burden of the vision processing techniques. First, we used an ultra-lightweight stereo vision camera, which weighs only 29.4 g ( $\sim$ 30 g), and the main ornithopter board (a NanoPI NEO) for onboard computation, so that the onboard execution of the vision algorithms involves no additional payload. The stereo vision processing algorithms were carefully designed and implemented to enable efficient execution on the resource-constrained NanoPI NEO processing unit.

The developed vision processing scheme has the following main stages: (1) removal of the background using depth information, (2) segmentation of the plant stem using edge detection and Hough transform, (3) 3D localization of the stem, and finally (4) determination of the stem cutting 3D point $p_{\textrm{t}}$ , the output of the visual detection scheme. Figure 6 shows the developed vision scheme with its main processing stages. The results from the execution of the different stages in one experiment are shown in Fig. 7.

Figure 6. General scheme of the onboard vision processing system.

Figure 7. Execution of the stages of the vision processing scheme in one example: (a) input left and right images $\{I_{\textrm{L}}, I_{\textrm{R}}\}$ from the stereo vision system; (b) computed disparity map $D$ showing the disparity values with different colors; (c) resulting pixels $F$ after background removal; (d) vertical lines $\Lambda$ detected; and (e) reprojection on $I_{\textrm{L}}$ of the reconstructed 3D line $\lambda$ and cutting point $p_{\textrm{t}}$ (shown with a $\times$ ).

The operation of the stereo processing scheme is as follows. The input is $\{I_L, I_R\}$ , the two rectified undistorted images resulting from the calibrated stereo pair (Fig. 7(a)). First, the stereo disparity map $D$ is computed using the block matching method from ref. [34] (Fig. 7(b)). Next, a background removal stage is performed. The sampling system is designed to collect the sample closest to the dual-arm manipulator and, hence, to the stereo camera; the closer an object is to the camera, the higher its disparity. The background is removed by a simple thresholding method that selects the pixels where the disparity value is higher than a threshold $\tau ^\star$ , $F^\prime = \{(u,v) \in D \mid D(u,v) \gt \tau ^\star \}$ . We adopt an optimal thresholding method to select the value of $\tau ^\star$ . Assume that the histogram of the disparity image can be modeled as the sum of two distributions, $p_0(\tau )$ and $p_1(\tau )$ , corresponding to the classes $\omega _0$ and $\omega _1$ , that is, $h(\tau )=P_0 p_0(\tau )+ P_1 p_1(\tau )$ , where $P_0$ and $P_1$ are the a priori probabilities of $\omega _0$ and $\omega _1$ . In our case, $\omega _1$ is the class corresponding to the higher values of the disparity, originated by the stem of interest, the nearest to the camera, while $\omega _0$ corresponds to the background (see Fig. 8). This approach makes the threshold selection agnostic to the scene background. Optimal thresholding selects $\tau ^\star$ as the disparity value $\tau$ that minimizes the probability of erroneously classifying pixels:

Figure 8. Disparity histogram in one example and approximation with two Gaussian distributions. The lower distribution is background, and the higher the stem of interest.

(4) \begin{equation} E(\tau )= P_1 \int _{\tau }^{1} p_0(z) dz + P_0 \int _{0}^{\tau } p_1(z) dz. \end{equation}

Assuming that both modes of the histogram, $p_0$ and $p_1$ , can be modeled as Gaussian distributions, and using the method described in ref. [35], $\tau ^\star$ can be computed simply as follows:

(5) \begin{equation} A(\tau ^\star )^2+B\tau ^\star +C=0, \end{equation}

where

\begin{align*} A&=\sigma _0^2-\sigma _1^2,\\[5pt] B&=2\mu _0\sigma _1^2-2\mu _1\sigma _0^2,\\[5pt] C&=\sigma _0^2\mu _1^2-\sigma _1^2\mu _0^2+2\sigma _0^2\sigma _1^2\log\!(P_1\sigma _0/P_0\sigma _1), \end{align*}

where $\mu _0$ and $\sigma _0$ represent the mean and standard deviation of $p_0(\tau )$ , and $\mu _1$ and $\sigma _1$ , those of $p_1(\tau )$ . These parameters are computed by approximating the histogram of the disparity map with a sum of two Gaussians using the method described in ref. [35]. After thresholding, a binary opening operation $F = F^\prime \circ S$ is applied to reduce noise and thresholding inaccuracies. The result is $F$ , the set of pixels of $I_L$ containing the stem of interest (Fig. 7(c)).
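As a sketch of this stage, the snippet below solves Eq. (5) for $\tau ^\star$ and applies the thresholding and opening; the Gaussian parameters are assumed to come from a histogram fit as in ref. [35], and the structuring element $S$ is an illustrative choice.

```python
import cv2
import numpy as np

def optimal_threshold(mu0, sigma0, P0, mu1, sigma1, P1):
    """Solve the quadratic of Eq. (5) and return the root lying between
    the two modes, i.e., the meaningful threshold tau*."""
    A = sigma0**2 - sigma1**2
    B = 2.0 * mu0 * sigma1**2 - 2.0 * mu1 * sigma0**2
    C = (sigma0**2 * mu1**2 - sigma1**2 * mu0**2
         + 2.0 * sigma0**2 * sigma1**2 * np.log(P1 * sigma0 / (P0 * sigma1)))
    if abs(A) < 1e-12:              # equal variances: Eq. (5) becomes linear
        return -C / B
    for r in np.roots([A, B, C]):
        if np.isreal(r) and mu0 <= r.real <= mu1:
            return r.real
    raise ValueError("no threshold found between the two modes")

def foreground_mask(D, tau_star):
    """F' = {(u,v) | D(u,v) > tau*}, followed by the binary opening
    F = F' o S, here with an assumed 5x5 elliptical structuring element."""
    mask = (D > tau_star).astype(np.uint8)
    S = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, S)
```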

After background removal, the stem of interest is segmented. First, a Canny edge detector is applied on $I_{\textrm{L}}$ , restricted to the pixels within $F$ ; that is, the pixels assigned to the background are not processed, saving computational cost. The lines $L$ in the resulting edge image are detected using the Hough transform, a widely used algorithm for the detection of parametric curves. This method was selected due to its robustness against gaps in curves and noise. The stem of interest is assumed to be represented by lines with a strong vertical component, so the vertical lines detected by the Hough transform are selected. This operation is very efficient since each line $l_i$ in the Hough space is represented in polar coordinates as $(\rho _i, \theta _i)$ , and $l_i$ is considered vertical if it satisfies $|\theta _i|\lt \theta _v$ , where $\theta _v$ is the angular threshold. Hence, the line $\lambda$ that defines the stem of interest is computed by averaging the selected vertical lines:

(6) \begin{equation} \frac{1}{N}\sum _{(\rho, \theta ) \in \Lambda }{\rho } = u \cos{\!\left(\frac{1}{N}\sum _{(\rho, \theta ) \in \Lambda }{\theta }\right)} + v \sin{\!\left(\frac{1}{N}\sum _{(\rho, \theta ) \in \Lambda }{\theta }\right)}, \end{equation}

where $\Lambda$ is the set of vertical lines defined as $\Lambda = \{(\rho, \theta ) \in L \mid \theta _v \gt |\theta | \}$ and $N$ is the number of selected vertical lines, $N = \textrm{card}(\Lambda )$ . The result of this processing stage is illustrated in Fig. 7(d). In the experiments, $\theta _v$ was set to $\theta _v=\frac{1}{3} \textrm{(rad)}$ , meaning that non-vertical stems were ignored; however, $\theta _v$ allows tuning the method to perceive stems of different inclinations. The method is robust against partial occlusions since lines can be detected from different unconnected segments of the stem. Additionally, it is also robust in cases with multiple stems. First, the background removal filters out those stems that are outside the manipulator workspace. In addition, if several modes are detected in the Hough space after computing the Hough transform, several stems are present in the manipulator workspace; in that case, the method selects the strongest mode in the Hough transform, that is, the stem with the best visibility in $I_L$ . Next, the line $\lambda$ that defines the stem of interest in image $I_{\textrm{L}}$ is reprojected into 3D world coordinates. Let $\{(c_{u,\textrm{L}},c_{v,\textrm{L}}), (c_{u,\textrm{R}},c_{v,\textrm{R}})\}$ be the principal points of the left and right cameras of the stereo pair, respectively, $b$ the stereo baseline length, and $f$ the cameras' focal length. Since $D(u,v)$ is known $\forall (u,v) \in \lambda$ , $\lambda$ can be reprojected as follows:

(7) \begin{equation} \left \{ \begin{array}{l@{\quad}c@{\quad}c} x = \dfrac{\frac{1}{N}\sum _{(\rho, \theta ) \in \Lambda }{\rho } - \alpha \sin{\!\big(\frac{1}{N}\sum _{(\rho, \theta ) \in \Lambda }{\theta }\big)}}{d(\alpha ) \cos{\!\big(\frac{1}{N}\sum _{(\rho, \theta ) \in \Lambda }{\theta }\big)}} - \dfrac{c_{u,\textrm{L}}}{d(\alpha )}\\[10pt] y = \dfrac{\alpha - c_{v,\textrm{L}}}{d(\alpha )}\\[10pt] z = \dfrac{f}{d(\alpha )} \end{array} \right ., \alpha \in \mathbb{R}, \end{equation}

where

(8) \begin{equation} d(\alpha ) = \frac{c_{u,\textrm{L}} - c_{u,\textrm{R}}}{b} - \frac{1}{b} D\left (\frac{\frac{1}{N}\sum _{(\rho, \theta ) \in \Lambda }{\rho } - \alpha \sin{\!\big(\frac{1}{N}\sum _{(\rho, \theta ) \in \Lambda }{\theta }\big)}}{\cos{\!(\frac{1}{N}\sum _{(\rho, \theta ) \in \Lambda }{\theta })}} - c_{u,\textrm{L}}, \alpha \right ). \end{equation}

In Eq. (7), $(x,y,z)$ represents the 3D coordinates of the points of the line originated by the stem of interest. Finally, the method determines the 3D coordinates of the point for cutting the sample. Assuming that the transformation matrix between the reference frames of the camera {C} and the manipulator {M} is known (see Fig. 3), the stem cutting point $p_{\textrm{t}} = [x_{\textrm{t}}, y_{\textrm{t}}, 0]^\top$ is determined as the intersection between $\lambda$ , expressed in the manipulator frame {M}, and the plane $z = 0$ ; see Fig. 7(e), which shows $p_{\textrm{t}}$ with a $\times$ -sign for visualization.
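The following sketch strings these stages together: near-vertical Hough line selection and averaging (Eq. (6)), reprojection of a line pixel into camera-frame 3D coordinates in the spirit of Eqs. (7)–(8), and the intersection with the manipulator plane $z=0$ . The Canny/Hough parameters and the disparity sign convention are assumptions of this sketch, not the tuned values of the system.

```python
import cv2
import numpy as np

THETA_V = 1.0 / 3.0                   # angular threshold from the text (rad)

def stem_line(I_left, F):
    """Canny restricted to the foreground mask F, Hough line detection,
    and averaging of the near-vertical lines as in Eq. (6)."""
    edges = cv2.Canny(I_left, 50, 150)
    edges[F == 0] = 0                 # skip pixels assigned to the background
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=40)
    if lines is None:
        return None
    sel = []
    for rho, theta in lines[:, 0]:
        # OpenCV returns theta in [0, pi); map angles near pi to negative
        # ones so that vertical lines cluster around theta = 0.
        if theta > np.pi / 2:
            rho, theta = -rho, theta - np.pi
        if abs(theta) < THETA_V:
            sel.append((rho, theta))
    if not sel:
        return None
    return (float(np.mean([r for r, _ in sel])),
            float(np.mean([t for _, t in sel])))

def reproject(rho, theta, v, D, f, b, c_uL, c_vL, c_uR):
    """Reproject the pixel of the averaged line at image row v into
    camera-frame 3D coordinates (standard rectified-stereo relations)."""
    u = (rho - v * np.sin(theta)) / np.cos(theta)
    disparity = D[int(round(v)), int(round(u))] - (c_uL - c_uR)
    z = f * b / disparity
    return np.array([(u - c_uL) * z / f, (v - c_vL) * z / f, z])

def cutting_point(p1, p2, T_cm):
    """Intersect the 3D stem line through camera-frame points p1 and p2
    with the manipulator plane z = 0, given the calibrated 4x4
    camera-to-manipulator homogeneous transform T_cm."""
    q1 = (T_cm @ np.append(p1, 1.0))[:3]
    q2 = (T_cm @ np.append(p2, 1.0))[:3]
    s = -q1[2] / (q2[2] - q1[2])      # line parameter where z crosses 0
    return q1 + s * (q2 - q1)         # p_t = [x_t, y_t, 0]
```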

The accuracy of the adopted vision method was evaluated in a preliminary experiment, presented in Fig. 9. In the experiment, the stem position was estimated (blue line in Fig. 9, top) while it described a square of 5 cm side in the $XZ$ plane w.r.t. $\{\textrm{C}\}$ (red line in Fig. 9, top). The distribution of the error (see Fig. 9, bottom) evidences the localization accuracy of the vision system, with a mean error of $\mu =-0.025$ mm and a standard deviation of $\sigma =0.615$ mm. The method computes the 3D coordinates of the cutting point in real time on the adopted resource-constrained onboard hardware, updating them online for every input pair of images $\{I_{\textrm{L}}, I_{\textrm{R}}\}$ gathered by the stereo system. Finally, $p_{\textrm{t}}$ , the 3D coordinates of the cutting point, are sent to the Arduino Nano to perform the sampling mission by commanding the dual-arm motion as presented in Section 2.2.

Figure 9. Accuracy evaluation of the vision-based detection method: top) estimated position (blue) when the stem described a 5 cm side squared trajectory (red); and bottom) error histogram (blue) and its corresponding probability distribution fitting (red). The position is defined w.r.t. the camera frame {C}.

4. Integrated system

The main difficulty in the design of flapping-wing robots is adding equipment to the flying platform due to the very limited payload capacity. While multirotor UAVs are stable flying systems capable of stationary flight that can carry add-ons and fulfill complex tasks, flapping-wing systems always operate in continuous forward flight. Perching is mandatory for conducting contact inspection or sampling. Therefore, a leg, a perching mechanism, a manipulator, and a camera are needed to form an integrated system. The processing for the different parts and add-ons should be done in one program and processor to avoid having multiple computers onboard. In this work, the main processor of the robot, a NanoPI board, was used for flight and vision-based processing. To drive the dual-arm servos with PWM signals, we used an Arduino Nano board (equipped with six PWM outputs). Using the Arduino Nano involves no additional weight penalty, since it weighs 7 g, which is equal to or even lower than the weight of common PWM modules. Additionally, I2C is used for the communication between the NanoPI and the Arduino Nano. The whole integrated system on the flapping-wing robot is presented in Fig. 1.

5. Experimental results

The experiments were performed with the E-Flap flapping-wing robot [7]. Two types of experiments are presented in the next sections: flight experiments and indoor/outdoor manipulation experiments.

5.1. Flight experiment

The 100 g weight limit of the proposed sampling system, imposed on the design and manufacturing, comes from the ornithopter flight capability. The center of mass of the flapping-wing robot is placed under the wing to increase maneuverability and flight stability; a lumped mass at the tip of the bird acts against flight stability. The design of the manipulator and the position of the camera were chosen to minimize this effect. The camera was placed 125 mm behind the base of the manipulator. The configuration of the manipulator is also important during the flight phase; Fig. 10 shows the left and right arms spread backward toward the center of mass of the robot. The flight was performed in a 20 m $\times$ 15 m $\times$ 7 m testbed that provides precise position feedback through an OptiTrack motion capture system with 28 cameras. The robot is launched by a launcher system that gives it a 4 m/s initial speed. Immediately after release from the launcher, the robot starts flapping and regulates itself to the height set-point, 1.75 m for this experiment. The flapping frequency is one of the parameters for height regulation and was limited to 85% of the full flapping power. The robot flies almost 12 m until it reaches the end of the diagonal of the OptiTrack testbed and lands on the safety net. The regulation results in the $Z$ axis are reported in Fig. 11(a), and the velocity in the $Z$ axis is presented in Fig. 11(b). The 3D trajectory of the flapping-wing flying robot is plotted in Fig. 12, showing an error of 29 cm. The purpose of this experiment was to demonstrate that the robot is capable of flying with the designed dual-arm cooperative manipulator. Snapshots of the flight in the indoor testbed are shown in Fig. 13. The robot needs to perch before the manipulation task, which was studied in previous work (see ref. [8]) and is not in the scope of this paper. The video of the flight is provided as supplementary material on the journal website.

Figure 10. The folded configuration of the dual-arm for shifting backward the center of mass of the robot.

Figure 11. The $Z$ axis (a) position of the robot in forward flight, reference 1.75 m, error 29 cm, and (b) velocity of the robot.

Figure 12. The 3D trajectory of the flight of the robot, carrying the dual-arm manipulator on top.

Figure 13. The snapshots of flight with the manipulator and camera in the indoor testbed. The white-wing E-Flap prototype was used for the flight since the wing camber of this robotic bird provides more lift force for load-carrying capacity.

5.2. Indoor and outdoor tests

Experiments assuming that the ornithopter has perched were performed with the proposed dual-arm manipulator. These experiments demonstrate the possibility of performing manipulation tasks using an onboard vision system and show that all the parts of the integrated system work satisfactorily as specified. The temporal cost of processing the images from the stereo pair (implemented on the low-resource NanoPI board) was measured during the experiments, obtaining $(23.31 \pm 4.24)$ ms per pair of images. This corresponds to a processing rate of $\sim 40$ Hz, which is $\gt 1.33$ times the camera frame rate, set to 30 fps, ensuring real-time 3D stem estimation; moreover, the measured temporal cost includes the time devoted to data logging. Indoor and outdoor experiments were performed to demonstrate the reliability of the system under different conditions of background, light, and environment (see Fig. 14 for snapshots of the outdoor experiment). The indoor experiments were done as a first step toward completing the task. During the experiments, an external computer was remotely connected to the NanoPI to ensure access to all the data and to verify that every part works. Finally, the proposed sampling system was validated in outdoor experiments that reproduce the final application of our scheme.

The reference trajectories for the dual-arm manipulator were designed using second-order polynomial curves. These smooth references avoid moving the servomotors abruptly and allow us not only to reach the desired point ( $x_{\textrm{t}}, y_{\textrm{t}}$ ) but also to control the speed and the maximum acceleration of the movement. Such smooth movements prevent the gears from being damaged and allow the robot better interaction with the environment. The references are designed to move from the initial position to the final position in 1 s. Figure 15 shows the commanded angular positions of all joints of the dual-arm manipulator for three different experiments, two performed indoors (blue and orange lines) and one outdoors (purple line). In the first experiment, the position of the stem was $x_{\textrm{t}}=3\,\textrm{cm}$ and $y_{\textrm{t}}=8\,\textrm{cm}$ , and in the second, $x_{\textrm{t}}=-1\,\textrm{cm}$ and $y_{\textrm{t}}=6\,\textrm{cm}$ . The indoor experiments used a white background and artificial light to verify that the integrated system works well under ideal conditions. Sets of outdoor experiments were also performed to verify that the integrated system is robust to lighting, background, and environment, among others. In the outdoor experiment shown in Fig. 14, the stem is detected at position $x_{\textrm{t}}=7\,\textrm{cm}$ and $y_{\textrm{t}}=6\,\textrm{cm}$ . The sequence of the complete task is as follows: (1) at $t=0$ (s), the vision system starts detecting the stem; (2) at $t=1.5$ (s), the stem is detected and the right arm (gripper) moves to $p_{\textrm{t}}$ , the position of the stem; (3) at $t=3.5$ (s), the gripper closes and grasps the stem; (4) at $t=5.5$ (s), the left arm (scissors) moves to $p_{\textrm{t}}$ ; (5) at $t=7.5$ (s), the scissors cut the stem; (6) at $t=8.5$ (s), the left arm (scissors) moves back to the initial position; and (7) at $t=9.5$ (s), the right arm (gripper) moves back to the initial position, taking the stem with it.
Notice that the angular coordinates $\theta _{3}$ of both arms are equal in the three experiments because they represent the actuation of the gripper and the scissors, respectively.
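The exact form of the second-order polynomial references is not detailed above; one plausible realization, assumed here for illustration, is a piecewise-parabolic profile with a triangular velocity curve, which reaches the goal in the prescribed time with zero velocity at both ends and bounded acceleration.

```python
import numpy as np

def parabolic_reference(q0, qf, T=1.0, n=50):
    """Two parabolic segments (second-order polynomials) from q0 to qf in
    T seconds: zero velocity at both ends, triangular velocity profile,
    and peak acceleration a = 4*(qf - q0)/T**2."""
    t = np.linspace(0.0, T, n)
    a = 4.0 * (qf - q0) / T**2
    q = np.where(t < T / 2,
                 q0 + 0.5 * a * t**2,
                 qf - 0.5 * a * (T - t)**2)
    return t, q

# Example: move a joint from 0 rad to 1.2 rad in 1 s (placeholder values).
t, q_ref = parabolic_reference(0.0, 1.2, T=1.0)
```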

Figure 14. The snapshots of the results of the experiments with timestamps: (a) shows the detection of the stem, (b) shows the motion of the right arm towards the stem, (c) depicts the gripper action, (d) shows how the scissors move toward the stem, (e) and (f) demonstrate the cutting and opening of the scissors, (g) shows that scissors move to the left, and (h) shows that the gripper moves the sample to the right.

Figure 15. Indoor and outdoor manipulation tests.

The diameters of the cut stems were approximately below $1\,\textrm{mm}$ ; the specific stem in Fig. 14 was $\approx 0.85\,\textrm{mm}$ . It should be noted that the stems were not dry; selecting dry samples would turn the cutting stage into a breaking one. For samples with a stem diameter of more than $1\,\textrm{mm}$ , the process could not be performed repeatably, which indicates the limit of the current design; therefore, the stem of the sample presented in Fig. 14 looks thin and soft. To increase the power of the scissors and the diameter of the stems that can be cut, a stronger servomotor is needed. The current servomotor of the scissors is the 7.8 g MKS HV75K, with 1.9 kg·cm of torque at 5 V. This torque provides an applied cutting force between $F_{\textrm{min}}=0.186/d_{\textrm{max}}\,(\textrm{N})$ and $F_{\textrm{max}}=0.186/d_{\textrm{min}}\,(\textrm{N})$ , in which $d_{\textrm{max}}=0.052\,\textrm{m}$ and $d_{\textrm{min}}=0.028\,\textrm{m}$ are the maximum and minimum distances of the edge of the blade with respect to the rotation axis of the servomotor, giving $F\in [3.57,6.64]\,(\textrm{N})$ . Strictly, these distances should be the perpendicular distances, which, due to the design of the scissors, are assumed perpendicular in this work; the computation of the exact cutting force of scissors requires a more extensive study [36, 37]. In conclusion, to increase the cutting force and, consequently, the diameter of the samples, the servomotor of the scissors should be replaced, while staying within the load-carrying capacity of the flapping-wing robot. The reaction force after the separation of the stem might push the robot on the branch away from its stable position; with the current servomotor and samples $\lt 1\,\textrm{mm}$ , no significant reaction force was observed.
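The force range above follows directly from dividing the servo torque by the lever arm of the blade edge; a short script reproducing the numbers:

```python
# Cutting-force range of the scissors: servo torque divided by the
# distance of the blade edge from the servo axis (assumed perpendicular,
# as stated in the text).
TORQUE = 0.186                 # 1.9 kg*cm expressed in N*m
d_min, d_max = 0.028, 0.052    # blade-edge distances from the servo axis (m)

F_max = TORQUE / d_min         # ~6.64 N at the closest point of the blade
F_min = TORQUE / d_max         # ~3.57 N at the farthest point
print(f"cutting force range: [{F_min:.2f}, {F_max:.2f}] N")
```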

These results show the efficacy of the proposed system for indoor and outdoor experiments. The video file of the experiment is available as supplementary material for this work on the journal website.

6. Conclusion

This work presented a lightweight dual-arm cooperative manipulator for sampling leaves or parts of plants using an ornithopter and an onboard vision system. The weight limit of the flapping-wing system imposed a lightweight design approach; hence, the use of carbon fiber plates was necessary to reduce the total mass. The resulting system has a mass of 94.1 g. The previous version of the manipulator for the flapping-wing robot was a single two-DoF arm for a leaf-plucking application; this work improves on it by using a dual-arm cooperative system that cuts the stem instead of applying a pulling force. The camera and vision system report the position of the stem to the processor of the manipulator through I2C communication. The system includes a lightweight stereo camera (eCapture G53) installed behind the manipulator on top of the flapping-wing robot, and an efficient onboard stereo vision processing pipeline was developed to provide the real-world coordinates of the stem of interest and autonomously complete the sampling mission. The successful implementation and operation of the proposed plant sampling system were validated in sets of indoor and outdoor experiments. A flight experiment was performed to show the capability of the robotic bird to carry the additional payload of the manipulator and the camera. The main contributions of the work are extending the manipulator from a single arm to a dual arm in comparison with ref. [6] and adding a vision system for the detection of the target. The previous sampling mechanism worked with a plucking method that could damage part of the sample; the present mechanism holds the sample with one arm and cuts the stem with the other, without exerting unnecessary force on the sample.

7. Future study

This paper presented a focused study on dual-arm manipulation in a leaf-sampling case study. Some research lines were not covered in this work, such as stabilization after perching and take-off after sampling; these topics require extensive study and research, which is ongoing within the framework of the GRIFFIN project. So far, the samples and stems have been lightweight due to the limited power of the scissors; a more extensive analysis of the feasible range of cut samples and of the power of the scissors is suggested for future work. The additional weight of the samples should also be considered in the load-carrying capacity computation; the current work did not consider the take-off phase, so this topic will be researched in future works.

Author contribution

SRN and DFT designed, manufactured, and programmed the manipulator. ASC designed and assembled the electronics. RT designed, programmed, and coded the vision system. SRN, DFT, RT, and ASC performed the flight, manipulation, sampling, and vision tests and wrote the initial draft. SRN, DFT, RT, ASC, JRMD, and AO performed statistical analyses, review, editing, and conceptualization. AO: funding acquisition of GRIFFIN, resources, supervision. AO and JRMD: funding acquisition of ROBMIND, resources, supervision.

Financial support

This work was supported by the European Project GRIFFIN ERC Advanced Grant 2017, Action 788247. Partial funding was obtained from the Spanish Projects ROBMIND (Ref. PDC2021-121524-I00) and the Plan Estatal de Investigación Científica y Técnica y de Innovación of the Ministerio de Universidades del Gobierno de España (FPU19/04692).

Competing interests

The authors declare that no conflicts of interest exist.

Ethical approval

Not applicable.

Supplementary material

The supplementary material for this article can be found at http://doi.org/10.1017/S0263574723000851.

Appendix

(A) The system, including the manipulator with processor and wiring, weighs 94.1 g; see Fig. A.1.

Figure A.1. The dual-arm cooperative manipulator weight measurement, 94.1 g, without a camera.

(B) The exploded view of the designed dual-arm scissors manipulator of Fig. 4 is presented in Fig. A.2.

Figure A.2. The exploded view of the designed dual-arm scissors manipulator.

References

Atefi, A., Ge, Y., Pitla, S. and Schnable, J., "Robotic technologies for high-throughput plant phenotyping: Contemporary reviews and future perspectives," Front. Plant Sci. 12, 611940 (2021).
Weyler, J., Quakernack, J., Lottes, P., Behley, J. and Stachniss, C., "Joint plant and leaf instance segmentation on field-scale UAV imagery," IEEE Rob. Autom. Lett. 7(2), 3787–3794 (2022).
Bellocchio, E., Crocetti, F., Costante, G., Fravolini, M. L. and Valigi, P., "A novel vision-based weakly supervised framework for autonomous yield estimation in agricultural applications," Eng. Appl. Artif. Intell. 109, 104615 (2022).
Moraitis, M., Vaiopoulos, K. and Balafoutis, A. T., "Design and implementation of an urban farming robot," Micromachines 13(2), 250 (2022).
Pretto, A., Aravecchia, S., Burgard, W., Chebrolu, N., Dornhege, C., Falck, T., Fleckenstein, F. V., Fontenla, A., Imperoli, M., Khanna, R., Liebisch, F., Lottes, P., Milioto, A., Nardi, D., Nardi, S., Pfeifer, J., Popović, M., Potena, C., Pradalier, C., Rothacker-Feder, E., Sa, I., Schaefer, A., Siegwart, R., Stachniss, C., Walter, A., Winterhalter, W., Wu, X. and Nieto, J., "Building an aerial–ground robotics system for precision farming: An adaptable solution," IEEE Rob. Autom. Mag. 28(3), 29–49 (2020).
Nekoo, S. R., Feliu-Talegon, D., Acosta, J. A. and Ollero, A., "A 79.7 g manipulator prototype for e-flap robot: A plucking-leaf application," IEEE Access 10, 65300–65308 (2022).
Zufferey, R., Tormo-Barbero, J., Guzmán, M. M., Maldonado, F. J., Sanchez-Laulhe, E., Grau, P., Pérez, M., Acosta, J. Á. and Ollero, A., "Design of the high-payload flapping wing robot e-flap," IEEE Rob. Autom. Lett. 6(2), 3097–3104 (2021).
Zufferey, R., Tormo-Barbero, J., Feliu-Talegón, D., Nekoo, S. R., Acosta, J. Á. and Ollero, A., "How ornithopters can perch autonomously on a branch," Nat. Commun. 13(1), 7713 (2022).
Feliu-Talegon, D., Acosta, J. Á., Suarez, A. and Ollero, A., "A bio-inspired manipulator with claw prototype for winged aerial robots: Benchmark for design and control," Appl. Sci. 10(18), 6516 (2020).
Feliu-Talegon, D., Acosta, J. Á. and Ollero, A., "Control aware of limitations of manipulators with claw for aerial robots imitating bird's skeleton," IEEE Rob. Autom. Lett. 6(4), 6426–6433 (2021).
Luque, P. S., Satue, A. C., Nekoo, S. R., Acosta, J. A. and Ollero, A., "Theoretical and Experimental Investigation on Body Control After Perching for Flapping-Wing Robots: Extending the Workspace for Manipulation," In: International Conference on Unmanned Aircraft Systems (ICUAS), Warsaw, Poland (IEEE, 2023) pp. 948–955.
Feliu-Talegon, D., Nekoo, S. R., Suarez, A., Acosta, J. A. and Ollero, A., "Modeling and Under-Actuated Control of Stabilization Before Take-off Phase for Flapping-Wing Robots," In: ROBOT2022: Fifth Iberian Robotics Conference: Advances in Robotics, Volume 2, Zaragoza, Spain (Springer, 2022) pp. 376–388.
Zhu, H., Li, H., Adam, A. P., Li, L. and Tian, L., "Performance evaluation of a multi-rotor unmanned agricultural aircraft system for chemical application," Int. J. Agric. Biol. Eng. 14(4), 43–52 (2021).
Koparan, C. and Koc, A. B., "Unmanned Aerial Vehicle (UAV) Assisted Water Sampling," In: 2016 ASABE Annual International Meeting, Orlando, FL, USA (American Society of Agricultural and Biological Engineers, 2016) p. 1.
Jun, J., Seol, J. and Son, H. I., "A Novel End-Effector for Tomato Harvesting Robot: Mechanism and Evaluation," In: 2020 20th International Conference on Control, Automation and Systems (ICCAS), Busan, Korea (South) (IEEE, 2020) pp. 118–121.
Park, Y., Seol, J., Pak, J., Jo, Y., Jun, J. and Son, H. I., "A novel end-effector for a fruit and vegetable harvesting robot: Mechanism and field experiment," Precis. Agric. 24, 948–970 (2023).
Jun, J., Kim, J., Seol, J., Kim, J. and Son, H. I., "Towards an efficient tomato harvesting robot: 3D perception, manipulation, and end-effector," IEEE Access 9, 17631–17640 (2021).
Thielmann, S., Seibold, U., Haslinger, R., Passig, G., Bahls, T., Jörg, S., Nickl, M., Nothhelfer, A., Hagn, U. and Hirzinger, G., "Mica – A New Generation of Versatile Instruments in Robotic Surgery," In: 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan (IEEE, 2010) pp. 871–878.
Callaghan, D., McGrath, M. M. and Coyle, E., "Force Measurement Methods in Telerobotic Surgery: Implications for End-Effector Manufacture," In: Proceedings of the 25th International Manufacturing Conference (IMC25), Dublin, Ireland (2008) pp. 3–5.
Jin, X., Feng, M., Han, Z., Zhao, J., Cao, H. and Zhang, Y., "Development of a mechanical decoupling surgical scissors for robot-assisted minimally invasive surgery," Robotica 40(2), 316–328 (2022).
Tavakoli, M., Patel, R. and Moallem, M., "Haptic interaction in robot-assisted endoscopic surgery: A sensorized end-effector," Int. J. Med. Rob. Comput. Assisted Surg. 1(2), 53–63 (2005).
Lee, S. C., Moon, H. G., Hwang, S. H., Shin, D. B., Baek, I. H., Sun, D. I., Ryu, J.-K. and Han, C. S., "Development of an assembled gripper for a hydraulic cutting machine with a novel design for the stable holding of various shaped objects," Int. J. Precis. Eng. Manuf. 22(8), 1413–1424 (2021).
Morikawa, S., Senoo, T., Namiki, A. and Ishikawa, M., "Realtime Collision Avoidance Using a Robot Manipulator with Light-Weight Small High-speed Vision Systems," In: Proceedings 2007 IEEE International Conference on Robotics and Automation, Rome, Italy (IEEE, 2007) pp. 794–799.
AlAkhras, A., Sattar, I. H., Alvi, M., Qanbar, M. W., Jaradat, M. A. and Alkaddour, M., "The design of a lightweight cable aerial manipulator with a CoG compensation mechanism for construction inspection purposes," Appl. Sci. 12(3), 1173 (2022).
Armengol, I., Suarez, A., Heredia, G. and Ollero, A., "Design, Integration and Testing of Compliant Gripper for the Installation of Helical Bird Diverters on Power Lines," In: 2021 Aerial Robotic Systems Physically Interacting with the Environment (AIRPHARO), Biograd na Moru, Croatia (IEEE, 2021) pp. 1–8.
Bellicoso, C. D., Buonocore, L. R., Lippiello, V. and Siciliano, B., "Design, Modeling and Control of a 5-DoF Light-Weight Robot Arm for Aerial Manipulation," In: 2015 23rd Mediterranean Conference on Control and Automation (MED), Torremolinos, Spain (IEEE, 2015) pp. 853–858.
Barrett, E., Hoffman, E. M., Baccelliere, L. and Tsagarakis, N. G., "Mechatronic Design and Control of a Light Weight Manipulator Arm for Mobile Platforms," In: 2021 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM) (IEEE, 2021) pp. 1255–1261.
Imanberdiyev, N., Sood, S., Kircali, D. and Kayacan, E., "Design, development and experimental validation of a lightweight dual-arm aerial manipulator with a CoG balancing mechanism," Mechatronics 82, 102719 (2022).
Suárez, A., Sanchez-Cuevas, P., Fernandez, M., Perez, M., Heredia, G. and Ollero, A., "Lightweight and Compliant Long Reach Aerial Manipulator for Inspection Operations," In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain (IEEE, 2018) pp. 6746–6752.
Low, J. H., Lee, W. W., Khin, P. M., Thakor, N. V., Kukreja, S. L., Ren, H. L. and Yeow, C. H., "Hybrid tele-manipulation system using a sensorized 3-D-printed soft robotic gripper and a soft fabric-based haptic glove," IEEE Rob. Autom. Lett. 2(2), 880–887 (2017).
Blösch, M., Weiss, S., Scaramuzza, D. and Siegwart, R., "Vision Based MAV Navigation in Unknown and Unstructured Environments," In: 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA (2010) pp. 21–28.
Thomas, J., Loianno, G., Sreenath, K. and Kumar, V., "Toward Image Based Visual Servoing for Aerial Grasping and Perching," In: 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China (2014) pp. 2113–2118.
Tijmons, S., De Croon, G. C., Remes, B. D., De Wagter, C. and Mulder, M., "Obstacle avoidance strategy using onboard stereo vision on a flapping wing MAV," IEEE Trans. Rob. 33(4), 858–874 (2017).
Konolige, K., "Small Vision Systems: Hardware and Implementation," In: Robotics Research, Japan (Springer, 1998) pp. 203–212.
Chang, J.-H., Fan, K.-C. and Chang, Y.-L., "Multi-modal gray-level histogram modeling and decomposition," Image Vision Comput. 20(3), 203–216 (2002).
Mahvash, M., Voo, L. M., Kim, D., Jeung, K., Wainer, J. and Okamura, A. M., "Modeling the forces of cutting with scissors," IEEE Trans. Biomed. Eng. 55(3), 848–856 (2008).
Yang, T., Xiong, L., Zhang, J., Yang, L., Huang, W., Zhou, J., Liu, J., Su, Y., Chui, C., Teo, C. and Chang, S., "Modeling Cutting Force of Laparoscopic Scissors," In: 2010 3rd International Conference on Biomedical Engineering and Informatics, vol. 4, Yantai, China (IEEE, 2010) pp. 1764–1768.