Precise motion control is a challenging and important goal in the application of mobile robots. The mechanical structure of a novel mobile robot is presented. Using the support vector machine learning control method from statistical learning theory, the human control strategy is represented by a parametric model without knowledge of the actual mathematical model of the robot. Position control experiments are then carried out on the robot using the learning controller. The experimental results show that this learning control method is feasible and valid for the precise position control of the mobile robot, and the maximum error is less than 32 cm in a 10 m linear movement.
- learning control
- support vector machine
- mobile robot
The control system and mechanical parts of a spherical mobile robot are encapsulated in a round shell, and the robot moves by rolling on this outer shell. The hard shell provides mechanical protection for the electrical equipment and actuation parts. Compared with wheeled and caterpillar-track robots, spherical mobile robots have significant advantages such as low friction resistance, high motion flexibility and the capability for omnidirectional movement. It is very important that the spherical robot can maintain or resume stability when a collision occurs. Spherical robots are therefore well suited to rugged terrain and harsh environments.
Recently, many kinds of spherical mobile robots have been developed, and several practical applications are being studied by research institutes and universities [2,3,4,5,6,7,8]. The robot contacts the ground at a single point through the round shell and is driven by the inner actuation mechanism. The precise position control of this kind of robot is a challenging problem for practical applications.
Based on learning the control process of a human operator, an approach to position control for the spherical mobile robot is presented. The paper is organized as follows: Section 2 introduces the mechanical structure of the robot. Section 3 introduces the theory and use of the support vector machine (SVM) for robot control. Section 4 parameterizes the human control strategy for position control of the robot using the SVM learning control method. Section 5 presents experiments that validate the learning controller. The last section draws the conclusions.
The mobile robot mainly consists of a round shell, the actuation mechanism, an inner case, a flywheel and a telescopic camera boom. The shell is made of 10 mm thick organic glass and its diameter is about 600 mm. The actuation mechanism, motors, battery, controller and sensors are all encapsulated in the shell. The motion and standing modes of the robot are shown in Figures 1 and 2.
The inner actuation mechanism includes three separate motors: a flywheel motor, a long-axis servo motor and a short-axis servo motor. The main axes of the short-axis and long-axis motors are perpendicular. During motion, the forward and backward driving forces of the robot are generated by the long-axis motor, which swings the counterweight directly through the inner case. The angle of the counterweight relative to the axis of the shell can be adjusted by the short-axis motor to control the yaw angle of the robot. The flywheel motor spins the internal flywheel at a suitably high speed to increase the angular momentum of the robot, based on the gyroscopic precession principle.
The mobile robot is powered by a +48 V lithium battery and is self-contained. The control algorithm is programmed and executed on a control board built around an ARM9 processor (S3C2410). The interface and data communication between the ARM9 control board and the motor controllers follow the CANopen device and communication profiles. An inertial measurement unit (IMU) is installed on the inner case; it provides the roll, yaw and pitch angles and angular velocities of the inner case with respect to the ground. The motion states of the robot, such as rolling velocity, yaw and pose angles, together with the data required for learning control, are provided by the IMU and the motor controllers. Operator commands are transmitted over a wireless data radio.
The basic concepts of the SVM used in this paper follow prior research. Using SVM, the regression function between an input vector $x$ and the output can be estimated in the standard support vector regression form

$$f(x) = \sum_{i=1}^{l} (\alpha_i - \alpha_i^*) K(x_i, x) + b \qquad (2)$$

In Eq. 2, $\alpha_i$ and $\alpha_i^*$ are the Lagrange multipliers of the dual optimization problem, $x_i$ ($i = 1, \ldots, l$) are the training samples, $b$ is the bias term and $K(\cdot, \cdot)$ is the kernel function. The polynomial basis function is used as the kernel function:

$$K(x_i, x) = (x_i \cdot x + 1)^d \qquad (3)$$

where $d$ is the degree of the polynomial.
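As a quick sanity check of the polynomial kernel, the closed form of Eq. 3 can be compared against a library implementation. This is a minimal sketch using scikit-learn, which is an assumption of this illustration; the paper does not name a software package:

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel

# Two illustrative feature vectors (not real robot states).
xi = np.array([[1.0, 2.0]])
x = np.array([[3.0, 0.5]])

d = 3  # polynomial degree
# Closed form of Eq. 3: K(xi, x) = (xi . x + 1)^d
manual = (xi @ x.T + 1.0) ** d

# scikit-learn's polynomial kernel (gamma * <xi, x> + coef0)^degree
# reduces to Eq. 3 with gamma=1 and coef0=1.
lib = polynomial_kernel(xi, x, degree=d, gamma=1.0, coef0=1.0)

print(manual[0, 0], lib[0, 0])  # both 125.0
```

Here $x_i \cdot x = 4$, so $K = (4 + 1)^3 = 125$; the two evaluations agree.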
During continuous human operation, the control process is viewed as a discrete-time sampled process and can be depicted by the following equation:

$$u(k) = g\big(x(k), u(k-1)\big) \qquad (4)$$

where $u(k)$ is the human control input at sampling instant $k$ and $x(k)$ is the state of the robot. Using sampled data from the human control process, the learning controller can be constructed by the support vector regression (SVR) method. Taking the current state of the robot and the previous control input as the regression input $z(k) = [x(k), u(k-1)]$, the difference equation

$$u(k) = \sum_{i=1}^{l} (\alpha_i - \alpha_i^*) K\big(z_i, z(k)\big) + b \qquad (5)$$

can approximately describe the control process of Figure 3.
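Eq. 5 can be fitted from logged (state, command) pairs with off-the-shelf support vector regression. The sketch below uses scikit-learn with synthetic data standing in for the human control logs; the state layout and the target mapping are illustrative assumptions, not the robot's actual signals:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Regression input z(k) = [x(k), u(k-1)]: here x(k) is an assumed
# two-component state (shell velocity, lean angle) plus the previous command.
Z = rng.uniform(-1.0, 1.0, size=(300, 3))

# Synthetic stand-in for the logged human command u(k) (a smooth mapping).
u = 0.8 * Z[:, 0] - 0.3 * Z[:, 1] + 0.1 * Z[:, 2]

# Polynomial kernel as in Eq. 3; C and epsilon are tuning assumptions.
controller = SVR(kernel="poly", degree=3, coef0=1.0, C=10.0, epsilon=0.01)
controller.fit(Z, u)

# One control step: predict u(k) from the current z(k).
z_k = np.array([[0.2, -0.1, 0.05]])
u_k = controller.predict(z_k)
```

Fitting returns the support vectors and dual coefficients of Eq. 5, so a single `predict` call evaluates the learned control law.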
Human operators can be skilled at steering the movement of mobile robots. The learning control method in this paper models the human operator's control process: based on the synchronously recorded states of the robot, the most likely command is selected to represent the human operator's control behaviour [9, 10].
The human control strategy can be treated as a stochastic process and modelled as a mapping between the states of the robot and the operator's input commands. First, the human control outputs and the current states of the robot are gathered; second, the SVM learning method is used to model the human control operation, and the learned parameters are stored for the task; third, the learning controller is set up by offline learning computation; finally, the learning controller is implemented on the central controller of the robot. This method can reproduce the control strategy of the human operator for the robot, as described in Figure 4.
The SVM-based learning control procedure can be divided into three stages: the training sample gathering stage, the SVM offline learning stage and the control strategy realization stage. Each step is summarized in detail as follows, and the entire control diagram is shown in Figure 5:
- The human operator controls the robot to finish the assigned position motion; during this procedure, the human control inputs and the states of the robot, including the angular velocities of the shell and inner frame, the leaning angle and so on, are collected.
- Choose the polynomial kernel and train an SVM learning machine to characterize the human control strategy.
- Encode and implement the control strategy on the central controller of the robot. Measure the runtime of the controller; if the runtime is too long, return to step 1 and relearn.
- Finally, verify by experiment whether the position control task can be achieved by this learning controller; if not, return to step 1 and relearn.
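The runtime check in step 3 can be prototyped before deployment: time one controller evaluation and compare it to the 100 ms control period. This is a minimal sketch using scikit-learn on synthetic data; on the ARM9 board the actual budget would be much tighter than on a desktop machine:

```python
import time

import numpy as np
from sklearn.svm import SVR

# Small stand-in model trained on synthetic data (illustrative only).
rng = np.random.default_rng(1)
Z = rng.uniform(-1.0, 1.0, size=(500, 3))
u = np.tanh(Z[:, 0] - 0.5 * Z[:, 1])
model = SVR(kernel="poly", degree=3).fit(Z, u)

CONTROL_PERIOD_S = 0.1  # 100 ms control period

# Time a single controller evaluation.
z_k = rng.uniform(-1.0, 1.0, size=(1, 3))
t0 = time.perf_counter()
model.predict(z_k)
elapsed = time.perf_counter() - t0

runtime_ok = elapsed < CONTROL_PERIOD_S
print(f"evaluation took {elapsed * 1e3:.2f} ms; within budget: {runtime_ok}")
```

If `runtime_ok` is false, either the model must be shrunk (fewer support vectors, e.g. by raising epsilon) or the training data re-gathered, which corresponds to returning to step 1.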
The learning control algorithm is programmed on the ARM9 control board to test and verify the learning control strategy. The experiment is carried out on a flat plastic runway. First, the human operator controls the robot with a hand-held joystick, and the sensor data for SVM learning are obtained from the motor controllers and the IMU through the CAN bus. Figure 6 shows the experimental environment.
In the experiment, the control period is 100 ms. First, the human operator controls the robot with the joystick to complete a linear displacement of 10 m, with the initial and final velocities of the robot both zero. During this process, the velocity of the shell, the angular displacement of the shell and the error Δ of the SVM-based learning are recorded.
After the learning controller is encoded on the central ARM9 processor of the robot, the robot can achieve the linear displacement motion smoothly, with the motor driver directed by the output of the learning controller.
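The deployed controller amounts to a fixed-rate loop: read the robot state, evaluate the learned model, and send the command to the motor driver. The sketch below stubs out the hardware interface (`read_state` and `send_command` are hypothetical placeholders; on the real robot these would go through the IMU and the CANopen motor controllers):

```python
import time

import numpy as np
from sklearn.svm import SVR

# Train a stand-in model on synthetic data (illustrative only).
rng = np.random.default_rng(2)
Z = rng.uniform(-1.0, 1.0, size=(200, 3))
model = SVR(kernel="poly", degree=3).fit(Z, 0.5 * Z[:, 0])

def read_state():
    """Hypothetical stub: real code would read the IMU and motor encoders."""
    return rng.uniform(-1.0, 1.0, size=2)

def send_command(u):
    """Hypothetical stub: real code would write to the motor driver via CANopen."""
    pass

CONTROL_PERIOD_S = 0.1  # 100 ms, as in the experiment
u_prev = 0.0
steps = 0
for _ in range(5):  # a few iterations for illustration
    t0 = time.perf_counter()
    x = read_state()
    z = np.concatenate([x, [u_prev]]).reshape(1, -1)  # z(k) = [x(k), u(k-1)]
    u_prev = float(model.predict(z)[0])
    send_command(u_prev)
    steps += 1
    # Sleep out the remainder of the control period.
    time.sleep(max(0.0, CONTROL_PERIOD_S - (time.perf_counter() - t0)))
```

Feeding the previous command back into the regression input matches the difference-equation form of the learned control law.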
The results of ten repetitions of the experiment are 10.13, 9.75, 10.32, 10.27, 9.85, 10.15, 10.36, 10.21, 9.88 and 10.06 m. The maximum error of the linear displacement is about 32 cm; the error is mainly generated by changes in the friction, the inclination of the ground or other random disturbances, which may cause the robot to depart from the straight line.
The experimental results show that the learning-based controller can achieve precise linear displacement control of the mobile robot.
This paper has presented an SVM-based learning position control strategy for an omnidirectional rolling mobile robot. The mechanical structure and motion principle of the robot were described, and the theory and implementation of SVM for robot control were introduced. A learning controller that simulates the human control strategy was designed for the position control of this robot. The feasibility of the learning control method was validated by several experiments on plastic ground, and the results imply the effectiveness of the position control method.
This learning position control strategy relies on an experience-based model, so computational errors can arise from uncertainty and changes in the environment. To enable field exploration by spherical robots, the proposed learning control method needs to be enhanced further.