
CONTROL OF A SIX-JOINT ROBOT ARM USING THE TWO-AXIS MOTION CONTROLLER NI PCI-7342

by
Moataz Ghader
ABSTRACT
The project consists of a control system for a six-joint robot arm using a motion
controller that can control only two axes. The description below gives some
information about the robot arm and the techniques used to extend the control to all
the joints.
I- THE ROBOT ARM
The “Pro-Arm RS 2200” is a 6-joint, 5 degree-of-freedom robot arm. It is designed to
simulate industrial robot operations for laboratory or classroom training, education
and research. It is a joint-coordinate type robot, so named because its movements
resemble those of human joints. It has six axes, and each axis imitates one of the
movements of the human arm.

Figure 1- the PRO-ARM RS 2200 robot arm


Mechanics of the Robot Arm
The robot is driven by 6 stepping motors that move the six joints. The following table
shows the motion specifications of each motor.

Motor No.   Joint      Degree/step   Max movement (steps) from zero position
                                     Positive direction        Negative direction
M1          Body       0.12          1000 (clockwise)          1000 (counterclockwise)
M2          Shoulder   0.12          600 (upward)              600 (downward)
M3          Elbow      0.1           500 (upward)              500 (downward)
M4, M5*     Wrist      0.1           (+, +): counterclockwise, max 1800
                                     (-, -): clockwise, max 1800
                                     (+, -): downward, max 900
                                     (-, +): upward, max 900
M6          Gripper    0.1           Close: +1800              Open: -1800
* If the (M4, M5) input is (900, -900), the wrist will move downward 90º.
If the input is (-900, -900), the wrist will turn clockwise 180º.
When M4 > M5 or M4 < M5, the bending and turning movements act simultaneously.

Controlling the Robot Arm


The Pro-Arm RS 2200 is an old robot originally programmed in DOS Basic and assembler,
with a Zilog Z-80A microprocessor as its basic controller. Since computers running the
old DOS operating system can barely be found today, the original controller of the robot
arm is practically obsolete; and even if such a system were available, any addition made
to extend the robot's functions would carry the burden of sourcing components compatible
with the old system.
From these circumstances rose the idea of a new control system that is compatible with
today's computer systems and platforms, and that can be updated with new functions.
The controller we used is the National Instruments PCI-7342 motion controller, and for
the programming tasks, the most suitable choice was National Instruments LabVIEW.
Robot Arm Kinematics
It is a common practice for the analysis of the robot arm system to define a world
coordinate system and a local coordinate system attached to each joint. This latter
coordinate frame moves with the link. The world coordinate system may be a Cartesian
coordinate system whose origin is fixed to the base. The relationships between different
coordinate frames are described by functions that are static in the sense that there are no
variable derivatives of time explicitly in the equations.
Since the controller we are using provides only two axes, there is no need to define
coordinate frames for all the joints. Instead, the shoulder and elbow joints are used to
demonstrate simultaneous motion, while the other joints work independently.

Coordinate Frame of the Body Joint (Ground Plane Joint)


This joint, as mentioned above, works independently. It is assigned a coordinate frame
that lies on the ground plane. Its origin can be considered to coincide with the origin
of the base frame shown above and is positioned at the center of the robot's width.
Figure 2 shows this coordinate frame.

Figure 2- coordinate frame for the body joint

θ is the angle that the body joint makes with the x-axis. In the control software, the
user enters the target position on the plane in Cartesian coordinates; it is then
converted to angular form through the arctangent function and sent to the body joint.
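As an illustration, here is a minimal sketch of this conversion in C, assuming the body joint resolution of 0.12 degrees per step listed in the motor table; the actual LabVIEW implementation may differ.

#include <math.h>

/* Convert a target point (x, y) on the ground plane into a step count for the
   body joint (M1). The 0.12 deg/step resolution is taken from the motor table;
   the sign convention (positive = clockwise) is an assumption. */
long body_steps_from_xy(double x, double y)
{
    double theta_deg = atan2(y, x) * 180.0 / M_PI;  /* angle with the x-axis */
    return lround(theta_deg / 0.12);                /* steps to send to M1 */
}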
Coordinate Frames for shoulder, elbow and gripper joints
(Vertical Plane Joints)
A world coordinate frame is placed at the base of the robot arm, and three local
coordinate frames are attached to the shoulder joint, the elbow joint and the gripper.
The focus lies on the shoulder and elbow joints, which rotate about axes normal to the
plane they form and passing through their origins. These axes are parallel and are taken
as the Z-axes according to the D-H algorithm for assigning coordinate frames. In the
calculations, we only consider the two coordinate axes, X and Y, of each joint in order
to simplify the matrices and equations. This idea is clarified further in figure 3.

Figure 3- coordinate frame for shoulder, elbow and gripper joints
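To make the planar simplification concrete, the following sketch in C gives the forward kinematics of the shoulder-elbow chain in its vertical plane, using only the X and Y axes of each frame; the link lengths are hypothetical placeholders, not measured robot dimensions.

#include <math.h>

/* Planar forward kinematics for the shoulder-elbow chain.
   theta1, theta2: shoulder and elbow joint angles in radians.
   l1, l2: link lengths (hypothetical values, not the actual robot's).
   Writes the gripper position in the vertical plane to (*x, *y). */
void gripper_position(double theta1, double theta2,
                      double l1, double l2,
                      double *x, double *y)
{
    *x = l1 * cos(theta1) + l2 * cos(theta1 + theta2);
    *y = l1 * sin(theta1) + l2 * sin(theta1 + theta2);
}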

II- HARDWARE ARCHITECTURE


The system of the Robot Experiment consists of the following hardware parts:
- The 6-joint robot arm
- A computer with PCI slot
- NI PCI-7342 Motion Controller Board
- UMI 7764 accessory (Universal Motion Interface)
- 68VHDCI cables for motion and digital I/O connectors
- 68VHDCI-50F SSR Cable Adapter
Figure 4 shows the hardware components and connections as an architecture overview of
the system.
Figure 4- Hardware Architecture

III- DESIGN OF THE DRIVER BOARD


The motion controller features only two axes, each outputting two lines: step and
direction. The robot, however, is actuated by 6 stepper motors, each having 4 phase
lines, so we face the challenge of controlling 6 actuators with two control axes.
Moreover, the controller does not include any amplifier or power driver, so it cannot
power the robot actuators directly.
For all these reasons, we had to implement an interface between the controller and the
robot arm's motors that also supplies the robot with the required power. The design of
the driver board is described below.
The driver board consists of 6 stepper driver units, each connected to one motor of the
robot arm. Since the 7342 controller board consists of only two axes, the board is
designed to receive only two step input lines coming from the step output of UMI 7764,
each connected to three driver inputs.
To prevent unexpected voltage or current faults on the driver side from damaging the
controller, the two circuits are isolated using optocouplers.
Stepper Driver Design
The stepper driver has the following inputs:
- One select line to activate or deactivate the motor.
- One step input that receives the required number of pulses at a required frequency
depending on the set distance and speed respectively.
- One direction input indicating whether to rotate clockwise or counter-clockwise.
The stepper motor has 4 phases (2 center-tapped windings), meaning that the step pulses
must be distributed over 4 sequenced lines in order to move the motor. For this purpose
we programmed a microcontroller that reads the pulses and the direction, and outputs
them in the correct sequence.
Therefore, each driver consists of the following parts:
- Three optocouplers to isolate the input circuit from the driver circuit
- PIC16F630 microcontroller
- Four power transistors
Figure 5 shows the block diagram of a driver circuit.

Figure 5- driver block diagram

Figure 6- The driver board
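As a rough sketch of the sequencing task performed by the PIC16F630 described above, the following C fragment distributes incoming step pulses over the four phase lines using a full-step excitation pattern; the excitation table and the direction convention are illustrative assumptions, not the actual firmware.

#include <stdint.h>

/* Full-step (one-phase-on) excitation pattern for the four phase lines.
   The actual firmware may use a different pattern (e.g. two-phase-on). */
static const uint8_t phase_table[4] = { 0x1, 0x2, 0x4, 0x8 };
static uint8_t phase_index = 0;

/* Called once per incoming step pulse.
   select: driver enabled by its select line.
   dir:    1 = counterclockwise, 0 = clockwise (assumed convention).
   Returns the bit pattern to drive the four power transistors. */
uint8_t on_step_pulse(uint8_t select, uint8_t dir)
{
    if (!select)
        return 0x0;                        /* driver deactivated: phases off */

    phase_index = dir ? (phase_index + 1) & 0x3
                      : (phase_index + 3) & 0x3;
    return phase_table[phase_index];
}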


IV- SOFTWARE ARCHITECTURE
Our software is designed using the client-server architecture, where the server runs
locally with the experiment and hardware, and the client can be any user connected to
the server through the network.
The network technology used is the DataSocket server technology supplied with LabVIEW.
Figure 7 shows the general block diagram of the software architecture for the robot
experiment program.

Figure 7- software architecture of object transport application

V- Graphical User Interface


Figure 8 shows the user interface of CLIENT VI:

Figure 8- Graphical User Interface

As the figure shows, the user interface consists of five parts.


a- Control and Status
This part includes three control buttons and two status LEDs.
The first button, “Start Job”, orders the robot arm to start performing the required moves.
The second button, “Change Parameters”, opens a new VI where the user can change
input data for the experiment concerning velocity, acceleration, object and obstacles
(discussed below).
The third button is the stop button, used to halt the motion of the robot.
Concerning the LEDs, the first LED indicates whether the robot is busy. If it is, the LED
blinks and shows the message “Currently in Job”; otherwise, it is off and shows the word
“Ready”.
The second LED is the job status LED. When the robot is in a job, the LED is lit and shows
which step the robot is currently performing; otherwise, it is off and shows “No Job”.

The job of the robot arm consists of 9 steps:


1- Rotating body towards object
2- Moving gripper down to object
3- Picking object by gripper
4- Lifting object and avoiding obstacles
5- Rotating object towards final position
6- Getting object down
7- Releasing object
8- Lifting arm to initial height
9- Rotating body to initial position
b- Gauges
The velocity and position of each joint of the robot can be monitored through the gauges
on the left. They are updated continuously so that they show the behavior of the motion
in real time.
c- Robot Arm Image
An image showing the robot arm, consisting of the two links and the gripper. It is
updated with the position of the robot arm to simulate its motion in real time.
d- Graph Chart
This chart plots the speed profile and trajectory of body, shoulder and elbow joints at
each step of the job. It shows one plot at a time. The user can choose what plot to show
through the control shown at the upper right of the screen.
e- Velocity Data
In parallel with the graph chart, this part shows the estimated peak velocity, measured
peak velocity, and velocity override of each of the mentioned joints at each step. The
same controls at the upper right of the screen select the joint and step for which the
velocity data is shown. The velocity override is the ratio of the measured velocity to
the estimated velocity.

VI- CHANGING PARAMETERS


As mentioned above, the user can press the “Change Parameters” button to change any data
concerning the input parameters of the experiment. Once pressed, the CLIENT DATA
OUTPUT VI is opened. Figure 9 shows the front panel of this VI.

Figure 9- front panel for CLIENT DATA OUTPUT

As shown in the figure, the user can choose any of the five buttons to change parameters
according to the category concerned. When configuration is finished, the OK button is
pressed and the Change Parameters front panel closes.
Changing parameters of each category is discussed below.
Change Object Parameters

Figure 10- Changing object parameters


Object parameters consist of:
- Dimensions: used in calculating the path when obstacles are present.
- Initial Position: where the object is initially located.
- Final Position: where the user wants the object to be placed.

Change Obstacle Parameters

Figure 11- Changing obstacle parameters

The user can set more than one obstacle, and the software calculates the path so that all
obstacles are avoided, provided that all of them are entered in the obstacle parameters.
As seen in figure 11, obstacles are entered in an array where each element contains the
position and dimensions of one obstacle.

Change Joint Parameters


Whichever joint it is, the user selects its category in the options list and the
parameters to be changed appear, as in figure 12.

Figure 12- Changing Joint Parameters


By entering only the maximum velocity and acceleration of a joint, the motion controller
calculates the profile parameters depending on the distance to be traveled by that joint.
In the case of the Shoulder and Elbow parameters, the velocity and acceleration values
are not given for each of the two joints separately, but for the vector composed of the
two joint variables, shoulder and elbow.
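As an illustration of this kind of profile generation, here is a minimal sketch in C that derives a trapezoidal velocity profile from a maximum velocity, an acceleration and a travel distance; it is a generic reconstruction, not the algorithm actually running inside the PCI-7342.

#include <math.h>

/* Trapezoidal profile timing from max velocity v_max (steps/s),
   acceleration a (steps/s^2) and travel distance d (steps).
   Generic sketch; the PCI-7342 computes its own internal parameters. */
typedef struct {
    double t_accel;   /* time spent accelerating (and decelerating) */
    double t_cruise;  /* time spent at constant velocity (0 if triangular) */
    double v_peak;    /* actual peak velocity reached */
} profile_t;

profile_t plan_trapezoid(double v_max, double a, double d)
{
    profile_t p;
    double d_accel = v_max * v_max / (2.0 * a);  /* distance to reach v_max */

    if (2.0 * d_accel <= d) {
        /* Trapezoidal: accelerate, cruise, decelerate. */
        p.v_peak   = v_max;
        p.t_accel  = v_max / a;
        p.t_cruise = (d - 2.0 * d_accel) / v_max;
    } else {
        /* Triangular: distance too short to reach v_max. */
        p.v_peak   = sqrt(a * d);
        p.t_accel  = p.v_peak / a;
        p.t_cruise = 0.0;
    }
    return p;
}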
Server Name
In addition to adjusting the parameters of the experiment, the CLIENT DATA OUTPUT VI
also features a data field labeled “Server Name” where the URL of the server must be
entered when connecting over the network. When accessing the experiment on the server
computer itself, ‘localhost’ must be entered.

Description of Software
A job of the robot arm consists of 9 steps listed above. Each step concerns a part of the
robot. Specifically, steps 1, 5 and 9 concern the body joint; steps 2, 4, 6, and 8 concern
shoulder and elbow joints; finally, steps 3 and 7 concern gripper joint.
The basic idea of the server program is an incremental counter, called the status
counter, used as an indicator of the job phase. It is incremented every time one of four
events occurs: Start Job becomes true, move complete 1 is true, move complete 2 is true,
or move complete 3 is true, where the move complete events represent the end of motion
for the body, the shoulder & elbow, and the gripper respectively.
This counter controls a case structure: cases 1, 5 and 9 activate body motion; cases 2, 4,
6, and 8 activate shoulder & elbow motion; cases 3 and 7 activate gripper motion.
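A minimal sketch of this dispatch in C (the real implementation is a LabVIEW case structure; the function names are illustrative placeholders for the corresponding sub-VIs):

/* Placeholders for the motion routines implemented as LabVIEW sub-VIs. */
void start_body_motion(void);
void start_shoulder_elbow_motion(void);
void start_gripper_motion(void);

/* Dispatch one job step according to the status counter value. */
void dispatch_step(int inc)
{
    switch (inc) {
    case 1: case 5: case 9:
        start_body_motion();            /* rotate the body joint */
        break;
    case 2: case 4: case 6: case 8:
        start_shoulder_elbow_motion();  /* coordinated vertical-plane motion */
        break;
    case 3: case 7:
        start_gripper_motion();         /* open or close the gripper */
        break;
    default:
        break;                          /* inc >= 10: job finished */
    }
}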
Since each of the controller's axes drives more than one motor, absolute position mode
cannot be used. Instead, relative position mode is chosen, and the axis position is reset
to 0 each time it is commanded to move. Variables are then used to memorize the last stop
of each joint: the last stop value is subtracted from the target position and the result
is loaded into the axis.
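A minimal sketch of this bookkeeping in C; the variable and function names are illustrative, not the ones used in the LabVIEW VIs.

/* Memorized last stop of each joint, in steps (persists between moves). */
static double last_stop[6];

/* Before a move: the axis is reset to 0, so only the difference between the
   target and the memorized last stop is loaded into the controller. */
double relative_target(int joint, double target_position)
{
    return target_position - last_stop[joint];
}

/* After the corresponding "move complete" event: accumulate the position
   actually travelled (read back from the axis) into the last stop. */
void update_last_stop(int joint, double current_position)
{
    last_stop[joint] += current_position;
}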
Figure 13 shows the flow chart describing briefly the behavior of the robot arm job.
Briefly, the chart proceeds as follows:
- Initialization: Job Status = Ready, Inc := 0; the program waits until Start Job becomes true, then sets Job Status = Currently in Job.
- Inc is incremented; if Inc ≥ 10, the job is finished and the program returns to the Ready state. Otherwise the program branches on Inc:
- Inc = 1, 5, 9 (body): reset Axis 1 position; read the body target position, velocity and acceleration; loaded target position 1 := body target position - last stop 1; start motion; when move complete 1 is true, last stop 1 := last stop 1 + current position 1.
- Inc = 2, 4, 6, 8 (shoulder & elbow): reset Axis 1 and Axis 2 positions; read the shoulder and elbow target positions, velocity and acceleration; loaded target position 2 := shoulder target position - last stop 2, loaded target position 3 := elbow target position - last stop 3; start motion; when move complete 2 is true, last stop 2 := last stop 2 + current position 2 and last stop 3 := last stop 3 + current position 3.
- Inc = 3, 7 (gripper): reset Axis 2 position; read the gripper target position, velocity and acceleration; loaded target position 4 := gripper target position - last stop 4; start motion; when move complete 3 is true, last stop 4 := last stop 4 + current position 4.
- Control then returns to the increment step.
Figure 13- flow chart of the robot arm job
