
Chapter 8

Software Development Issues

A Summary...

A short chapter covering the basic issues regarding the development and selection of software for interfacing problems: real-time operating systems, multi-tasking and multi-user systems, windows environments and their relationship with object-oriented programming (OOP), and the user interface.

[Chapter opening diagram: a computer connected to an external system through digital-to-analog and analog-to-digital conversion, scaling or amplification, isolation, protection circuits and energy conversion stages, with external voltage supplies feeding the analog side.]


8.1 Introduction
Those who relish the development of electronic hardware often place insufficient
emphasis upon the software that needs to drive that hardware and conversely, those
who specialise in software development tend to place insufficient emphasis on the
functionality of the total system. The development of a modern mechatronic system
requires a balanced approach and a recognition of the obvious point that the end system
must be a cohesive union of electronics, mechanics and software.
Most modern computers are themselves mechatronic systems and in Chapter 6,
we began to look at the interrelationship between the electronics (digital circuits),
mechanics (disk-drives, key-boards, etc.) and software (operating systems, executable
programs, etc.) that are combined to generate a very effective development tool.
However, as we now know, the computer can often become a single building block
within a larger mechatronic system involving other devices such as motors, relays, etc.
In this chapter, we will examine the software aspects of mechatronic design. We begin
by reviewing two of the relevant diagrams from Chapter 6. Figure 8.1 shows the basic
hardware and software elements that are combined to form a modern computer system.

[Figure content: software layers (high level language, compiler, assembler, executable programs 1 to N, operating system) sitting above the hardware elements (ROM holding the BIOS and bootstrap, CPU, memory, interrupt controller, clock, and the keyboard, graphics and disk-drive controllers with their peripherals) linked by the address and data buses.]

Figure 8.1 - Basic Hardware and Software Elements Within a Computer System


Figure 8.2, also reproduced from Chapter 6, shows the interrelationship between
software entities in a little more detail.

[Figure content: the end-user's software (assembler, high level language, compiler, executable programs 1 to N), the operating system acting as the software/hardware interface, the BIOS and bootstrap software, and the hardware (disk-drive controller, CPU, graphics controller, keyboard, memory).]

Figure 8.2 - Interfacing Hardware and Software via an Operating System

This chapter is really about the decision making process that one needs to
develop before selecting various software elements that are required for a mechatronic
application. This is not a chapter designed to teach you computer programming or
about all the intricacies of a particular operating system - there are countless good text
books on such subjects already.
In order to set the framework for further discussions, we need to develop some
sort of model for the type of systems that we will be discussing. Figure 8.3 shows the
basic arrangement with which many will already be familiar. For any given set of
computer and interface hardware, one is left with the software elements that need to be
selected:

•  The operating system for the computer
•  The development language for the interfacing software
•  The development language for the application software
•  The type of user interface to be provided by the application software.

This is not to suggest that these issues should be decided upon independently of the
hardware selection. In fact, although the hardware is most likely to be implemented
first, both software and hardware should be jointly considered in preliminary design
stages.


[Figure content: application software and the operating system within the computer, coupled through interface software and interface hardware to the external system in a closed loop.]

Figure 8.3 - The Basic Closed-Loop Control Elements

If one were designing a system of the type shown in Figure 8.3 during the 1970s, then the hardware and software selection issues could be readily resolved - fundamentally because there were few options from which to choose. A closed-loop control system would have typically been implemented on a Digital Equipment Corporation PDP-11 computer because it was one of the few capable of providing
input/output channels - the choice of computer then defined the range of operating
systems, the operating system defined the types of programming languages available
and the interface cards were normally provided by the computer manufacturer, thereby
defining the software interfacing. However, in recent times, the number of options has
expanded quite dramatically and so one has to ensure that the broad range of
possibilities is examined before committing to one particular hardware/software
architecture.


8.2 Operating System Issues


Operating systems have meandered through a number of different phases since
the 1960s to reach the point at which they are today. In fact, the phases have included
multi-tasking-multi-user systems (mainframes and mini-computers), then single-tasking-single-user systems (PCs), then multi-tasking-single-user systems (workstations)
and more recently, multi-tasking-multi-user systems (PCs and workstations).
Early operating systems were primarily designed for large mainframe and minicomputer systems in banks, airline companies, insurance companies, etc. The
emphasis there was to ensure that many users could access a large central computer and
hence the systems were based upon multi-tasking-multi-user operating systems.
Another key feature of such systems was that they had to provide security between
application tasks and between the multiple users to ensure system reliability and
integrity of data. We shall, however, classify these operating systems as "general-purpose" in nature, since they are not targeted towards one specific field. They provide
an office-computing environment where the response times are generally designed
around the human users.
In the 1970s, companies such as Digital Equipment Corporation (DEC) realised
the growing need for control computer systems and also determined that general-purpose operating systems were not directly suitable for control applications.
Companies such as DEC developed operating systems which were referred to as being
"real-time" in nature and suitable for specialised control applications. A typical
example was RSX-11M. Such operating systems were initially implemented on minicomputers and still provided multi-tasking-multi-user environments, but here the
emphasis was on the performance of the control tasks rather than the human operator at
the keyboard. At the same time, the general-purpose multi-tasking-multi-user operating
systems continued to flourish and the most notable of these was the UNIX system, originally developed at AT&T's Bell Laboratories and subsequently extended at the University of California, Berkeley.
The wide-scale acceptance of the PC in the 1980s undoubtedly surprised even the
original designers (IBM) and software manufacturers. Initially, the intention was that
the PC was to be used as either a stand-alone device in the home or office or as a
"dumb" terminal to a mainframe. The operating system requirements were apparently
minimal and so the single-tasking-single-user Microsoft DOS system came into being, borrowing a number of concepts (such as its hierarchical file system) from the increasingly popular UNIX system. The major restrictions of early DOS versions included a 640 kB memory limit and an inability to cope with large hard disk-drives. Had the designers known then what we know now, perhaps they would have created an operating system more amenable to migration over to the more sophisticated UNIX system. However, in the 1980s, the market was still
heavily segmented, with mainframes, minicomputers, workstations and PCs each
holding a distinct niche.


As it eventuated, the inadequacies of the PC in the 1980s were not resolved directly through the improvement of the DOS system (since this was still acquiring an
increasing market share) but rather through the introduction of workstations which were
a half-way house between the mini-computers and the PCs. The workstations tended to
have more powerful CPUs than the PCs and, unrestricted by the limitations of DOS,
were able to run software traditionally run on mini-computers and mainframes - particularly because workstations followed the UNIX operating system path.
Nevertheless, the wide-scale support for PCs in the 1980s led to a number of third party
companies developing improvements to DOS or alternatives to DOS that would enable
PC users to expand the capabilities of their machines, particularly in the area of real-time control. The most common improvement/alternative to DOS was an operating
system that provided multi-tasking so that the computer could be used for control
purposes. Two examples are the QNX system (a scaled-down real-time version of
UNIX for PCs) and Intel's iRMX operating system which was designed for real-time
control.
Although workstations in the 1980s tended to be based upon the UNIX operating
system, a clear new trend had also emerged in this area. Workstation operating systems
had abandoned the old-fashioned text-based user interfaces in favour of an environment
composed of graphical, on-screen windows, with each window representing one active
task or application of interest. This interactive type of graphical environment,
originally developed by the XEROX corporation at their Palo Alto Research Centre
(PARC), made the human task of interacting with the operating system considerably
easier because it was based on the use of graphical icons, selected by a mouse or
pointer. The same environment was exploited by the Apple corporation in their
computers.
In many instances, the advent of the window environment was seen to be a
simple adjunct to the older operating systems, but its ramifications proved considerably
greater. In the 1970s, programmers tended to develop software with text-based user
interfaces. These proved to be rather tiresome and if poorly designed, made data entry
considerably more difficult than it might otherwise need to be because entire sections
of text or data often had to be re-entered when incorrect. This was improved upon
considerably by the adaptation of the XEROX interactive environment in the 1980s,
where software developers designed programs with pull-down menu structures.
Typical examples of this were found in DOS applications such as XEROX Ventura
Publisher and numerous other third-party packages. However, a fundamental problem
still existed - that is, all application programs had different ways of implementing the
same, common functions. Although experienced users could quickly come to terms
with software based on pull-down menus, most novices had great difficulty moving
from one package to another.


The windows operating system approach tends to unify the user-interface side of
the applications that run within the environment. It does so because it provides
considerably better development tools to software houses. Older operating systems
provided only bare-bones functions - that is, the basic interface between the computer
hardware and the human user. Windows operating systems provide the basic interface
and additionally, an enormous range of graphical, windowing and interactive text and
menu functions that are not only used by the operating system, but can also be called up from libraries by user programs. Thus, the source code for a high-level-language program developed for a windows environment is likely to be shorter than an
equivalent piece of code for a non-windows environment - the windows program makes
use of standard user-interface libraries while the latter requires the complete set of
routines to be developed. As a consequence, windows based programs all tend to have
a familiar appearance and operation, thereby minimising the learning curve.
The Microsoft corporation endeavoured, in the late 1980s, to transfer the benefits
of the windows environment to PC level. It did so by superimposing a package, which
it named MS-Windows over the top of the executing DOS system. This was a
significant step because it provided a migration path for the enormous DOS market to
move to a windows environment. The MS-Windows system took the single-tasking-single-user DOS system and converted it into a multi-tasking-single-user system. As
one can imagine, this was a substantial achievement but it could only be seen as an
intermediate measure, designed to wean users and developers away from DOS, to the
extent where DOS could be removed from beneath the windows environment. This is
certainly the case with the more modern windows operating systems for PCs, which
more closely resemble workstation operating systems (ie: UNIX based) and can, in fact
be ported to a range of different hardware platforms.
At the same time, it has become evident that the lifespan of the mainframe
computer system is now very limited. Advances in processor performance, networking
and operating systems have considerably lessened the need for high-cost mainframe
systems, to the extent where they are gradually being phased out. The end result is that
for the next decade, we will live in an era where similar or identical operating systems
will reside on a range of different hardware platforms and those operating systems will
provide a much greater degree of functionality for end-users and user-interface support
for software developers.
It is important to keep this historical perspective in mind in terms of selecting an
operating system for control purposes. Certainly, there are short-term and direct
requirements such as real-time performance that cannot be overlooked. However, there
are also political factors, such as the widespread support of the operating system in
terms of development tools and so on, that need to be considered.


In a control environment, there are typically a number of tasks that need to run
concurrently and this tends to limit the choice of operating system. In particular,
referring to Figure 8.3, a number of tasks can make up the final application:
(i)  A task that takes in data from the hardware interface and places it into memory (variable storage locations) where it can be used by other tasks - the same task may also take data from memory (variable storage locations) and transfer it to relevant registers that cause the interface to output the information

(ii)  A task that reads the input variables acquired by (i) and processes them through a control algorithm, thereby generating output variables which are passed back through (i)

(iii)  A task that is responsible for interacting with the system user, displaying system state and enabling the user to change parameters or stop the system as and when required.
Over and above these tasks, there are other tasks which the operating system may be
running for general house-keeping or because they are initiated by a system user. Since
computers only have a limited amount of memory, and most multi-tasking operating
systems use paging, it is possible that any one task will be switched out of memory and
placed onto disk for a short period of time.
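
To make this task split more concrete, the sketch below shows one way that tasks (i) and (ii) might be arranged under a multi-tasking operating system. It is written in Free Pascal/Delphi style using the TThread and TCriticalSection classes; the interface-card accesses and the control law are placeholders only (a crude first-order response stands in for the real hardware) and the main program plays the role of the user-interface task (iii).

program ControlTasks;

{$mode objfpc}

uses
  {$IFDEF UNIX} cthreads, {$ENDIF}
  Classes, SyncObjs, SysUtils;

var
  Lock: TCriticalSection;                 { protects the shared variables }
  MeasuredValue: Double = 0.0;            { variable storage filled by task (i) }
  DemandValue: Double = 0.0;              { variable storage filled by task (ii) }
  Running: Boolean = True;

type
  TIOTask = class(TThread)                { task (i): hardware interface <-> memory }
  protected
    procedure Execute; override;
  end;

  TControlTask = class(TThread)           { task (ii): control algorithm }
  protected
    procedure Execute; override;
  end;

procedure TIOTask.Execute;
begin
  while Running do
  begin
    Lock.Enter;
    try
      { placeholder: a real version would read the interface card's input
        registers into MeasuredValue and write DemandValue to its output
        registers - here a sluggish external system is simulated instead }
      MeasuredValue := MeasuredValue + 0.05 * (DemandValue - MeasuredValue);
    finally
      Lock.Leave;
    end;
    Sleep(10);                            { nominal 10 ms input/output period }
  end;
end;

procedure TControlTask.Execute;
begin
  while Running do
  begin
    Lock.Enter;
    try
      { placeholder control law: drive the demand towards a fixed set-point }
      DemandValue := DemandValue + 0.1 * (100.0 - MeasuredValue);
    finally
      Lock.Leave;
    end;
    Sleep(20);                            { nominal 20 ms control period }
  end;
end;

var
  IOTask: TIOTask;
  ControlTask: TControlTask;

begin
  Lock := TCriticalSection.Create;
  IOTask := TIOTask.Create(False);
  ControlTask := TControlTask.Create(False);

  WriteLn('Press <Enter> to stop the system');    { task (iii): user interface }
  ReadLn;
  Running := False;

  IOTask.WaitFor;   ControlTask.WaitFor;
  IOTask.Free;      ControlTask.Free;
  Lock.Free;
end.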
A number of issues need to be resolved before the operating system can be
selected to perform such a control task. The questions that need to be resolved are:

•  Can we be assured that the timing of inputs and outputs in (i) is deterministic? In other words, can the operating system provide a mechanism whereby the time between task (i) issuing a command to change a variable and the actual change of that variable is well defined?

•  Following on from the point above, is there an accurate system clock that can be read on a regular basis to ensure appropriate timing? In general-purpose operating systems, when a program makes an operating system call requesting the current time, the actual time may not be passed back to the program for a lengthy (in control terms) period, thereby making the figure meaningless - a simple test of the kind sketched below can expose this

•  Can tasks (i) - (iii) be allocated relative priorities in the overall operating system, so that they always receive a fixed proportion of the CPU's time?

•  Do we know whether the important control tasks will always remain in memory, or will they be paged to and from disk, thereby potentially slowing down an important, real-time control function?
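
As a rough illustration of the second question, the short sketch below (plain Pascal using only the standard SysUtils and DateUtils routines) reads the operating system clock as fast as it can and reports the worst gap seen between two successive readings; on a general-purpose system this figure, which reflects both clock resolution and call latency, can be surprisingly coarse. The loop count is arbitrary.

program ClockCheck;

uses
  SysUtils, DateUtils;

var
  Previous, Current: TDateTime;
  GapMs, WorstGapMs: Int64;
  I: Integer;

begin
  WorstGapMs := 0;
  Previous := Now;
  for I := 1 to 100000 do
  begin
    Current := Now;                          { operating system call for the time }
    GapMs := MilliSecondsBetween(Current, Previous);
    if GapMs > WorstGapMs then
      WorstGapMs := GapMs;
    Previous := Current;
  end;
  WriteLn('Worst gap between successive time readings: ', WorstGapMs, ' ms');
end.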


A number of general-purpose operating systems cannot provide the levels of support that will satisfactorily address the above questions. In older versions of UNIX,
for example, I/O (such as serial communications, etc.) was handled in the same way as
files were handled. This meant that all requests for I/O were queued along with
requests for file handling, thereby creating potential problems for important control
signal input/output. On the other hand, however, even a general-purpose operating
system can be used for real-time control functions, provided that the system designers
are aware of the limitations. Many control computers are dedicated devices and so it is
possible to minimise or eliminate undesirable characteristics from a general-purpose
operating system. For example, in a system where I/O is handled in a queued fashion,
along with files, it may be possible to ensure that file handling is minimised or
eliminated while the control system is running.
Real-time operating systems generally address all the questions cited above,
primarily because those issues are the basis of their fundamental design. However, a
drawback of real-time operating systems is that they are specialised, and as a result,
have a much lower market share than many general-purpose operating systems. This
has important ramifications for system designers. In the short term, it means that
development tools (compilers, etc.) for real-time operating systems may be less
sophisticated and more costly than those offered for general-purpose operating systems.
It also means that the computer selected for control purposes is more likely to become
an "island of automation" because of the difficulty of transferring or porting software
or information between it and other general-purpose systems.
In the long term, the lower market share of real-time operating systems is more
significant. It ultimately means that there is less money invested in such systems for
improvement and development of tools than there is in general-purpose systems. The
end result can be that the long-term improvements in general-purpose operating
systems can ultimately provide better performance than the real-time systems and also,
that the real-time systems are discontinued and designers are left to port software back
to the general-purpose platform.
As a result of the above points, one needs to formulate an operating system
selection strategy that is both technical and political, if one is to have a product that is
viable in the long-term. A logical approach to consider in selecting an operating
system is to begin with the highest-volume, lowest-cost, general-purpose operating
system available. If this is capable of performing the required task, keeping in mind
the sort of questions raised above, then one should pursue this course. However, if it is
not capable of performing the task, and it is not possible to compensate for software
inadequacies with higher performance hardware, then one needs to pursue more
specialised systems, again, commencing with those that are most widely accepted.


Thus far, we have not examined the possibility of using single-tasking-single-user operating systems for control purposes. In fact, the Microsoft DOS system has been
successfully used in many applications for both monitoring and control functions.
There are several techniques that can be used to enable a single-tasking operating
system to carry out real-time control. These include:

•  Polling techniques built into the control program
•  Interrupt programming techniques
•  Distributed control techniques.

The polling and interrupt programming techniques have already been discussed in 6.7
and have both been widely used in control systems development based upon PCs. The
distributed control technique has arisen because of the ever decreasing cost of
processing power that enables interface cards to be developed with on-board
processors. This sometimes means that the personal computer provides little more than
a front-end user-interface, while the bulk of the control work is done by microprocessor
or DSP based interface cards. A typical scenario is shown in Figure 8.4. In effect
however, the system is actually multi-tasking because each intelligent interface board is
running one or more tasks (eg: PID control loops as shown in Figure 8.4).

[Figure content: a personal computer running a single-tasking operating system and an application program (user interface), linked by interrupt-driven interfacing software to intelligent interfaces 1 to N (each performing PID control), which in turn drive external systems 1 to N.]

Figure 8.4 - Using Distributed Control Based on Intelligent Interface Cards
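
As an indication of the kind of task that such an intelligent card might execute on behalf of the host computer, the sketch below implements a simple textbook PID loop in Pascal. The gains, set-point, sampling period and the ReadProcessValue/WriteActuator routines are illustrative placeholders only - a first-order plant response is simulated in place of the card's real converter registers.

program PIDSketch;

const
  Kp = 2.0;  Ki = 2.0;  Kd = 0.1;        { proportional, integral and derivative gains }
  Dt = 0.01;                             { sampling period in seconds }
  SetPoint = 50.0;

var
  ProcessValue, ControlOutput: Double;
  ErrorNow, ErrorPrev, Integral: Double;
  Step: Integer;

function ReadProcessValue: Double;
begin
  { placeholder: a real card would read its analog-to-digital converter here;
    instead, a first-order response of the plant to the last output is simulated }
  ReadProcessValue := ProcessValue + Dt * (ControlOutput - ProcessValue);
end;

procedure WriteActuator(Value: Double);
begin
  ControlOutput := Value;                { placeholder for the digital-to-analog write }
end;

begin
  ProcessValue := 0.0;   ControlOutput := 0.0;
  ErrorPrev := 0.0;      Integral := 0.0;

  for Step := 1 to 2000 do               { each pass represents one sampling interval }
  begin
    ProcessValue := ReadProcessValue;
    ErrorNow := SetPoint - ProcessValue;
    Integral := Integral + ErrorNow * Dt;
    WriteActuator(Kp * ErrorNow + Ki * Integral + Kd * (ErrorNow - ErrorPrev) / Dt);
    ErrorPrev := ErrorNow;
  end;

  WriteLn('Final process value: ', ProcessValue:0:2);
end.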


8.3 User Interface Issues


The user interface, frequently underestimated by engineers, is probably one of the
most important parts of a mechatronic system. It largely determines:

•  The user-friendliness of the system
•  The learning curve associated with using the system
•  The outward appearance of the system and hence, its market appeal.

Moreover, the user-interface can affect the operation of the system because it assists in
the accurate entry of data. There are essentially five different types of user interface,
reflecting different phases of computer development since the 1960s. These are:
(i) Hollerith punch card input and line-printer output
(ii) Line-oriented input via keyboard and output via text screen
(iii) Simple menu selection input via keyboard and cursor keys and output using
text or graphics screen
(iv) Pull-down menu selection input via keyboard, cursor keys and mouse and
output using text or graphics screen
(v) Full interactive graphics environment with pull down menus, graphical
icons, etc. - input via mouse, keyboard and cursor and output via graphics
screen using multiple window formats (as exemplified in Figure 8.5).

Figure 8.5 - Typical Screen from Microsoft Corporation Word for Windows


The Hollerith card system has effectively been obsolete since the late 1970s and is
no longer in use. The line-oriented text input/output system was originally used on all
levels of computer but is now becoming obsolete, although still used in some
mainframe applications. User interface types (iii) to (v) have all been used extensively
on PCs and workstations, with type (v) interfaces currently the industry standard.
The major difference between the type (v) user interface and all the others is that
the framework for the interface and the executable code for many of the functions is
becoming an integral part of modern "window" operating systems. Those developing
software in the windows formats often do so with cognisance of other common
packages. This enables people to keep common features (such as file handling)
operating in a similar way over a wide range of different software applications. For
example, Figure 8.5 shows the user interface from the Microsoft corporation's Word for
Windows word-processing system. Figure 8.6 shows the user interface from
Borland International's Turbo Pascal compiler for Windows. Note the similarity
between common functions such as File, Edit, Window, Help, etc. The tools provided
by various windows environments to create such software don't restrict the software
developer to such formats but they do make it easier for the developer to create similar
functions - particularly file handling, help, scrolling, etc.

Figure 8.6 - Typical Screen from Borland International Turbo Pascal for Windows


There is much to be said for developing control and other engineering applications that follow common software trends, such as those provided in word-processors, spread-sheets, etc. If system users sense some familiarity with a new piece
of software then they are more likely to use and explore that software systematically.
This should minimise unexpected results and damage to physical systems.
Since the mid-1980s, most software houses have created programs that enable
data entry to occur in a word-processing type format - in other words, the de facto
standard technique is to enter data only once and thereafter, correct only the portions
that have been mis-keyed or entered incorrectly. Compare this to the old, line-oriented
technique where incorrect data had to be completely re-entered - as often as not, an old
mistake was corrected and a new mistake created. The software development tools
provided in a windows environment are all designed to facilitate the "correct only
incorrect portions" technique of data entry. However, while the user-interface
techniques cited in (i) to (v) get progressively easier for the end-user, they
unfortunately become progressively more complex for the software developer. In other
words, a menu system is more difficult to implement than simple lines of text entry, a
pull-down menu system is more difficult than a simple menu and a windows based
menu system is much more difficult again. Although windows environments provide
the tools and low-level routines that enable developers to use pull-down menus, file and
window handling procedures, the scope of the tools is, by necessity, enormous because
they can be used in so many different ways. The learning curve for software
developers is therefore considerably larger than it was for older user interfaces.
Data entry is of course only one side of the user interface and data output is the
other side. It has long been established that displaying countless numbers on screens is
quite ineffective, particularly when those numbers represent entities such as system
state and the state is continually changing. Graphical representations that enable
system users to relate to numerical quantities are naturally the preferred option.
However, given limited screen resolutions, graphical representations (animations) are
generally only an approximation of the actual system behaviour. Quite often, these
need to be supplemented with important numerical quantities or alarms that alert
system users to specific conditions that may go unnoticed in approximate displays. The
widespread acceptance of windows operating systems has led to a large number of
third-party software houses developing scientific and engineering software tools to
supplement the basic user interface input/output tools provided by the operating system.
Typically, the additional tools provided include the ability to display animated meters
(instruments), graphs, warning lights, gauges, etc. - in fact, a graphical simulation of
typical industrial devices that help the user to identify with various quantities.
In the final analysis, the development of user interfaces is really a closed-loop,
iterative process. It requires a great deal of interaction between the developers and
untrained end-users in order to observe how a basic interface can be changed or
enhanced to improve performance.


8.4 Programming Language Issues - OOP


There has been much debate, since the 1960s, regarding the various options in
programming languages. Computer scientists are continually arguing the merits of new
languages and the benefits of "C" over "Pascal" or "Fortran". From an engineering
perspective, we need to divorce ourselves from these arguments because in a larger
sense, they are trivialising the main objective of computer programming - that is, to
create an operational piece of software that:

•  Will reliably and predictably perform a required function
•  Can be easily read and understood by a range of people
•  Can be readily modified or upgraded because of its structure and modularity.

If we refer back to Figure 8.3, we can see that for a general mechatronic control
application, there are several levels of software that need to be written:

•  The interface between the hardware (interfacing card) and the main application - the I/O routines
•  The user interface
•  The control algorithm.

This leaves us with the problem of deciding upon various levels of programming and
possibly, programming languages.
The software that couples the hardware interface card to the main application is
the one most likely to be written in an assembly language, native to the particular computer
processor upon which the system is based. Traditionally, most time critical routines
(normally I/O) were written in an assembly language to maximise performance.
Additionally, many routines that directly accessed system hardware (memory locations,
etc.) were also coded in assembler. However, there are two reasons why many
developers may not need to resort to assembly language programming. Firstly, most
modern compilers are extremely efficient in converting the high-level source code
down to an optimised machine code and there is generally little scope for improving on
this performance by manually coding in assembly language. Secondly, many
manufacturers of I/O and interfacing boards tend to do much of the low level work for
the developer and provide a number of highly efficient procedures and functions that
interface their hardware to high level language compilers such as Pascal and C.
Given that the choice of interface software has largely been determined by the
board manufacturer, a system developer is still left with the problem of selecting a high
level language for implementation of the control algorithm and user interface. Contrary
to the opinions of many computer scientists, from an engineering perspective, the
choice of a high level language is largely irrelevant and is best decided on a political
basis rather than a technical basis - in other words, issues such as:


•  Market penetration of the language
•  Industry acceptance
•  Compatibility with common windows operating systems
•  Availability of third-party development tools
•  Availability of skilled programmers

are far more important than the actual syntax differences between Basic, C, Fortran and
Pascal. In fact, viewing the process with an open mind, one would probably find that
most modern compilers satisfy the above criteria.
In the 1970s and 1980s there was much ado about the deficiencies of the Basic
programming language. Many of these were valid because the language at that time
was largely unstructured (ie: was based on GOTO statements) and was often interpreted
(converted to machine code line by line) rather than compiled (converted in its entirety
to an executable binary code). This meant that Basic was very slow and programs
developed in the language were often untidy and difficult to maintain. However,
modern versions of Basic contain the same structures as the other languages, including
records, objects, dynamic data structures, etc. Provided that one has the discipline to
write structured, modular code, with no procedure, function or main program greater
than the rule-of-thumb "30 lines" in length, then the differences between modern Basic
and C become largely syntactic.
Fortran is another language that appeared to be in its final days in the 1980s but
has since had a recovery. Fortran was originally the traditional programming language
for engineers because of its ability to handle complex numbers and to provide a wide
range of mathematical functions. An enormous number of engineering packages were
developed in Fortran, particularly older control systems, finite element analysis
packages and so on. When these advantages disappeared as a result of the large
number of third-party development tools for other languages, it seemed as though C
would become the dominant new language in the 1980s. However, the availability of
Fortran compilers, complete with tool-boxes for the windows environments has slowed
the conversion of older programs from Fortran to C and has encouraged continued
development in Fortran.
Pascal, originally considered to be a language for teaching structured
programming, came into widespread use in the 1980s as a result of the Borland
International release of Turbo Pascal. This low cost development tool sparked an
enormous range of third-party development packages and particularly encouraged
interface manufacturers to provide Turbo Pascal code for their hardware. Again, as a
result of windows development tools for the language, it is evident that it will remain
viable for some years to come.


The C programming language, most commonly favoured by computer scientists as a result of its nexus with operating systems such as UNIX, has also become a
dominant, professional high-level language.
Although structurally no more
sophisticated than most modern implementations of the other languages, it has become
the preferred language of software development houses and as a result, is supported by
a large range of third-party software. Many hardware manufacturers provide C-level
code support for their products and since C is the basis of modern windows
environments it is also extensively supported by windows development tools.
Object-Oriented Programming (or OOP) based software development has become
a major issue in recent years. It should not be regarded as a major change to
programming techniques, but rather, a logical extension of structured programming
languages. Most modern compilers in Pascal, Fortran and Basic incorporate objects in
their structure. The C language incorporating objects is generally referred to as C++.
An object is really just an extension of the concept of a record, where a group of
variables (fields) are combined under a common umbrella variable. For example, using
Pascal syntax, the following record variable can be defined:
Patient_Details = Record
    Name:    String [12];
    Address: String [25];
    Phone:   String [7];
    Age:     Integer;
End {Record};

A variable of type Patient_Details then contains the fields of Name, Address, Phone
and Age which can either be accessed individually or as a record group.
An object combines a group of variables and the functions and procedures (subroutines) that handle those variables. For example:
Patient_Details = Object
    Name:    String [12];
    Address: String [25];
    Phone:   String [7];
    Age:     Integer;
    Procedure Enter_Name;
    Procedure Enter_Age;
    Procedure Write_Name;
    Procedure Write_Age;
        :
End {Object};


Although the concept of combining variables with the functions and procedures
that handle those variables may not seem to be of importance, it becomes far more
significant because objects are permitted to inherit all the characteristics of previous
objects and to over-write particular procedures and functions:
Special_Patient_Details = Object (Patient_Details)
    History: String [100];
    Procedure Enter_Name;
    Procedure Enter_History;
        :
End {Object};
In the above piece of code, variables of the type "Special_Patient_Details" inherit
all the fields of variables of type "Patient_Details" - moreover, they have an additional
field (History) and procedure (Enter_History) added and a new procedure
(Enter_Name) which over-writes the other procedure of the same name.
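
The following short, runnable sketch (Free Pascal / Turbo Pascal style objects) fills in illustrative implementations so that the effect of inheritance and of over-writing Enter_Name can be seen directly; the procedure bodies are invented for the example and are not part of the original listing.

program OOPDemo;

type
  Patient_Details = object
    Name: String[12];
    procedure Enter_Name;
    procedure Write_Name;
  end;

  Special_Patient_Details = object(Patient_Details)
    History: String[100];
    procedure Enter_Name;                { over-writes the inherited version }
  end;

procedure Patient_Details.Enter_Name;
begin
  Write('Patient name: ');
  ReadLn(Name);
end;

procedure Patient_Details.Write_Name;
begin
  WriteLn('Name on file: ', Name);
end;

procedure Special_Patient_Details.Enter_Name;
begin
  Write('Special patient name: ');
  ReadLn(Name);
  History := 'New special patient record created';
end;

var
  P: Patient_Details;
  S: Special_Patient_Details;

begin
  P.Enter_Name;                          { uses the inherited Patient_Details version }
  S.Enter_Name;                          { uses the over-written version }
  P.Write_Name;
  S.Write_Name;                          { Write_Name is inherited unchanged }
end.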
Most modern programming is based upon the principles of OOP - particularly for
windows environments. All the basic characteristics of the windows environments,
including file handling, window and screen handling, etc. are provided to developers as
objects. The developer's task is to write programs where the basic attributes are
inherited and relevant procedures and functions over-written to achieve a specific task.
The problem with the concept is that there are so many objects, variables and
procedures provided in the windows environment that it is difficult to know where to
begin and hence the learning curve is much longer than for older forms of programming
- however, the end-results should be far more professional in terms of the user interface
and in the long-term, programmers can devote more time to the task at hand rather than
the user interface.
The other major programming difficulty that people tend to find with the
windows environments is the event-driven nature of such environments. This presents
a significant departure from the traditional form of programming. In windows
environments, any number of events can trigger some routine in a program - for
example, the pressing of a mouse button, the pressing of a key on the keyboard, the
arrival of a message from the network, etc. In essence then, the task of developing a
program for a windows environment dissolves into "WHEN" type programming. In
other words, we need to develop a procedure for "when the left mouse button is
pressed" and "when a keyboard button is pressed" and so on and so forth. This is not
as easy a task as one might imagine, particularly given the number of events that can
occur and also because of the general complexity of the environment in terms of the
number of objects and variables. Consider, for example, how many events can occur at
any instant for the environment of Figure 8.5 - and these events are only for the user
interface. In a control system one also has to deal with the dynamic interaction with
hardware interfaces.
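
A trivial sketch of this "WHEN" style of programming is given below, written in plain Pascal with the event names and handler routines invented purely for illustration: a routine is registered against each event and a dispatch loop calls whichever routine matches the event that has just occurred, rather than the program executing from top to bottom.

program EventDemo;

{$mode objfpc}

type
  TEvent = (evKeyPress, evMouseClick, evNetworkMessage);
  THandler = procedure;

procedure WhenKeyPressed;
begin
  WriteLn('WHEN a key is pressed: handle the keystroke');
end;

procedure WhenMouseClicked;
begin
  WriteLn('WHEN the mouse is clicked: handle the selection');
end;

procedure WhenMessageArrives;
begin
  WriteLn('WHEN a network message arrives: handle the message');
end;

var
  Handlers: array[TEvent] of THandler;
  E: TEvent;

begin
  { register one routine per event - this replaces a single top-to-bottom program }
  Handlers[evKeyPress]       := @WhenKeyPressed;
  Handlers[evMouseClick]     := @WhenMouseClicked;
  Handlers[evNetworkMessage] := @WhenMessageArrives;

  { a real windows environment supplies the events; here three are simply simulated }
  for E := Low(TEvent) to High(TEvent) do
    Handlers[E]();
end.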


It appears that both OOP and windows environments based on OOP will remain
as a major entity in computing throughout the 1990s and so the issues of development
complexity must be tackled in modern mechatronic control systems. While the
windows environments have provided considerable benefits for software houses
developing commercial programs, they have (because of the extended learning curves)
also provided new problems for engineers that only need to develop specialised
programs on an infrequent basis.
It is evident that many modern compilers, even those with OOP, have still made
the software development task somewhat difficult because there have been few
attempts at simplifying the problems involved in dealing with windows environments.
However, it is also apparent that newer compilers will address the shortcomings of the
windows development process for infrequent programmers. In particular, the most
recent trend in compilers has been to provide a much simpler interface between the
infrequent programmer and the complexity of the windows environment. This has
important ramifications for engineering users who are unlikely to exploit more than a
small percentage of the functionality of a windows environment, as opposed to the
general-purpose software developers that may require the broad spectrum of functions.


8.5 Software Engines


Most professionals prefer not to "reinvent the wheel" when they design systems
or products because re-invention often leads to the resolution of numerous problems
that are better solved (or have already been solved) elsewhere. However, in terms of
software, it is interesting to note that many designers have not recognised the
development of software "wheels" that can be better used than traditional high-level-language compilers.
Most modern, windows-based applications, including spreadsheets, databases,
word-processors, CAD systems, etc. share a common object-oriented structure that
leaves open the possibility of modifying their appearance to achieve some specific
objective. In other words, we may be able to modify a spreadsheet program, for
example, so that it can act as a control system. We can do so by changing the user-interface of the application and by changing the direction of data flow.
There are two ways in which modern applications can be restructured to create
new applications:

•  Through software developed in a traditional high-level-language compatible with the object-oriented nature of the original application
•  Through the use of embedded or macro languages built into the application itself.

This means that modern general-purpose applications, such as spreadsheets, can no longer be considered simply as ends in themselves, but also as "software engines" that can
be used to achieve some other objective. The ramifications of this are quite substantial,
particularly because modern spreadsheet and database programs already have
considerable graphics, drawing, animation and networking capabilities that can be
harnessed and adapted rather than re-invented. In the case of databases there is also the
issue of access to a range of other common database formats that has already been
resolved.
The question that designers of modern mechatronic control systems need to ask
themselves is no longer simply one of:
Which high-level-language should be applied to achieve our objectives?
but rather
Should one first look at the possibility of achieving the objective with a modified
spreadsheet or database package before looking at total development through the
use of high-level languages?


The answer to the latter question is not always intuitively obvious. For all the
benefits involved in modifying an existing "engine", there are also shortcomings, most
notably, the fact that the performance of such a system may not be compatible with
real-time requirements. On the other hand, if one examines the rapid escalation of
processing power and the rapidly diminishing cost of both the processing power and
peripheral devices (memory, disk storage, etc.), one is left with the conclusion that in
the long-term, the use of software engines in a range of engineering applications will
become significant.
Another shortcoming of the software engine approach is that it does not
necessarily diminish the need for the developer (or more appropriately, modifier) to
understand the intricacies of both the operating system in which the application runs
and the programming language which can be used to adapt the original software engine.
On the contrary, those contemplating the use of software engines may well need to be
more familiar with the entire software picture than those developing applications from
first principles.


8.6 Specialised Development Systems


For many years now, computer scientists have debated about the way in which
software should be developed. The argument has not only been in regard to the type of
high level language that should be used but also about the very nature of programming.
Most high level languages are little more than one step away from the assembly
language or machine code instruction set that physically binds the Von Neumann
hardware architecture, of most modern processors, to the software that end-users would
recognise. Computer scientists have argued as to whether programming should be
undertaken at a more human level than is provided by most high level languages - in
other words, should programming be further divorced from the hardware architecture of
the modern computer?
In order to make computers appear to be more human, it is clear that a higher
level of hardware is required in order to create an environment in which the more
sophisticated (human) software can reside. The reality is that the "friendlier" the
software development/application platform, the more overheads are imposed on the
hardware. However, when we consider that the cost of hardware has been decreasing
for almost four decades, it is clear that there is scope for improving software
development techniques.
There are two techniques which have received a considerable amount of exposure
in international research journals since the 1980s. These are:

•  Artificial Intelligence (AI) languages / Expert System shells
•  Neural Network development systems.

Both of these development strategies have tried to emulate some of the human thought
processes and brain functions. However, both also suffer from the same problem, born
from the adage:
"Be careful what you wish for - you may get your wish"
Novices in the field of computing always have problems in learning programming and
wish for better development tools because they think that the computer is too complex.
However, as we have seen, the computer is not at all complex, relative to humans, and
its logic processes are incredibly simplistic. The problem in coming to terms with
computing is that the uninitiated view the computer as some form of electronic brain,
with an intelligence level similar to that of a human. Few people however, realise the
complexities and, more importantly, the inconsistencies and anomalies in the human
thought process. If one therefore wishes to have software that emulates human thought
processes then one must also be prepared to endure the anomalies and inconsistencies
of the process.


AI programming languages, such as LISP and PROLOG, have been in existence for many years and their purpose has been to develop "expert" control and advisory
systems that can come up with multiple solutions to problems. However, the
programming languages themselves are a problem. The flexibility that they allow in
program structure, variable definitions, etc., makes them difficult to debug and
impractical to maintain because of the bizarre programming syntax. Over and above
these serious shortcomings is the fact that they impose considerable overheads on even
the simplest programs. For these reasons, such languages lost credibility in the late
1980s, with many expert system developers opting for traditional languages which
provided "maintainable" software solutions to problems. As a result of the
shortcomings of the AI languages, a good deal of research and development was
applied to making expert system shells, in which "intelligent" software could readily be
developed. These too, impose considerable overheads for somewhat limited benefits
over traditional development techniques. Their performance generally makes them
unsuitable for most real-time control functions.
Expert systems are often said to "learn" as they progress. In engineering terms,
this simply means that expert systems are composed of a database with sophisticated
access and storage methods. Overall however, the difficulty with any system that
"learns" and then provides intelligent solutions to problems is whether or not we can
trust a device, with a complex thought process, to determine how we carry out some
function. As we already know, it is difficult enough to trust a control or advisory
device programmed in a simplistic and deterministic language (such as C or Fortran or
Pascal), much less one that is less predictable in the software sense.
Neural networks, on the other hand, have had a more promising start in the
programming world. The basic premise of neural networks is that it should not be
necessary to understand the mathematical complexity of a system in order to control its
behaviour. Neural networks can provide a useful "black-box" approach to control. The
network (software) has a number of inputs and outputs. The software is "trained" by
developers providing it with the required outputs for a given set of inputs. This can be
a complex task and the first "rough-cut" training may not provide the desired outcomes.
An iterative training process is used so that the system can ultimately provide a reliable
control performance. The difficulty with these systems is normally in determining and
characterising the inputs and outputs in terms of the neural network requirements.
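
A minimal sketch of this training idea is shown below in plain Pascal: a single artificial "neuron" is repeatedly shown input/output pairs - here the logical AND function - and its weights are nudged after every presentation until its outputs match the required ones. The learning rate, pattern set and epoch count are arbitrary, and real neural network packages use many interconnected neurons and more elaborate training rules.

program NeuronDemo;

const
  Patterns: array[1..4, 1..3] of Real =    { input 1, input 2, required output }
    ((0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1));
  LearningRate = 0.1;

var
  W1, W2, Bias, Sum, Output, Error: Real;
  Epoch, P: Integer;

begin
  W1 := 0.0;  W2 := 0.0;  Bias := 0.0;

  for Epoch := 1 to 200 do                 { the iterative training process }
    for P := 1 to 4 do
    begin
      Sum := W1 * Patterns[P, 1] + W2 * Patterns[P, 2] + Bias;
      if Sum > 0.5 then Output := 1.0 else Output := 0.0;
      Error := Patterns[P, 3] - Output;    { required output minus actual output }
      W1   := W1   + LearningRate * Error * Patterns[P, 1];
      W2   := W2   + LearningRate * Error * Patterns[P, 2];
      Bias := Bias + LearningRate * Error;
    end;

  for P := 1 to 4 do                       { the trained network in use }
  begin
    Sum := W1 * Patterns[P, 1] + W2 * Patterns[P, 2] + Bias;
    if Sum > 0.5 then Output := 1.0 else Output := 0.0;
    WriteLn(Patterns[P, 1]:2:0, Patterns[P, 2]:3:0, '  -> ', Output:2:0);
  end;
end.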
Neural networks have already been successfully applied in a range of different
control environments where a direct high-level-language-algorithmic approach is
difficult. Examples include hand-written-character-recognition systems for postal
sorting, high-speed spray-gun positioning control systems for road marking and so on.
In fact, any control system which is difficult to categorise mathematically or
algorithmically can be considered for neural network application. Another advantage
of most neural network software packages is that they ultimately "de-compile" to
provide Pascal or C source code that can then be efficiently applied without a
development shell.
