1 Introduction
Moi-Tin Chew
School of Engineering and Technology, Massey University, New Zealand,
e-mail: M.Chew@massey.ac.nz
Serge Demidenko
School of Engineering and Technology, Massey University, New Zealand; School of Engineering,
Monash University, Australia-Malaysia
Yoke-Fei Ng
School of Engineering, Monash University, Australia-Malaysia
S.C. Mukhopadhyay, G.S. Gupta (eds.), Smart Sensors and Sensing Technology
© Springer-Verlag Berlin Heidelberg 2008
devices. The nodes connected to the PC can send the required information to the main
supervising unit, present device data summaries, report network activities, and provide control
functionalities (thus displaying a good level of intelligence). Furthermore, each
particular node is programmed to behave independently in the absence of control
instructions arriving through the communication medium. Therefore, if no host PC or Master
is present in the network, a node still operates as an autonomous system, and
manual control is possible through the input switches.
Since each node in the network has local intelligence, the system has no central
point of failure. PowerBus nodes and the network (Fig. 2) can be easily installed and
configured by using the vendor software to facilitate the rapid system implementa-
tion [7].
2 System Hardware
The PowerBus protocol stack (Fig. 3) is a subset derived from the
OSI 7-layer model, conforming only to the physical, data link, network, transport and
application layers. These five layers are implemented and hosted
on a single PowerBus U-ChipTM IC. Since the entire protocol stack resides in
the flash memory of the U-ChipTM and can be upgraded in the field over the power
line, a PB device can easily be upgraded to the latest released firmware
at any time.
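The five-layer subset described above can be sketched as follows. This is an illustrative Python model only; the layer names come from the text, while the numbering simply follows the conventional OSI model (the vendor documentation may organize the stack differently).

```python
from enum import IntEnum

class PBLayer(IntEnum):
    """OSI layers conformed to by the PowerBus stack (numbering per the
    standard OSI model; session and presentation layers are omitted)."""
    PHYSICAL = 1
    DATA_LINK = 2
    NETWORK = 3
    TRANSPORT = 4
    APPLICATION = 7

# The subset hosted on the U-Chip IC, per the text.
CONFORMED_LAYERS = [PBLayer.PHYSICAL, PBLayer.DATA_LINK, PBLayer.NETWORK,
                    PBLayer.TRANSPORT, PBLayer.APPLICATION]

# The two OSI layers the PowerBus stack does not implement.
OMITTED_LAYERS = [5, 6]  # session, presentation
```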
PowerBus also embeds security features (such as 80-bit encryption keys) and
can access the Internet through the OPC or OSGi drivers since every chip has a
unique, worldwide MAC address, which is useful for addressing PB nodes over the
Internet [7].
product) ensure the required robust communication in the harsh power line
environments [13]. In order to increase resistance to disturbances and interruptions
inherent to electrical installations, PB employs a proprietary narrowband single-carrier
modulation scheme, RYSK (Reduced Yaw Shift Keying), enabling reliable
data transmission [14].
A PB network, constituted by a set of intelligent nodes, is responsible for the
coordination, control and management of the functions of PB-enabled appliances. It also
provides facilities for the appliances and system components to exchange data messages
and functional requests. The PB protocol supports either master-slave
or peer-to-peer network topologies.
The presented system employs the master-slave topology, as it is aimed at top-down
and bottom-up communications (whereas the peer-to-peer topology is
used for multidirectional communications between all devices of the network) [7].
In the implemented network architecture, the slaves (PowerCardTM Modules) are
the only agents that access the electrical appliances directly and execute the control
operations locally. For remote control, the slaves broadcast their available services
and wait for a service request message to be generated by the master
in the PB network. When they receive a service request, they execute a list
of actions to achieve the required task [10]. In general, in the master-slave
configuration a particular slave is only concerned with the events in its immediate
locale. Therefore a slave is rarely required to communicate with the neighbouring PB
nodes. This greatly reduces the required bandwidth of the communication system.
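The advertise/request interaction above can be illustrated with a minimal sketch. All class and method names here are hypothetical stand-ins, not the vendor's API; the point is simply that slaves act only on master-issued service requests and never talk to each other.

```python
class Slave:
    """Hypothetical model of a PowerCard slave: advertises its services and
    executes local actions only upon a service request from the master."""
    def __init__(self, node_id, services):
        self.node_id = node_id
        self.services = services  # service name -> list of local actions

    def advertise(self):
        # Broadcast of available services, awaited by the master.
        return {"node": self.node_id, "services": list(self.services)}

    def handle_request(self, service):
        # A slave executes its action list only for a known service.
        if service not in self.services:
            return None
        return [f"execute:{action}" for action in self.services[service]]


class Master:
    """Hypothetical master: registers slave advertisements and routes service
    requests, so slaves never need to communicate with one another."""
    def __init__(self):
        self.registry = {}

    def register(self, advert):
        self.registry[advert["node"]] = advert["services"]

    def request(self, slaves, node_id, service):
        for slave in slaves:
            if slave.node_id == node_id:
                return slave.handle_request(service)
        return None
```

A usage pass: the master registers a slave's advertisement, then issues a request, and the slave answers with its local action list.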
The master (PowerGateTM Modem) is responsible for coordinating the network,
maintaining registered information on all the slaves, providing the remote control function,
and initiating and routing messages between the networked nodes [15]. This reduces the level of
encapsulation of the networked nodes, since the master organizes and schedules
the communications with the other nodes.
3 Software Environment
framework, through which the local PB devices may communicate with, and control,
objects linked to other remote PB devices. The V-Logic functionality can only
be implemented through a PowerCardTM module or any slave device with an embedded
U-541 IC.
V-Logic uses a particular syntax whereby each I/O object or control parameter
of a PB device has a set of properties associated with it, known as a Virtual Variable
or V2. These V2s can represent data in a variety of formats, such as Boolean,
32-bit floating point, unsigned char, signed integer and the like. The named condition
or the enumerated representation of the V2 completely defines the I/O object
in question, and V2s can be Read-Only, Write-Only or Read/Write. A PB device
is implemented with such a collection of V2s, and it is operated upon by the V-Logic
scenarios. In essence, V2s are the logical representation of the underlying physical
processes, which the control network nodes drive or measure. A virtual variable
lifts low-level device data into a high-level, meaningful descriptive context
describing the network environment in the V-Logic script. If an input and an output
V2 are defined for a specific application, they can be bound together
in the V-Logic script and linked to represent a variety of PB devices. This
is achieved through a bi-directional mapping of V2 entities, made possible by the
object-oriented methodology inherent in the V-Logic specification [7, 9].
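A minimal sketch of a V2 as a typed, access-controlled view of one I/O object, with the binding of an input V2 to an output V2, might look as follows. The field and function names are illustrative assumptions, not the vendor's V-Logic syntax.

```python
from dataclasses import dataclass
from enum import Enum

class Access(Enum):
    """The three access modes named in the text."""
    READ_ONLY = "R"
    WRITE_ONLY = "W"
    READ_WRITE = "RW"

@dataclass
class VirtualVariable:
    """Sketch of a V2: one typed datum of an appliance, plus its access mode."""
    name: str
    dtype: str              # e.g. "bool", "float32", "uint8", "int16"
    access: Access
    value: object = None

    def read(self):
        if self.access is Access.WRITE_ONLY:
            raise PermissionError(f"{self.name} is write-only")
        return self.value

    def write(self, value):
        if self.access is Access.READ_ONLY:
            raise PermissionError(f"{self.name} is read-only")
        self.value = value

def bind(input_v2, output_v2):
    """Mapping step of a V-Logic-style script: propagate an input V2's value
    to an output V2 (one direction of the bi-directional mapping)."""
    output_v2.write(input_v2.read())
```

For example, binding a read-only switch V2 to a read/write relay V2 copies the switch state across, while an attempt to write the switch raises an error.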
Each PB node or PB device can be associated with one or more V2s. Each V2
represents a single datum of the appliance. It can correspond to an aspect of the
physical world, such as the state of a switch, or be of a more abstract nature, such
as a preset temperature set point. In addition, the V2s are required to be specified
and mapped to the I/O pins of the U-ChipTM. The discovery and interrogation
of all networked PB nodes, to determine the V2 parameters and statuses of each
PB device in the network, are achievable through the PowerBus Installation
Tool. The tool offers a simple platform for monitoring and controlling V2s. In
terms of program interaction with the users, however, it is rather complicated and
multifarious.
To monitor the PowerBus network, a PB device should be viewed as a node that
can receive control orders from a remote PC via the power line medium, as well as
transmit the localized parametric data and device personalities to its locally linked
PC via a serial interface.
The PowerBus Messenger is a human-machine interface application based on the
PB Application Programming Interface (API) communication schemes (Fig. 7). It is
the host software tool that executes on a WindowsTM-based PC for real-time display
of device parameters and monitoring of remote PB devices. This
tailor-made PB API application, extended with definition parsers and a user
interface generator, becomes the terminal entry point into a PB network, through which
the enumerated service request messages can be input and output via
the connected PB device. It can automatically generate user interfaces according
to the type of the discovered PB devices, as well as the information provided by
the target devices. The software aims at supporting connection and instant access
to various networked PB devices. It also offers a responsive user-to-user messaging
method. It employs the GUI blueprint of the well-known instant messaging
software Microsoft MSN Messenger as an implementation platform [16, 17].
The main objective of the PowerBus Messenger software development is to provide
easy, simplified access to the high-level functionality of the PB technology,
while shielding PB users from the complexities of the low-layer protocols.
The PB API is part of the application-layer functional units; it provides access to the
PB Device Properties, reads or writes V2s, and subscribes to networked PB Events.
Using the PB API, all network activities can easily be monitored and communication
statistics accessed. It also enables obtaining an entire view of the network, including
the detailed properties of each PB device, as well as sending and/or receiving raw data
packets. A PB API message, a string of bytes with a maximum data length of
250 bytes, consists of a Primitive Type Byte followed by other informational bytes.
It carries instructions that the PowerBus Messenger application attempts
to match against its categorized lists of service types and its procedural-response
interpretation databases. If matched, the selected procedures are executed, and the API
Primitive Arguments Bytes that follow the Primitive Type Byte provide the
information necessary to proceed with the execution. The destination of an API
message can be a PB node or a group of PB nodes, identified by the Destination/Source
Address Byte assembled in it [9].
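The message structure just described can be sketched as a small build/parse pair. The exact byte layout (type byte first, then the address byte, then the argument bytes) is an assumption for illustration; the 250-byte limit is from the text.

```python
MAX_DATA_LEN = 250  # maximum data length of a PB API message, per the text

def build_api_message(primitive_type, dest_addr, args=b""):
    """Assemble a PB API message as a byte string: a Primitive Type Byte,
    a Destination/Source Address Byte, then the Primitive Arguments Bytes.
    (The field order is an assumption, not the vendor's wire format.)"""
    if len(args) + 2 > MAX_DATA_LEN:
        raise ValueError("message exceeds the 250-byte limit")
    return bytes([primitive_type, dest_addr]) + args

def parse_api_message(msg):
    """Split a message built above back into its three fields."""
    return {"primitive_type": msg[0], "dest_addr": msg[1], "args": msg[2:]}
```

On the receiving side, the Messenger would match `primitive_type` against its list of service types, then consume `args` during execution.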
For the messenger to perform and maintain networked operation appropriately,
a minimum of one terminal must exist on the system at all times. This
terminal (Logic Network Manager) is associated with the Master PowerGateTM Modem.
The Logic Network Manager is a dedicated terminal that functions as the Network
Administrator: it provides a synchronous API flow between all of the PB nodes,
categorizes PB devices and PB API services, and executes compulsory mechanisms to
retrieve all the information of its master-slave network, such as the PB node address,
subnet address, device list, URL, look-up table, firmware version and the like, as all
this information is crucial for PowerBus Messenger users.
The PB software routines can be subdivided into six major parts, specifically:
System Initialization, API Data Acquisition, API Data Analysis Process, API Data
Storage, API Results Display and API Data Read/Write. When the PowerBus Messenger
starts to execute, it sends various PB API instructions to initialize the
parameters of the PB nodes. It then directs the PB nodes to sample the status of their
U-ChipTM-integrated appliances through the V2 values. The PB nodes transmit
the resulting information to the PC via the RS232 port, or broadcast it through
the power line medium to other remote PC-based PowerBus Messenger users. On the
PC side, the information is stored in a Microsoft Access database or in the messenger
program's static array variables, to be queried during subsequent execution.
A graphical user interface is then dynamically generated according to the acquired
service descriptions embedded in the target PB devices. The users can then
control PB devices plugged into the power line and get status feedback [17, 18].
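The six-part flow can be outlined as a single cycle. Every helper below is a deliberately simplified stand-in (the real routines talk to the U-Chip over RS232 or the power line); only the stage names and their order come from the text.

```python
def run_messenger_cycle(nodes, database):
    """One pass through the six stages named in the text, using in-memory
    stand-ins for the real node I/O."""
    initialize(nodes)                      # 1. System Initialization
    raw = [acquire(n) for n in nodes]      # 2. API Data Acquisition
    results = [analyze(r) for r in raw]    # 3. API Data Analysis Process
    database.extend(results)               # 4. API Data Storage
    display(results)                       # 5. API Results Display
    return results                         # 6. API Data Read/Write (on demand)

def initialize(nodes):
    # Stand-in for the PB API instructions that initialize node parameters.
    for n in nodes:
        n["initialized"] = True

def acquire(node):
    # Stand-in for sampling a node's appliance status via its V2 values.
    return {"node": node["id"], "v2": node.get("v2", {})}

def analyze(raw):
    # Stand-in for matching the acquired data against the service databases.
    return {"node": raw["node"], "status": "ok" if raw["v2"] else "empty"}

def display(results):
    # Stand-in for the dynamically generated GUI.
    for r in results:
        print(f"node {r['node']}: {r['status']}")
```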
Initially, PowerBus Messenger checks the Device Properties (Fig. 9) of the
locally attached PB device (master or slave) via the RS232 port. The information
on the PowerBus Protocol Layered Structures programmed into the local embedded
U-ChipTM IC is retrieved by executing different GET and SET PB API Request
Instructions to read the complete Device Property Tables sequentially. Next, the
gathered information is saved in the Static Array Variables, which serve as an
intermediate program database, to be used for configuration purposes.
It should be noted that for the master-slave topology, PB API messages are only
transmitted on the PB network when requested by the master [9]. Slaves are never
self-triggered to transmit an API message without first receiving an acknowledged
request from their master.
To cater for the differences between the master and slave operational properties,
upon completing the analysis of the aforesaid Device Property Tables, the PowerBus
Messenger is able to identify the role of the local PC-connected PB device. Consequently,
it then triggers different sets of program subroutines for the master and
slave, respectively, to carry out the appropriate functions. Without a master present
in the PB network, a slave can merely control its locally connected external
appliances through the PowerBus Messenger; communication with other
PB nodes or slaves is not achievable by any means. Thus, the instant messaging
feature is disabled when the master is absent from the network. For the master, the
management and diagnosis services, organized entirely by the PB Messenger software
routines, include querying the content types of the V2s and the PB node
statuses, querying and updating addressing information, PB device identification,
and configuring routers and the like.

[Flowchart: user login; retrieval of the master device properties; retrieval of the
slave device properties and V2 data; GUI activation; broadcast of the acquired
network information to the slaves; handling of incoming and outgoing service
requests (GET, SET, user-defined)]
All the devices in the PB network share a common power line communication
channel. Therefore identical Logic Network (04h) and Subnet (03h) addresses are
allocated to them. For the PowerBus protocol to propagate the PB API messages to
the exact destination node, it needs the PB Node Address (02h) of the destination node.
This address is stored in one of the 56 Device Property Table entries onboard the
master U531-ChipTM, namely the Look-Up Table (7Ah), and is extracted in the
initialization process described earlier for the master.
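The property IDs quoted in the text can be collected into a small lookup sketch. The hexadecimal IDs (04h, 03h, 02h, 7Ah, and the B2h/12h IDs mentioned below) are from the text; the table class itself is a hypothetical stand-in for the on-chip Device Property Table.

```python
# Property IDs quoted in the text.
LOGIC_NETWORK = 0x04   # Logic Network address (shared by all devices)
SUBNET        = 0x03   # Subnet address (shared by all devices)
NODE_ADDRESS  = 0x02   # PB Node Address of a destination node
LOOKUP_TABLE  = 0x7A   # Look-Up Table entry on the master U531-Chip
VISIBLE_ID    = 0xB2   # unique 6-byte visible ID
URL           = 0x12   # URL / remote-database pointer

class DevicePropertyTable:
    """Minimal stand-in for one of the 56-entry Device Property Tables."""
    def __init__(self):
        self.entries = {}

    def set(self, prop_id, value):
        self.entries[prop_id] = value

    def get(self, prop_id):
        return self.entries.get(prop_id)

def destination_node_address(master_table):
    """Routing an API message requires the destination node address,
    read here from the master's Look-Up Table entry (7Ah)."""
    return master_table.get(LOOKUP_TABLE)
```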
The dynamic network configuration process, executed by the master in accordance
with the PB API instructions issued by the PowerBus Messenger for the address
acquisition and distribution service, discovers and validates the other
PB nodes in the network. PowerBus Messenger constructs iterative API
instructions according to the number of previously acquired addresses (addresses of
slave nodes) and broadcasts them on the medium via the master. This causes the other
slave PB nodes to transmit feedback response messages back to the master, containing
their unique 6-byte visible ID (B2h) and their corresponding URL (12h) information
(used to store the names of PB users who are signed in and presently running
the PowerBus Messenger, and used as a pointer/location identifier into
a remote database). The entire group of PB nodes on the power
line medium senses the API message at about the same time, and each PB node reads the
Destination Address Byte contained in the acquired PB API message. Only the addressed
node decodes it, reads the message contents and carries out the instruction
appropriately, or else passes it up to its host's PowerBus Messenger directly without
parsing the PB API message. All PB API messages broadcast onto the network
are encrypted to ensure security.
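The discovery iteration and destination-address filtering above can be sketched as follows. The node dictionaries and function names are illustrative assumptions; the behaviour modelled (all nodes sense the broadcast, only the addressed node decodes and responds with its visible ID and URL) is from the text.

```python
def broadcast(nodes, message):
    """Deliver an API message to every node on the shared power-line medium.
    Only the node whose address matches the Destination Address Byte decodes
    it; all other nodes ignore the message without parsing it."""
    responses = []
    for node in nodes:
        if node["address"] == message["dest"]:
            # The addressed node answers with its visible ID and URL info.
            responses.append({"visible_id": node["visible_id"],
                              "url": node["url"]})
    return responses

def discover(candidate_addresses, nodes):
    """Iterate over candidate addresses (as the Messenger does via the master)
    and collect responses to build the address table."""
    table = {}
    for addr in candidate_addresses:
        for resp in broadcast(nodes, {"dest": addr}):
            table[addr] = resp
    return table
```

Addresses with no live node behind them simply produce no entry, which is how the Messenger validates which PB nodes are actually present.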
Upon the arrival of the response messages originating from the other accessible PB
nodes present in the network, the PowerBus Messenger iteratively updates its
programmed address table entries corresponding to each discovered node.
After updating the address table entries, PB Messenger ensures that the subsequent
slave device properties, device personalities and V2 profile entities are
retrieved properly from each discovered node, so as to make their V2s available to other
PB nodes through broadcasts as well. Broadcasts need to be carried out by the
master to support functions such as device or service discovery, as well as network
synchronization. The V2 information is then collected and transformed into an enumerated
representation suitable for system processing. The V2 information is retrieved
continuously to reduce dependence on an external, possibly inaccurate or out-of-date
database; such inaccuracy builds up over prolonged program execution,
since the V2 values may not remain constant. Any discovered slaves/PB
nodes are then integrated as part of the PowerBus Messenger networking system
without further delay [19, 20].
Once the configuration processes are completed, the GUI elements are generated
and activated to display the acquired network information. The PB Messenger GUI
presents the PB network in a manner resembling the WindowsTM Resource Explorer. The
messenger displays and manages the data items/properties in a tree view (Fig. 10). Data
items are classified as PB nodes and their controllable V2s. The GUI shows
the PB Messenger user only the list of discovered PB nodes registered during the
configuration query process. The GUI representation is as simple as possible, yet
powerful enough to describe the essential features of a discovered PB device and
to interact with it.
Data Messaging and Control in Sensor-Actuator Networks 41
Fig. 10 Users' sign-in form and the main form of the PowerBus Messenger
PB Messenger users can make use of the accessible textual messaging feature
to deliver short text notes to other available remote PB users (Fig. 11). The maximum
length of ASCII text that can be delivered at once is 250 characters/bytes
(without overhead). Messages of up to 32 bytes are contained in a single PowerBus
packet. Messages longer than 32 bytes are divided into multiple PB packets using
the segmentation service of the application layer, which supports up to 250 bytes. A
PowerBus message can theoretically be as long as 1.8 Mbytes.
Users have full access to the individual V2s of each networked PB device. In
addition, when the Direct-Network Control function is enabled (manual override),
PB users can change the state of certain V2s (Read/Write) or monitor the
values of other V2s (Read-Only). Controlling the V2s via the PB Messenger invokes
hardware-level responses, and the physical PB devices, in response to the
generated PB API instructions, carry out the tasks accordingly. If the PB Messenger
user disables the Direct-Network Control function, the slave can carry on with
its pre-programmed V-Logic scripts and act as a stand-alone PB device to execute
the developed scenarios [14].
6 Structure of PB Messenger
way, as well as to control the prioritized execution of these tasks. Examples of such
tasks include timer events that activate pre-defined duties on expiration, run-time
library modules of function calls that carry out event checking, managing program
input/output activities, sending and receiving API messages across the network,
controlling miscellaneous functions of the U-ChipTM, and so forth. Table 1 summarizes
the major program routines employed in PowerBus Messenger.
A polling method is used by PowerBus Messenger to retrieve the required data from
the PB network and to interrogate each PB node about its status, remotely or locally.
Although polling has the benefit that the address table entries of the
nodes on the PB network need not be modified frequently, every PB
node has to be polled individually (requests and responses), causing a considerable
amount of unnecessary network traffic and driving the PB Messenger into an
exhaustive execution stage. Therefore, an extended delay interval between successive
polling request instructions is adopted to relieve the traffic and to allow the program
sufficient time to deal with other events.
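The delayed-polling loop can be sketched as follows. The function signature and the delay value are illustrative assumptions; the pattern (poll each node individually, pause between requests) is from the text.

```python
import time

def poll_network(nodes, query, delay=0.05):
    """Poll every node individually, inserting a delay between requests so
    the program has time to service other events and network traffic is
    relieved (the delay value is illustrative, not from the vendor spec)."""
    results = {}
    for node in nodes:
        results[node] = query(node)  # one request/response exchange
        time.sleep(delay)            # extended interval between polls
    return results
```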
Despite the robustness of the PB technology, potential errors can still impact
the network performance. Such errors can generate increasing network traffic,
incomplete packet transmissions, or even excessive usage of the IACK (interrupt
acknowledge) call-backs. Insufficient delay time between API packet transmissions
can further aggravate the aforesaid errors. Therefore, careful diagnostics of
network performance is required to avoid overload of the network channels, prolonged
transaction times and repetitive unstable processes [21, 22].
To achieve this objective, each critical communication error detected by a PB
device, or distinguished by the PowerBus Messenger, prompts a notification to the PB
users through different types of alert dialog boxes. The time-stamped detailed error
descriptions are logged for further diagnosis. In this way, the potential bottlenecks
and instabilities in the PB network can be identified. It also allows a detailed
investigation of abnormal system behaviour. As a result, faults can
be detected, localized and corrected at an early stage with minimum effort.
This is particularly important when access to the PB nodes is offered through a
far-off remote connection.
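The time-stamped error log can be sketched as below. The entry fields and helper names are hypothetical; what matters is that every critical error is stamped and retrievable per device for later diagnosis, as the text describes.

```python
import time

def log_error(log, source, description):
    """Record a time-stamped error entry, as the Messenger does for each
    critical communication error, so it can be diagnosed later."""
    entry = {"time": time.time(), "source": source,
             "description": description}
    log.append(entry)
    return entry

def errors_from(log, source):
    """Filter the log when investigating abnormal behaviour of one device,
    e.g. to locate a bottleneck behind a remote connection."""
    return [e for e in log if e["source"] == source]
```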
7 Conclusion
This paper presents the implementation of a simple power line control and
communication system that satisfies the fundamental dependability requirements. The system
is built on the PowerBus technology, enabling implementation of large-scale networks.
It provides a good practical balance of flexibility, transparency and
expandability. The system can employ different physical media and network topologies
supported by the PowerBus standard. In addition to the standard control
functions developed earlier for the PowerBus technology, the system is capable of
providing such functions as text messaging, file transfer, automated meter reading,
audio control network applications and a number of others.
References