



Journal of Visual Languages and Computing 17 (2006) 584–605

Modeling real-time communication systems: Practices and experiences in Motorola

Michael Jiang, Michael Groble, Andrij Neczwid, Allan Willey
Software and System Engineering Research Laboratory, Motorola, Inc., 1303 E. Algonquin Rd., ANX2B2, Schaumburg, IL 60196, USA

Abstract

Visual modeling languages and techniques have been increasingly adopted for software specification, design, development, and testing. With the major improvements of UML 2.0 and tool support, visual modeling technologies have significant potential for simplifying design, facilitating collaboration, and reducing development cost. In this paper, we describe our practices and experiences of applying visual modeling techniques to the design and development of real-time wireless communication systems within Motorola. A model-driven engineering approach that integrates visual modeling with development and validation is described. Results, issues, and our viewpoints are also discussed.
© 2006 Elsevier Ltd. All rights reserved.
Keywords: UML modeling; SDL modeling; MDE code generation; Model validation; Real-time communication systems; TTCN; Structured methods

1. Introduction

Visual modeling languages and techniques use graphical notations for the specification, description, development, and validation of software systems. It is widely believed that visual modeling helps to raise the level of design abstraction. Communication and collaboration are facilitated through the visualization of system structures and behaviors. With formal definitions of modeling constructs, visual modeling also has the potential to
Corresponding author. Tel.: +1 847 576 1918; fax: +1 847 576 3280.

E-mail address: Michael.jiang@Motorola.com (M. Jiang).
1045-926X/$ - see front matter © 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.jvlc.2006.10.003


precisely define the model of a software system, validate the model, and generate executable code for the software system. Various visual modeling techniques have been used for software specification and development. Entity Relationship diagrams are commonly used to model database systems [1]. Specification and Description Language (SDL) is a visual modeling language defined by the ITU for the modeling of telecommunication systems. Real-Time Object-Oriented Modeling (ROOM) is another modeling language widely used in the telecommunications industry to develop real-time software [2]. Domain-specific visual modeling languages are also used for describing the concepts, structures, and behaviors of applications in well-defined domains [3]. The most widely used visual modeling language is the Unified Modeling Language (UML) standardized by the Object Management Group (OMG).
Almost 20 years ago, visual modeling techniques were adopted for product development within Motorola in an attempt to reduce development costs, improve product quality, and manage the increasing complexity of software systems. The first attempt was the adoption of a requirements specification model based on Structured Analysis (SA) and Structured Design (SD), in which system requirements are formally represented through an integrated visual requirements specification model [4]. Subsequently, we applied visual modeling technologies to the architecture, design, development, and testing phases of the development life cycle. SDL has been used extensively for the development of complex real-time wireless communication systems for the network infrastructure businesses. SDL is a standard language maintained by the ITU-T (International Telecommunication Union-Telecommunication Standardization Sector) to specify and describe the structures, behaviors, and data of systems, particularly real-time communicating systems [5].
With extensions to the SDL modeling language, and tools to support model design and development with these extensions, SDL models can be validated and production code can be generated from SDL specifications for real-time communication systems. Recently, UML version 2.0 [6] has been increasingly adopted by various businesses in Motorola. Migrations to UML 2.0 of models built with previous visual modeling techniques have been investigated to minimize rework and development disruption. Technical training and process guidelines are in place for the adoption of UML 2.0 across the corporation.
From these uses of visual modeling languages, we have been able to observe which practices lead to successful product development. The main characteristic of the successful approaches is that they use an integrated model-based code generation approach for product development. We describe a model-driven engineering (MDE) approach that integrates visual modeling technologies across the product development life cycle, including requirements modeling, product modeling, simulation, product code generation from models, test generation, and target testing [7]. Considerable effort and investment has been put into the development and integration of tools (both internal and external) that can be applied to different phases of the life cycle.
This paper summarizes our experiences in applying visual modeling techniques to the development of wireless real-time communication systems. Sections 2–4 describe our experiences with Structured Methods, SDL, and UML, respectively. Strengths, weaknesses, and internal enhancements to the various modeling techniques will also


be described. Lessons learned from the applications of these visual modeling techniques will also be discussed. Section 5 describes the visual modeling approach practiced in Motorola that integrates across the full life cycle. Section 6 gives a summary of other related approaches to integrated modeling. We conclude this paper with results and viewpoints on visual modeling techniques for complex real-time communication systems.

2. Modeling and simulation with Structured Methods

Our first visual modeling effort was the adoption of Structured Methods, in which system requirements are formally represented through an integrated requirements specification model. The main objective was to improve product quality and time to market through rigorous requirements specification and validation. The requirements specification model defined in Structured Methods represents what has to be designed, not how to design it [8]. The key components of a requirements model are the data flow diagram (DFD) and the control flow diagram (CFD). These diagrams can be decomposed progressively to represent lower-level details of the system, subsystems, or system components, with the process specification (PSPEC) as the lowest level of the hierarchical decomposition. A DFD partitions functional requirements into process functions and represents the system as a network of processes interconnected by input and output data flows. A process transforms its input data flows into output data flows. The data context diagram (DCD) and the control context diagram (CCD) represent the environment external to the system through which interactions with the system take place. Control specifications describe the status of processes and indicate which processes are enabled or disabled, created and destroyed, during the course of data processing. They are represented as finite state machines, usually in tabular format. An advantage of the tabular format over the more traditional graphical state-and-transition format is that it is easier to verify full coverage of all possible inputs across all possible states of the state machine. This is an example where the proper choice of visual representation adds value to the development environment. Process specifications are functional primitives that are not specified further in the structured methodology.
These functional primitives are often expressed as textual descriptions, mathematical equations, flow charts, tables, etc., that provide the necessary descriptive detail for the design and implementation of the processes. Structured Methods was well received by Motorola engineers, especially in the earlier stages of system analysis. Its visual paradigm provides a high-level view of the flow of data and control throughout a system; this type of overview of system functionality is difficult to extract from plain textual descriptions or programs. As shown in Fig. 1, the visualization of operations, data flows, and data storage simplifies the understanding and communication of payment functions and requirements for a sample vending machine. The hierarchical decomposition of data flow and control flow diagrams also provides a well-defined way to model complex systems. One of the key drawbacks of Structured Methods is the inability to formally validate the requirements specification. The informal process specifications make it infeasible to perform analysis and validation on the requirements model.
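The coverage advantage of the tabular state-machine format can be illustrated with a small sketch. The states, inputs, and process names below are hypothetical examples loosely inspired by the vending-machine model, not taken from any Motorola specification:

```python
# A control specification as a transition table, in the tabular FSM style
# described above. All state and input names here are illustrative only.
STATES = ["Idle", "Accepting", "Dispensing"]
INPUTS = ["coin_inserted", "select_item", "item_dispensed"]

# (state, input) -> (next_state, process to enable, or None)
TABLE = {
    ("Idle", "coin_inserted"):       ("Accepting",  "Add_Payment"),
    ("Idle", "select_item"):         ("Idle",       None),
    ("Idle", "item_dispensed"):      ("Idle",       None),
    ("Accepting", "coin_inserted"):  ("Accepting",  "Add_Payment"),
    ("Accepting", "select_item"):    ("Dispensing", "Return_Payment"),
    ("Accepting", "item_dispensed"): ("Idle",       None),
    ("Dispensing", "coin_inserted"): ("Dispensing", None),
    ("Dispensing", "select_item"):   ("Dispensing", None),
    ("Dispensing", "item_dispensed"):("Idle",       "Clear_Control"),
}

def missing_entries(table, states, inputs):
    """Unlike a graphical diagram, the tabular form lets full input
    coverage be checked mechanically: any (state, input) pair absent
    from the table is an unhandled case."""
    return [(s, i) for s in states for i in inputs if (s, i) not in table]

assert missing_entries(TABLE, STATES, INPUTS) == []
```

Deleting any row from `TABLE` makes `missing_entries` report the gap, which is exactly the review task that sprawling graphical diagrams make tedious.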


[Figure omitted: a data flow diagram in which processes such as Add Payment To Change, Clear Control, and Return Payment exchange the Coin_Counts, Payment_Coins, Coins_cleared, and Returned_Coins data flows.]
Fig. 1. A DFD modeling payment for a vending machine.

2.1. Formal process specification

To formalize the specification of processes for model validation, we designed a proprietary process specification language to replace the informal process specification defined in Structured Methods. The ExSpec (Executable Specifications) system, a CASE tool that supports requirements specification and execution of the requirements model based on Structured Methods, was developed internally to support requirements analysis and validation. In ExSpec, the informal PSPEC process specification is replaced by formal constructs that operate on input and output data and control flows, such as sending information between processes using an output data flow specified in higher-level DFD diagrams. Both data flows and control flows among processes can be manipulated and simulated with the formal specification of PSPECs. Similar constructs are provided for operations on data stores. Control statements enable modelers to specify relatively high-level algorithms, such as loops and conditions, that act upon data from input data flows and information stored in data stores defined in the requirements model. The main purpose of these extensions is to define what functions the PSPEC processes perform without describing how they are implemented. A small set of data types and their operations is defined for PSPECs so engineers can focus on requirements specification and validation instead of low-level algorithm development. This limited number of data types and operations should nevertheless be sufficient for the simulation of the requirements model, and indeed our experience with model simulation showed that only basic data types are required to support requirements validation.

2.2. Requirements validation

With the formalization of process specifications, the requirements model can be described using formal semantics. ExSpec was developed to support the specification and execution of a requirements model based on the extended Structured Methods. An environment external to the requirements model was created to simulate the requirements model using input and output data and control flows between the environment and the system. The requirements model is treated as a black box in the ExSpec simulation environment. The execution of the requirements model is driven by the data and control flows into the simulation model, and the validation of requirements and external behaviors is performed by analyzing the data flows coming out of the simulation model. The simulator also permits the inspection of processes and process status. The scenarios arising from the interactions of processes for any given input stimuli can be verified to ensure that the behaviors of the system meet the functional requirements. Non-functional requirements, however, are not validated by the ExSpec system.

2.3. Lessons learned

The use of ExSpec for requirements specification, analysis, and validation in various Motorola product groups resulted in significant improvements in both product quality and time to market. The visual aspect of Structured Methods allows analysts to easily understand the functional allocation and interaction among various subsystems; it provides a graphical view of the software architecture. The hierarchical decomposition of the data flow and control flow diagrams helps manage system complexity. The formal extensions to process specifications, along with tool support, allow validation of the system requirements model.
By modeling and validating system requirements up front, product teams were able to reduce or eliminate ambiguities in system requirements. Product defects due to misinterpretation of product requirements were also reduced, avoiding much of the unnecessary downstream rework that is costly and time-consuming. Through the rollout of ExSpec for requirements specification and validation, we learned that further productivity improvement could be achieved by integrating the formal requirements specification of Structured Methods with downstream development activities. Any automation of the transition of the requirements model, in part or as a whole, to lower-level design and implementation will potentially accelerate product design, development, and testing. Inconsistencies among development phases can also be reduced by automating the transitions between phases. Structured Methods, however, lacks the formal specification of data and algorithm components needed to support integration with downstream phases. Although extensions to Structured Methods can ease parts of the transition, it is difficult to transform the high-level requirements specification into lower-level designs for complex systems. Consequently, Structured Methods was abandoned in favor of SDL, as discussed in the next section.
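The black-box simulation style described in Section 2.2 can be sketched as follows. This is not the ExSpec tool itself; the model, flow names, and behavior are hypothetical stand-ins for an executable requirements model driven by its environment:

```python
from collections import deque

# Sketch of black-box requirements validation in the style of ExSpec:
# the environment feeds input data flows into the model and validates
# only the observable output flows. All names here are illustrative.
class RequirementsModel:
    """A toy requirements model: accumulates a payment total from coins."""
    def __init__(self):
        self.total = 0
        self.out = deque()            # output data flows to the environment

    def consume(self, flow, value):   # an input data flow arrives
        if flow == "Coin":
            self.total += value
            self.out.append(("Payment_Total", self.total))
        elif flow == "Clear":
            self.total = 0
            self.out.append(("Coins_Cleared", True))

def simulate(model, stimuli):
    """Drive the black box with a stream of stimuli; collect its outputs."""
    for flow, value in stimuli:
        model.consume(flow, value)
    return list(model.out)

outputs = simulate(RequirementsModel(),
                   [("Coin", 25), ("Coin", 10), ("Clear", None)])
assert outputs == [("Payment_Total", 25), ("Payment_Total", 35),
                   ("Coins_Cleared", True)]
```

The environment sees only the stimulus and response flows, which is what lets external behavior be validated without reference to the model's internals.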


3. Modeling, simulation, and code generation with SDL

We started to investigate the use of SDL in the early 1990s when we participated in the signaling standards work for the development of Terrestrial Trunked Radio (TETRA) protocols. At that time the standards committee had decided to formally capture the externally viewed protocol scenarios using Message Sequence Charts (MSCs) [9]. These MSC-based scenarios would form the basis for a validation suite, which could be used to determine whether any TETRA implementation was behaving correctly. In addition, an abstract functional description showing a possible TETRA interface implementation was created using SDL and also supplied as part of the TETRA standard. Our investigation showed that SDL and MSC allowed a more formal and precise description of models than Structured Methods. MSCs provide a formal and intuitive visual representation of interactions between objects in the system, which makes them particularly useful for capturing and reviewing system requirements. The formal and precise description of system model behavior written in SDL makes it simpler to transition to lower-level development and to generate production code from system models. The SDL specification defines a system model based on the concept of communicating extended finite state machines [10]. SDL provides constructs that facilitate the specification of large and complex systems through partitioning and hierarchical decomposition. SDL is also a formal object-oriented language, allowing a modularized and encapsulated design and implementation using the concepts of types, instances, and specialization. Some of the key components are systems, blocks, processes, and procedures. A system consists of interrelated building blocks, which can be hierarchically decomposed into lower-level blocks or processes. Processes communicate with each other by exchanging signals through established routes and channels.
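SDL's execution model of communicating extended finite state machines can be sketched in a few lines. The connection protocol below is hypothetical and far simpler than real SDL semantics (no timers, no save/deferral), but it shows the essential shape: each process owns its state and input queue, and processes interact only by sending signals:

```python
from collections import deque

class Process:
    """An SDL-style process: a state machine with its own input queue.
    Signal and state names here are illustrative, not from a real model."""
    def __init__(self, name):
        self.name = name
        self.state = "idle"
        self.queue = deque()   # signals delivered over a channel

    def send(self, target, signal):
        target.queue.append((signal, self))

    def step(self):
        """Consume one queued signal and fire the matching transition."""
        if not self.queue:
            return
        signal, sender = self.queue.popleft()
        if self.state == "idle" and signal == "CONNECT":
            self.state = "connected"
            self.send(sender, "CONNECT_ACK")   # reply over the channel
        elif self.state == "idle" and signal == "CONNECT_ACK":
            self.state = "connected"

a, b = Process("caller"), Process("callee")
a.send(b, "CONNECT")   # signal travels over the a -> b channel
b.step()               # callee transitions and acknowledges
a.step()               # caller consumes CONNECT_ACK
assert a.state == b.state == "connected"
```

The asynchronous queue per process is what makes SDL designs directly executable and simulatable, the property the rest of this section builds on.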
The SDL standard also contains data definitions that include predefined data types and allows user-defined data types. SDL-92, the first version we adopted for visual modeling, has limited support for data types and little support for algorithm specification. Motorola augmented this SDL standard with extensions to data types and algorithm constructs. An extended SDL compiler and runtime system supporting this augmented SDL were developed in Motorola for the simulation and validation of SDL specification models. Auto-code generators for SDL models were also developed in Motorola to generate low-level code from SDL specifications. Automated test generation tools have also been developed to support the simulation of SDL models and the testing of target systems generated from SDL models. After years of experience with internal extensions, Motorola worked with the SDL standards organization (ITU-T) to incorporate these extensions into a newer version of the SDL standard, and with SDL tool vendors to support these extensions. Today, Motorola product groups primarily use SDL-96, with some constructs from SDL-2000 that are supported by tool vendors.

3.1. Extensions to SDL

Though much of SDL is graphical in nature, there are items that need to be specified textually, such as data types, variables, signals, signal lists, etc. We also found that the graphical form of SDL is unwieldy for describing advanced algorithms. It was determined that capturing some functionality in textual form provided a concise way of


describing common algorithms. It made them easier to read, more comprehensible, and more maintainable. Textual forms also resulted in significant savings in diagram size (page real estate), a valuable improvement since some of the state machines being generated would spread over several pages, making them very difficult to review and defeating one of the main strengths of a visual modeling paradigm. The textual algorithms are most useful for expressing medium-level data manipulation from a control-flow viewpoint, primarily to provide a more compact version of bulky graphical combinations such as loops (especially nested loops) and decisions (especially nested decisions with many branches and little data manipulation on each branch). We also introduced the concept of safe pointers in the form of own and reference types. These constructs have the concept of data ownership built into their definition and implementation to support efficient manipulation of data. Data ownership enables the runtime system to determine when instances of data objects are no longer needed and can therefore be reclaimed in a timely and efficient manner. Finally, we extended data-type definitions and manipulations to simplify the specification of complex packet data structures in lower-level communication protocols. Extensions were made to handle bit and variable-bit declarations and manipulations for the protocol data units (PDUs) used in wireless communication systems. These extensions allow the declaration of packet data structures for handling bit-oriented data. Optional fields and dependencies among data fields can be described and manipulated with these extensions. For example, the construct «VARBITS» uses a sequence of «variable_bit_field» data fields to define the data structure for a PDU. A field in this PDU can be absent if the «expression» in the construct «IF» evaluates to false. With the specification of packet data types, packet data can be manipulated using the extended constructs.
These extensions facilitate the separation of concerns between the application level and the marshalling level. In communication systems, marshalling is the process of encoding, decoding, and converting data formats; the code for packing PDUs for transmission and unpacking PDUs for applications is an example of marshalling code. At the application level, information contained in a PDU can be extracted and manipulated using the extended constructs without needing to know the encoding of the PDU data. With the formal definitions of these extensions, marshalling code can be generated automatically from high-level specifications [11,12]. These extensions, along with formal techniques to generate marshalling code from high-level specifications, contribute to improvements in code quality and productivity. Portability is also improved, since platform-specific and language-specific marshalling code can be generated from the specifications.
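The idea of generating marshalling code from a declarative PDU layout, with optional fields guarded by earlier fields in the spirit of the «VARBITS»/«IF» constructs, can be sketched as follows. The field layout is invented for illustration and is not an actual Motorola or TETRA PDU:

```python
# A declarative, bit-oriented PDU layout: (name, bit width, presence test).
# The 'addr' field is present only when the 'has_addr' flag bit is set,
# mirroring the IF-guarded optional fields described above. Hypothetical.
LAYOUT = [
    ("msg_type", 4, None),
    ("has_addr", 1, None),
    ("addr",     8, lambda f: f["has_addr"]),
    ("length",   3, None),
]

def pack(fields):
    """Application-level field dict -> bit string (the marshalling level)."""
    bits = ""
    for name, width, present in LAYOUT:
        if present is None or present(fields):
            bits += format(fields[name], "0{}b".format(width))
    return bits

def unpack(bits):
    """Bit string -> field dict; optional fields are decoded conditionally,
    using fields already decoded to evaluate each presence test."""
    fields, pos = {}, 0
    for name, width, present in LAYOUT:
        if present is None or present(fields):
            fields[name] = int(bits[pos:pos + width], 2)
            pos += width
    return fields

pdu = {"msg_type": 5, "has_addr": 1, "addr": 0xAB, "length": 3}
assert unpack(pack(pdu)) == pdu                                  # round trip
assert len(pack({"msg_type": 5, "has_addr": 0, "length": 3})) == 8  # addr omitted
```

Because `pack` and `unpack` are derived mechanically from `LAYOUT`, the application code only ever touches named fields, which is the separation of concerns the extensions provide.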

3.2. Model simulation and code generation

In addition to using external SDL tools for product development, an SDL runtime system was initially developed internally to support the validation of SDL models that included our extensions to the standard specification language. This allowed us to prototype the internal extensions for product development. After experimenting with modeling and simulation, we partnered with tool vendors to support these extensions and proposed that they be added to the formal definition of SDL. Most of these


extensions were added to the SDL standard, and some are now also part of the definition of UML 2.0. An automatic code generation system was developed internally to generate highly optimized product code from the extended SDL models [13]. Since the SDL-based model describes an abstract design, the code generation system needs to take domain-specific and platform-specific optimizations into consideration. The automatic code generation system is a rule-based system containing a collection of tools used to create and implement a custom code generator. The code generation system captures domain and programming knowledge and generates code optimized for specific product platforms. Details about this code generation system can be found in [13].

3.3. Lessons learned

The adoption of SDL for system modeling, coupled with an auto-code generator and an automated test generator, showed significant success and positive impact in the design and development of complex real-time wireless communication systems in Motorola. The most widespread use is in the network infrastructure businesses, where systems are typically large and complex, with high capacity and availability requirements [7]. Results from the deployment of SDL with the integration of auto-code generation tools showed a 2.5–10× productivity improvement in various product groups. Fig. 2 shows the average improvement in finding defects early in the development process compared to traditional non-modeling approaches in one of the product lines in wireless network infrastructure. By using SDL, these teams were able to significantly increase their effectiveness at finding faults early in the development cycle. The label GDR stands for Graphical Design Review; this phase is present in both development processes. In the legacy development processes, graphical models are used to structure and review the architecture and design, and code is developed manually using the models as blueprints.
The SDL development processes allowed the teams to execute their architectures and designs rather than just visually review them. As a result, many fewer defects were traced to the architecture and design phases than in the legacy approaches; in other words, the SDL teams were able to reduce risk much earlier in the development cycle. Fig. 3 shows the average improvement in test effectiveness gained by product groups in the same product line across all phases of product testing, with more significant improvement
[Figure omitted: bar chart comparing the cumulative percentage of total faults found (0–100%) under the Legacy Baseline and SDL development processes.]
Fig. 2. Cumulative faults comparison.



[Figure omitted: bar chart of test effectiveness by test phase (simulation, integration, box testing, system integration, system test, post release), comparing the Legacy Baseline and SDL development processes.]
Fig. 3. Test effectiveness.

in the early testing phases. These teams were able to use model-based simulation to discover the majority of application-level logic errors in the simulation test phase. The integration phase therefore shows nearly twice the test effectiveness seen in the legacy baseline. The early simulation testing of application logic, prior to integration on the target platform, allowed more defects to be found earlier in the process, and the data shows this benefit extended over all test phases. To date, Motorola product teams have used SDL to implement at least some functionality in more than 12 fielded products. The majority of those have been network elements developed in the network infrastructure businesses. Five of them have over 60% of their functionality implemented in SDL. In addition to these successes, a number of key lessons have been learned from our experiences with SDL about what prevented its widespread adoption in the engineering community.


- Platform-independent models are difficult to create due to data complexity issues in wireless real-time communication systems. PDUs in wireless communication systems are often complex, with variable-length data fields and dependencies among data fields. Data-type constructs in SDL are inadequate for the declaration and manipulation of these packet data units.
- Modeling skills are not widespread in the engineering workforce, even among those with years of industry experience.
- Vendor tools have very limited support for the complex data specification and manipulation required for wireless real-time communication systems. We had to augment vendor tools to support internal extensions to data specification and manipulation.
- SDL modeling is not suited to all types of systems and applications. Some algorithms, database queries, etc., may be cumbersome to represent in graphical notations.

4. Experiences with UML

SDL was very successful for certain communities within the engineering population, but adoption was not widespread. UML 1.x use was at least as widespread but did not have the same success stories. Motorola participated in the UML 2.0 standardization effort and


helped ensure the updated standard included support for the concepts that led to the SDL successes. With the ratification and adoption of the UML 2.0 standard, vendor support for SDL is expected to decline. Our resources were therefore shifted to help deploy UML 2.0 to wider communities, both those familiar with modeling from an SDL or UML 1.x background and those using a code-centric approach.

4.1. Advantages of UML over SDL

UML overcomes some of the limitations of SDL, in particular the lack of strong support for requirements engineering and system architecture. SDL constructs primarily deal with specifying the behavior of a system through partitioning and hierarchical decomposition. UML provides similar constructs as well as additional diagrams that deal with the relationships between a user and the system. In particular, they document the goals of the user, the roles of the user, the typical interactions between the user and the system, and the specific actions that occur during these interactions. One of these diagrams is the Use Case Diagram, which is used to capture the externally viewed interactions with a system, using both a graphical notation and an informal language to describe the goals and actions of the user. Additional formal descriptions of system interactions and individual scenarios within use cases can then be provided in greater detail through the use of Sequence Diagrams, Collaboration Diagrams, and Activity Diagrams. Sequence Diagrams emphasize the sequence of messages exchanged among objects in the system, while Collaboration Diagrams primarily use their layout to indicate how objects are statically connected. The goal of both diagrams is to provide a high-level view of how objects collaborate. A principal feature of these diagrams is their simplicity: message exchange and communication relationships can be easily identified.
The Activity Diagram allows a more detailed specification of the individual actions that make up a behavior, including the capture of parallel behavior, workflows, behavioral dependencies, the association of activities with objects, and the representation of interactions between use cases. SDL typically uses MSCs for requirements specification; these, however, lack the high-level and detailed views of user-system interactions that can be used to capture user requirements. UML also includes constructs and diagrams that support limited architectural modeling. In particular, the Composite Structure Diagram can be used to capture the structure of a class by specifying its parts (structural nodes), their ports, and the connectors between ports. Since each part represents an instantiated class in an architecture, and each of those classes can recursively have its own internal structure, the combination of Class Diagrams and Composite Structure Diagrams supports structured (nested) classes, which can be used to represent layered architectures. In addition, classes can be stereotyped as components, so components and their connections can also be captured in the Composite Structure Diagram. The ports associated with classes can have associated Interfaces, whose behavior can be captured using a number of different diagrams, including Sequence Diagrams, Collaboration Diagrams, Activity Diagrams, and State Machine Diagrams. Note that activities in UML are used to model various types of flows in the system, both signal and data flows, as well as algorithmic or procedural flows. These generalized flow-based descriptions, which


are not state-based, can be used at the system level to naturally describe behavior in numerous domains and applications. Of course, these constructs are insufficient to allow UML to be used as a full-blown Architecture Description Language (ADL). However, a strength of UML 2 is that it can be enriched through the use of profiles to provide a domain-specific language. One such profile of interest is SysML, which introduces several additional stereotypes and their associated attributes, operations, and constraints, all of which help extend the modeling reach of UML to the specification of system architectures. These enhanced architectural and behavioral descriptions make UML models more amenable to detailed design and code generation.

4.2. Case studies

Anticipating the need to migrate a large base of SDL products to UML 2.0, we decided to conduct a pilot in conjunction with one of the product groups. The main goal was to identify how to automate the migration process as much as possible. Instead of simply moving the model source from one environment to another, we also wanted to push the resulting UML model all the way through to generating shippable code using the new UML development environment. This way we would be able to verify how closely the UML environment could mirror the SDL environment, covering the multiple aspects of editing, simulation, debugging, testing, targeting, integration, and field acceptance. We would also be able to gather comparative metrics to see how the resulting artifacts of the two languages differ in terms of size and speed. In addition to the migration pilot, we modeled and designed a key network platform using UML 2.0. Through these case studies, guidelines have been developed for the rollout of UML 2.0 throughout Motorola.

4.2.1. Case study: migrating SDL to UML

Working with a development team at Motorola, we selected a small production SDL model that uses a fair number of SDL constructs.
Its smaller size would make it manageable for a small team to quickly make progress through all the development phases and to quickly resolve issues. Because it was already a shipped product, the model's behavior had already been validated, and test cases previously developed for the product testing could be used to validate the behavior of the resulting UML model. Overall, nine staff months were spent on the migration effort, from when we first started preparing the SDL model for migration to when we delivered a binary executable generated from a UML 2.0 model running on a target platform. We focused first on generating a UML 2.0 model that would simulate on the Windows modeling platform. The major tasks we targeted for this pilot were as follows:


- Migrate model source from SDL format to syntactically correct UML format on the Windows modeling platform.
- Validate system functionality on the Windows platform.
- Port to the Solaris product development platform and perform sub-system tests.
- Port to the Tandem target platform and perform system integration and tests.
- Run performance tests on the target platform.


Once we had a syntactically and semantically correct UML 2.0 model, we were able to progress quickly. Because the target binary passed all validation and verification tests with no defects, we ended up releasing this subsystem as a patch for product release, and it was deployed for customer acceptance, making this the first code automatically generated from a UML 2.0 model shipped by Motorola into the field. There have been no reported field defects attributed to this binary release.

The structure of the migrated UML model is similar to that of the SDL model; processes are mapped onto active classes, but SDL blocks are also mapped onto active classes to capture the hierarchical distribution of the SDL functionality. A preferred way to capture this SDL architectural information would have been to use UML Composite Structure Diagrams.

The UML binaries are approximately 2x the size of the SDL binaries on the development platform and 2.5x on the target platform. We are not exactly sure of the reason behind the size increase, though a likely explanation is that many more data operator binaries are being included from the vendor support libraries during build time than are actually needed by the model. This could be a result of the way data definition header files are being translated and incorporated directly into the model as source, instead of simply being used to resolve references within the UML syntax during compilation. The use of different compilers on the development and target platforms may also contribute to the size difference. This is still under investigation.

The performance of the UML binary is comparable to that of the SDL binary. The UML-generated binary increases CPU utilization on the target platform by less than 1.5% over that of the SDL-generated binary.
This makes sense: because the tool vendor uses an SDL profile for UML, the UML source is mapped onto SDL constructs during compilation, and the state-based functionality and signal-exchange mechanisms are very similar between the two languages. We have learned several lessons from the migration exercise.


- Some modeling constructs in SDL are not supported by UML. SDL constructs such as virtual process-type definitions, services, service types, service-type instances, signal refinement, etc., require re-design in UML.
- SDL is not case-sensitive, allowing a mixture of uppercase and lowercase to represent SDL keywords, literals, variable names, operators, signals, timers, etc. UML, however, is a case-sensitive language, so the source SDL model must be transformed into a case-compliant model (consistent use of case for all named entities); otherwise the resulting UML model will be syntactically incorrect, since references and definitions may not match and keywords may not be properly recognized.
- The SDL model may have embedded lower-level programming language constructs (such as C or C++) through the use of specialized vendor-specific directives. These code fragments may refer to values defined in external header files or use a unique notation to access the variable names in the SDL model itself. Unfortunately, external data definitions need to be translated into UML-compatible source format, and the variable-name notation is different in UML, so almost all code fragments must be modified or translated.
- Interfacing with external routines that are implemented in the C and C++ programming languages through header files is difficult. Both SDL and UML are strongly typed, whereas many programming languages are not. Some constructs defined externally may not provide sufficient information for proper typing.


- Targeting multiple platforms is difficult. Trying to maintain various configurations to support builds on multiple target platforms is problematic. The capabilities and options currently available in modeling environments are simplistic and provide only narrow point solutions, which do not combine well into a comprehensive strategy.
- Tool support for simulation and test is inadequate. We have found that one of the greatest benefits of modeling has been the ability to introduce semantics very early into the development process at an abstract level, thus allowing validation of the earliest design artifacts through simulation and test cases. These artifacts eventually evolve into trusted components through an incremental and iterative process of gradual behavior introduction, and are validated along the way. Without automation in the testing environment, it is not cost effective to invest in model migration.
- Algorithms migrate well; data types do not. The algorithms captured in visual modeling (flow paths with tasks, decision branches and merges, etc.) migrate quite well, along with most textual operational behavior (a fairly common set of data operations). It is challenging to handle data-type incompatibilities.
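The case-compliance transformation mentioned in the lessons above can be sketched as a small rewriting pass over the SDL source. The identifier pattern and the first-occurrence-wins canonicalization policy below are illustrative assumptions, not the actual migration tooling:

```python
# Sketch: enforcing case-compliance when migrating SDL to UML.
# SDL is case-insensitive, so "CallTimer", "calltimer", and "CALLTIMER"
# may all name the same entity; UML is case-sensitive, so every
# reference must be rewritten to a single canonical spelling.
import re

def canonicalize_identifiers(source: str) -> str:
    """Rewrite every identifier to the spelling of its first occurrence."""
    canonical = {}  # lowercase form -> first-seen spelling

    def replace(match):
        word = match.group(0)
        return canonical.setdefault(word.lower(), word)

    return re.sub(r"[A-Za-z_][A-Za-z0-9_]*", replace, source)

sdl_fragment = "DCL CallTimer Duration; SET(NOW + 5.0, calltimer); RESET(CALLTIMER);"
print(canonicalize_identifiers(sdl_fragment))
# -> DCL CallTimer Duration; SET(NOW + 5.0, CallTimer); RESET(CallTimer);
```

A production transformer would also need a symbol table to avoid touching string literals and comments; this sketch only shows the core rewriting idea.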

Drawing on this experience, we have developed a set of guidelines for use by product development teams to migrate SDL models to UML. Our future effort is to investigate how these issues can be addressed by various model transformation techniques [14,15]. Due to our large base of SDL models used in existing products and product lines, it is critical that a majority of the models be migrated automatically.

4.2.2. Case study: developing a network element platform with UML 2.0

One of the most complex systems modeled using UML 2.0 in Motorola has been a central subsystem of a Radio Access Network. The subsystem is responsible for high-level call processing. This subsystem is also a network element that performs complex messaging and runs processes for managing several network resources. The subsystem pilot split the system design into four separate UML models, which in total constitute approximately 80-90% of the application code of the subsystem. The remainder of the code provides interfaces to various services, which are accessed from within the model through external function calls. The key lessons learned from this large-scale pilot are summarized here.



- Appropriate training is necessary to help identify and effectively use the new and significantly different constructs now available in UML 2.0.
- The development process itself must be adjusted to manage a model-driven methodology, allowing time for simulation and more testing much earlier in the life cycle.
- New solution patterns were needed when we introduced UML 2.0, such as shifting to more active classes that use asynchronous communication, and employing composition aggregations and architecture diagrams for design decomposition and reuse.
- Separating modeling issues from coding (or implementation) issues is needed to decrease coupling and to provide more flexible and reusable models.
- Coding practices learned from using object-oriented languages need to be reevaluated, because the new modeling environment and modeling action language emphasize simplicity and more reliance on abstract data types.


- A key deficiency in the UML 2.0 tool environment is poor support for automated system validation testing. Manual testing is not practical for a product of this size and complexity.
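The "active classes that use asynchronous communication" pattern mentioned in the lessons above can be sketched as objects that each own a thread and a mailbox, exchanging signals without blocking the sender. The class names and the signal representation here are illustrative, not the vendor tool's generated code:

```python
# Sketch: active-class pattern -- each object owns a thread and a
# message queue; peers communicate only by asynchronous signals.
import queue
import threading

class Signal:
    def __init__(self, name, **params):
        self.name, self.params = name, params

class ActiveClass:
    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, signal):
        # Asynchronous: the caller never blocks on the receiver.
        self._mailbox.put(signal)

    def _run(self):
        while True:
            sig = self._mailbox.get()
            if sig is None:          # sentinel: shut the state machine down
                break
            self.handle(sig)

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()

class CallHandler(ActiveClass):
    """Hypothetical call-processing active class for illustration."""
    def __init__(self):
        self.log = []
        super().__init__()

    def handle(self, sig):
        self.log.append(sig.name)

handler = CallHandler()
handler.send(Signal("Setup", channel=3))
handler.send(Signal("Release"))
handler.stop()
print(handler.log)  # -> ['Setup', 'Release']
```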

4.3. Action language

The executable extensions (or action language) used in modeling need to be at a higher level of abstraction than programming languages, so that they are closer to the customers' concerns. The difference between a modeling action language and a programming language is the same as that between a programming language and assembly code: both specify the work that has to be performed, but at different levels of abstraction. The programming language abstracts away the details of the hardware platform, while the action language abstracts away the details of the software platform, making its models independent of the software platform. In MDA terminology, such a model becomes a Platform-Independent Model (PIM) and is portable across software platforms and development environments. However, just like code, the action language must be computationally complete: there must be a set of well-defined execution semantics that include object creation, data manipulation, general computation, and communication. The action semantics need to specify behavior independently of the underlying data representation and implementation. For example, one needs to be able to simply specify that what is needed is the largest element of a collection, instead of having to detail how to loop across a linked list of data structures to compute it. As a result, the underlying computational mechanisms and storage schemes used to implement a model can be changed without affecting the behavior specified in the model, and the model can be freely translated to any target.

4.4. Test automation for UML 2.0

As stated in Section 3.3, one of the significant benefits for SDL teams is being able to validate application logic in the simulation test phase. The SDL tools provided very good facilities for automated model testing. However, the UML tool vendor did not carry this test automation feature forward from the SDL environment.
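The abstraction gap described in Section 4.3 can be illustrated with a small contrast: stating what is needed versus spelling out how to traverse a particular data structure. The linked-list code below is an illustrative stand-in for implementation-level detail, not anything generated by the tools:

```python
# Sketch: action-language abstraction vs implementation-level detail.
class Node:
    """A concrete linked-list cell -- a storage-scheme commitment."""
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def largest_via_traversal(head):
    """Implementation level: walk a specific linked-list representation."""
    best = head.value
    node = head.next
    while node is not None:
        if node.value > best:
            best = node.value
        node = node.next
    return best

def largest(collection):
    """Action-language level: independent of the storage scheme."""
    return max(collection)

chain = Node(7, Node(42, Node(13)))
print(largest_via_traversal(chain))  # -> 42
print(largest([7, 42, 13]))          # -> 42
```

Because the second form says nothing about the representation, the underlying storage can change (list, set, generated container) without touching the specified behavior, which is exactly the portability property the action language is meant to provide.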
The SDL teams recognized that the benefit of such a test automation system is so great that many of them would not migrate to UML without having similar features. A significant part of the activities in the case study described in Section 4.2.2 involved developing extensions to the modeling tool to provide the same level of model test automation. Those extensions were initially developed based on the specific testing conventions of that product. The test extensions are currently being generalized in a corporate-level activity to ensure they support the model testing needs of all internal teams. A similar activity is updating the code generator to support UML 2.0. Wider deployment of UML 2.0 is currently predicated on completion of these activities.

Our model-driven approach focused on developing effective techniques and tools to generate test cases from models and automate the execution of test cases for model validation. We adopted the Testing and Test Control Notation version 3 (TTCN-3) [16] as the test specification and execution framework for model validation. In the telecommunication industry, TTCN is often used as the standard test


Fig. 4. Generation of TTCN types and templates from UML.

description language. It is designed for functional and black-box testing independent of test platforms. Test cases and test suites were generated automatically from modeling constructs describing requirements and usage scenarios, including Use Case Diagrams and Sequence Diagrams [18]. These diagrams capture the interactions among system entities, and explicit test cases can be generated to describe these interactions. In order to automate TTCN test case generation, data-type definitions and signal definitions referenced by the interactions in Sequence Diagrams have to be translated into TTCN representation to support the specification and validation of test cases. These data types also facilitate the specification and generation of TTCN templates. Internal tools were developed to support model-based generation of TTCN types, TTCN templates, test scripts, and test suites. These test scripts and test suites were used to automate the validation of UML models. Fig. 4 shows the translation of data types and signal definitions from a sample UML model to TTCN representations and the generation of TTCN templates using the UML Message Builder (UMB), a tool developed internally to support model-based test specification and validation.

The standardization of the UML 2.0 Testing Profile [17] is expected to simplify the development and integration of testing tools. The UML Testing Profile provides support for UML-based model-driven testing. Modelers and testers can use the same language for test specification and model development. This allows the reuse of UML models for testing and facilitates test development in the early system development phases.

4.5. Lessons learned

Our experiences show that the rich and expressive modeling constructs in UML facilitate systems engineering, architecture, design, and code generation. With the flexibility of


annotations and profiles, UML can be extended to applications in various domains. Compared to SDL and Structured Methods, UML offers a better solution for MDE that covers the full lifecycle of software development. On the other hand, the learning curve of UML 2.0 is steep due to the large number of modeling constructs. The main UML standard document from OMG is over 700 pages long. However, not all the diagrams and constructs are used in all phases of development, nor are they all necessarily needed for particular problem domains. Certain key elements are considered to be universally used, such as classes in Class Diagrams and use cases in Use Case Diagrams. The key is to identify which constructs are most helpful in addressing the specific issues of an individual domain. This takes experience in applying UML 2.0, and we have internally created guidelines and training courses that help identify how to effectively use modeling constructs for real-time communication systems. Once enough experience has been gained, limitations of applying UML can be addressed through the extension mechanisms of UML. One of these is the UML profile, which tailors UML to a specific domain. The other is metamodeling, through which UML can be extended and a domain-specific modeling language (DSL) can be created.

5. An integrated approach to visual modeling

Based on our experience using visual modeling, we have been able to identify common traits of those teams that get the most benefit from visual modeling. Most teams that have been successful using visual modeling have also been successful at integrating visual modeling with the rest of the development lifecycle. This integration is described in three main areas: project management, sharing technical information with other teams, and developing the product within the team.

5.1. Integrating with project management activities

One of the first issues teams face is how to deal with the new visual artifacts in project and quality management systems.
Those systems typically capture productivity and defect metrics in terms of lines of code or pages of documentation. Review and inspection systems also typically capture locations based on page and line numbers. Review aids like change bars are taken for granted in text-based artifacts, but have only recently become available for visual models. Using a visual modeling language requires explicit definition of how those artifacts will be represented in the project, quality, and change management systems.

With SDL-based modeling, most teams adopt lines of Phrase Representation (PR) [10] as a size metric. This metric is not without its difficulties, however. Similar to the case of trying to count generated code, the generated PR counts are sometimes different in different versions of the modeling tool. In one case, a change in modeling tool version caused a 20% increase in the PR size. This variation is not desirable, but the metric is much more stable than counting lines of generated code, which varies far more. Teams do not have a better alternative, so they are required to recalibrate the metrics when new tool versions change the PR counts.

With UML users, there is no convention yet defined that has widespread use across Motorola. One significant difficulty is that while almost all UML vendor tools in use in


Motorola save model artifacts in XML format, none of them save them in the same format. Another significant issue is that the artifacts contain both the representation of the diagrams and the representation of the model elements. In SDL PR, the representation covered only the model elements and did not capture any information about graphical layout. With the UML tools, both are available, and the primary task is to figure out which best represents the size of the model for project management purposes. Finally, another difficulty is that UML models permit the capture of more contextual information. Artifacts like use case diagrams and use case realizations exist in the model, but are not part of the behavioral description used to generate the product. Motorola is currently working collaboratively across the businesses to define common practical UML model metrics that can be used for project management purposes. We are also actively working with other companies to collect data and define standards in this area.

5.2. Integrating with external teams

Integration with upstream and downstream teams is also typically harder for visual model artifacts than for text ones. Traceability between requirements, model elements, and test cases requires extra effort, and again tool support has become available only relatively recently. Outside of the team creating the visual models, external stakeholders are typically not as familiar with the visual modeling language, and extra effort is required to ensure the communication needs are met across all stakeholders. Even when the same modeling language is used, significant variations exist in vendor implementations and team usage conventions. Visual modeling requires extra effort to ensure every stakeholder, not just the model authors, understands the visual language and modeling conventions. Three main practices have been used in Motorola to successfully integrate with external teams.


- Provide model access to all stakeholders. This typically means using a selection of three different approaches. The first is to place the model itself in an accessible location and ensure external teams have licenses to read the model natively. The second is to publish the model as a web site, a capability provided by all of the leading tools. The third is to generate a document that integrates contextual text information with model diagrams. Again, most vendors provide a model document generation capability, frequently at additional licensing cost. Many teams use all three approaches to suit the needs of all stakeholders.
- Document and publish modeling guidelines. External teams are most interested in where to find specific information contained in the model. Good guidelines explain how the model is structured and what dependencies exist between different model packages.
- Define configuration management for shared model elements. In text-based environments, this requirement is frequently met by creating a location for things like system-wide header files. Common location and configuration management ensure multiple teams are consistent with each other. The same issue exists when multiple teams use models. In the modeling case, the model package and file structure need to be agreed upon up front to enable sharing.


Even with these practices, there are potential pitfalls. The most frequently occurring problem area is the mismatch in abstraction level between Systems Engineering groups and Product Development groups. Even when both groups use modeling, it is difficult to integrate or leverage the modeling products from one group to another. A significant open issue is how to provide incremental refinement and refactoring support to teams, to provide a more successful path from systems engineering to product development using models.

5.3. Integrating within the development lifecycle

In contrast to project management integration and external team integration, a product development team has the most control over its internal processes. One of the surprising results of the disparate modeling efforts in Motorola is that even with the relative freedom to define their own internal modeling approaches, the successful modeling adopters have chosen very similar solutions. The essence of these is that the model becomes the core focus of the team. Requirements analysis, architecture, design, implementation, and testing are done with model artifacts, and people think in terms of the model elements, not the underlying generated code. Fig. 5 shows the high-level approach shared by successful modeling teams. In this integrated model-driven development approach, requirements models are created to capture the functionality of a system. Test models are derived from the requirements models, and product models are created by refining the requirements models with lower-level details.
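The derivation of test models from requirements models can be sketched very simply. Here a "scenario" is a plain list of (sender, receiver, message) steps standing in for a Sequence Diagram; the derivation rule and the entity names are illustrative assumptions, not the internal tooling:

```python
# Sketch: deriving a test case for one system under test (SUT) from a
# requirements-level interaction scenario. Messages into the SUT become
# stimuli; messages out of the SUT become expected observations.
scenario = [
    ("MS", "GTS", "m1"),   # hypothetical entities and messages
    ("GTS", "MM", "m2"),
    ("MM", "MS", "m3"),
]

def derive_test_case(scenario, system_under_test):
    stimuli = [msg for snd, rcv, msg in scenario if rcv == system_under_test]
    observations = [msg for snd, rcv, msg in scenario if snd == system_under_test]
    return {"send": stimuli, "expect": observations}

print(derive_test_case(scenario, "GTS"))
# -> {'send': ['m1'], 'expect': ['m2']}
```

Real generators must also handle message parameters, timers, and alternative flows; the sketch only shows how an interaction diagram already contains the stimulus/observation split a test case needs.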

Fig. 5. Integrated model-driven engineering.


The product models and test models are integrated in automated model simulation/validation activities. This simulation testing has proven to be one of the most beneficial aspects compared to legacy product development. It provides early identification of application logic errors, in an environment that makes finding and correcting such errors extremely easy compared with legacy development methods. Teams use the graphical model debugging capabilities to trace events across multiple state machines and step through state machine logic. Debugging at the level of modeling abstractions (events and states) is much more productive than the legacy approach of debugging lower-level constructs such as function calls and threads. Code generators transform the model to produce the code for the final product. This auto-generated code is integrated with platform code.

One significant approach shared by successful modeling teams is a process in which test cases in the test model can drive automated testing of both the simulation and the target product. This philosophy provides an immediate benefit to target integration. The tests are first run in the simulation environment, where application logic errors are easy to find and correct. When the product code is generated, these same exact tests are run on the target platform. This provides what is in essence a regression test for the application. Differences in behavior between simulation and target environments make it easy to identify and isolate target integration issues.

5.4. Further integration needs

This overall process is not supported completely by third-party tools. To realize this approach, we have had to use a combination of working with standards bodies, working with tool vendors, and developing our own supporting modeling tools. We have developed and continue to improve the auto-code generation tool suites to support the generation of production code from SDL and UML models [11].
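The shared-test-suite philosophy described above can be sketched with interchangeable adapters: the same test cases drive the model in simulation and the generated binary on the target. The adapter interface and the echo-style system under test are invented for illustration; the point is only that the test cases themselves are shared across environments:

```python
# Sketch: one test suite, two execution environments.
class SimulationAdapter:
    """Drives the model in the host simulation environment."""
    def send(self, signal):
        return f"ack:{signal}"          # stand-in for simulator I/O

class TargetAdapter:
    """Drives the generated binary on the target platform."""
    def send(self, signal):
        return f"ack:{signal}"          # stand-in for target I/O

# Shared test cases: (stimulus, expected response) pairs.
TEST_CASES = [("Setup", "ack:Setup"), ("Release", "ack:Release")]

def run_suite(adapter):
    results = []
    for stimulus, expected in TEST_CASES:
        results.append(adapter.send(stimulus) == expected)
    return results

# The identical suite acts as a regression test across environments:
# any divergence between the two result lists isolates a target
# integration issue rather than an application logic error.
print(run_suite(SimulationAdapter()))  # -> [True, True]
print(run_suite(TargetAdapter()))      # -> [True, True]
```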
Transformation systems have been developed to produce highly optimized production code customized for various product platforms. Real-time communication systems, subsystems, and components for both network infrastructure and subscriber products have been generated and shipped using the auto-code generation tool suites. We have developed automated test generation tool suites and test execution environments for both SDL and UML systems [16,18]. The test generation tool suites enable the generation of tests directly from requirements and product models. These tools also have the capability to generate TTCN data types and templates to facilitate test specification and execution.

Although we have achieved some successes in applying visual modeling techniques to the design and development of real-time wireless communication systems, full life-cycle support for model-based product development is still lacking and immature. In the realm of text-based development, there is a fairly mature ecosystem of supporting tools covering all aspects of the lifecycle, including metrics, traceability analysis, static analysis, dynamic analysis, test automation, change visualization, debugging, and refactoring. Not only do mature tools exist for text-based development, but frequently any one of these areas has multiple tools from multiple vendors and even multiple open-source projects to choose from. This is not yet true today in model-based development.


Solutions are typically only available if they are bundled with the modeling tool by the tool vendor. We see the adoption of Eclipse by modeling tool vendors as a promising development. While many Eclipse integrations are superficial, we see deeper integration facilitating the MDE community as a whole by providing common interfaces to model resources, non-model resources, and lifecycle support tools. We also see the importance of model transformation and model integration [19] to realizing this deeper integration across the lifecycle.

6. Related approaches

A number of software tool vendors and service providers offer tools, processes, and methodologies to support model-driven architecture (MDA) and model-driven development (MDD) [20]. The Rational Unified Process (RUP) [21] describes a software engineering process, including guidelines and templates for software lifecycle activities. The RUP process provides guidelines on how to effectively use UML for software development. A development process specialized for real-time embedded systems is described in [22]. This process model divides software development into analysis, design, translation, and testing, enabling translation of models into executable code. Vendor tools and solutions are available to support a model-based software development process, including model design, validation, and code generation. Rational Rose [23], Telelogic Tau G2 [24], and I-Logix Rhapsody [25] are some of the vendor tools we have adopted for model-based software development.

We take an integrated and model-centric approach to developing software products and product lines. Requirements specification and analysis, architecture, design, implementation, and testing are done with model artifacts. With the support of code generators, product code is generated from models. We adopt commercial tools and extend modeling languages to support an integrated MDE approach to the development of wireless real-time communication systems.

7. Conclusion

This paper summarizes our experiences with visual modeling languages and methods for real-time wireless communication systems. We experimented with Structured Methods for requirements modeling and validation. SDL has been used extensively to model wireless communication systems and generate code from models. We are transitioning to UML to take advantage of the enhanced visual modeling features in UML version 2. Our experiences show that an MDE approach results in productivity and quality improvements. Visual modeling provides a high level of design abstraction and facilitates communication and collaboration among project teams and team members. With formal specification of system structures and behaviors, modeling facilitates design automation. Code and tests can be generated from models. For complex wireless communication systems, automatic code generation leads to substantial productivity improvement. Better test coverage can be achieved with automatic test generation.


We believe that the need for extension and customization is mainly due to domain-specific and platform-specific requirements, as well as various proprietary communication protocols and systems. As MDE techniques mature, such reliance on internal extensions may be reduced or potentially eliminated. The biggest benefit comes from an integrated approach to visual modeling. The integration of requirements models, design models, and test models leads to a higher degree of design automation. It has the potential to improve software quality and development productivity through automated code generation, test generation, validation, and verification.

References

[1] P. Chen, The entity-relationship model: toward a unified view of data, ACM Transactions on Database Systems, 1976.
[2] B. Selic, G. Gullekson, P.T. Ward, Real-Time Object-Oriented Modeling, Addison-Wesley, Reading, MA, 1994.
[3] J. Sprinkle, G. Karsai, A domain-specific visual language for domain model evolution, Journal of Visual Languages and Computing 15(2) (2004).
[4] D.J. Hatley, I.A. Pirbhai, Strategies for Real-Time Systems Specification, Dorset House, 1988.
[5] J. Ellsberger, D. Hogrefe, A. Sarma, SDL: Formal Object-oriented Language for Communicating Systems, Prentice-Hall, Englewood Cliffs, NJ, 1997.
[6] Object Management Group, Unified Modeling Language (UML), <http://www.omg.org/>.
[7] M. Groble, M. Jiang, Model driven engineering in Motorola, in: Workshop on Model Driven Engineering, Annual Research Review, University of Southern California, 2005.
[8] T. DeMarco, Structured Analysis and System Specification, Prentice-Hall, Englewood Cliffs, NJ, 1978.
[9] International Telecommunications Union, Message Sequence Chart (MSC), ITU-T Rec. Z.120, 2000.
[10] International Telecommunications Union/Telecommunications Standardization Sector, Recommendation Z.100, Specification and Description Language (SDL), 2000.
[11] P. Dietz, T. Weigert, F. Weil, Formal techniques for automatically generating marshalling code from high-level specifications, in: Second IEEE Workshop on Industrial Strength Formal Specification Techniques, 1998.
[12] D. Clark, D. Tennenhouse, Architectural considerations for a new generation of protocols, in: Proceedings of the SIGCOMM '90 Symposium, 1990.
[13] T. Weigert, J. Boyle, T. Harmer, F. Weil, The derivation of efficient programs from high-level specifications, in: N. Bourbakis (Ed.), Artificial Intelligence in Automation, World Scientific Publishers, Singapore, 1996.
[14] K. Czarnecki, S. Helsen, Classification of model transformation approaches, in: Workshop on Generative Techniques in the Context of Model-Driven Architecture, OOPSLA, 2003.
[15] J. Sprinkle, A. Agrawal, T. Levendovszky, F. Shi, G. Karsai, Domain model translation using graph transformations, in: Proceedings of the 10th IEEE International Conference and Workshop on Engineering of Computer-Based Systems, 2003.
[16] European Telecommunications Standards Institute, Testing and Test Control Notation version 3 (TTCN-3), 2001.
[17] Object Management Group, UML 2.0 Testing Profile, V1.0, <http://www.omg.org/technology/documents/formal/test_profile.htm>.
[18] P. Baker, Test generation towards TTCN-3, in: ETSI TTCN-3 User Conference, 2004.
[19] Vanderbilt University, Model-Integrated Computing (MIC), <http://www.isis.vanderbilt.edu/research/research.html>.
[20] Object Management Group, MDA Tools and Solutions, <http://www.omg.org/mda/committed-products.htm>.
[21] IBM, Rational Unified Process: Best practices for software development teams, <http://www-128.ibm.com/developerworks/rational/library/253.html>.

[22] B.P. Douglass, Doing Hard Time: Developing Real-Time Systems with UML, Objects, Frameworks, and Patterns, Addison-Wesley, 1999.
[23] International Business Machines, Rational Rose, <http://www-306.ibm.com/software/rational/sw-atoz/indexR.html>.
[24] Telelogic, TAU G2, <http://www.telelogic.com/corp/products/tau/index.cfm>.
[25] I-Logix, Rhapsody, <http://www.ilogix.com/sublevel.aspx?id=53>.