HP eIUM

For the HP-UX, Redhat, Solaris, and Windows operating systems


Software Version: 8.0

Foundation Guide

Document Release Date: September 2012

Software Release Date: September 26, 2012


Legal Notices
Warranty
The only warranties for HP products and services are set forth in the express warranty statements
accompanying such products and services. Nothing herein should be construed as constituting an
additional warranty. HP shall not be liable for technical or editorial errors or omissions contained
herein.

The information contained herein is subject to change without notice.

Restricted Rights Legend


Confidential computer software. Valid license from HP required for possession, use or copying.
Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software
Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government
under vendor's standard commercial license.

Copyright Notice
Copyright 2012 Hewlett-Packard Development Company, L.P.

Trademark Notices

Oracle and Java are registered trademarks of Oracle and/or its affiliates, and shall not be used without Oracle's express written authorization. Other names may be trademarks of their respective owners.

UNIX is a registered trademark of The Open Group.

Intel and Itanium are registered trademarks of Intel Corporation in the US and other countries
and are used under license.

Linux is a registered trademark of Linus Torvalds in the United States.

Red Hat and Enterprise Linux are registered trademarks of Red Hat, Inc.

Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.

Documentation Updates
The title page of this document and the print history below contain the following identifying information:

- Document Edition, which changes each time the document is updated.
- Software Version number, which indicates the software version.
- Software Release Date, which indicates the release date of this version of the software.
To check for recent updates or to verify that you are using the most recent edition of a document,
go to:

http://www.hp.com/support/usage

Contact your HP representative for additional support details.

The following table lists the edition history of this document.

Print History
Edition Version Release Date

First Edition Version 4.1 September 1, 2002

Second Edition Version 4.2 May 1, 2003

Third Edition Version 4.5 November 1, 2003

Fourth Edition Version 4.5 Feature Pack 4 May 30, 2006

Fifth Edition Version 4.5 Feature Pack 5 September 5, 2006

Sixth Edition Version 4.5 Feature Pack 6 November 10, 2006

Seventh Edition Version 5.0 December 13, 2006

Eighth Edition Version 5.0 Feature Pack 2 October 15, 2007

Ninth Edition Version 5.0 Feature Pack 3 April 15, 2008

Tenth Edition Version 6.0 November 6, 2008

Eleventh Edition Version 6.0 Feature Pack 1 June 22, 2009

Twelfth Edition Version 7.0 October 1, 2010

Thirteenth Edition Version 7.0 Feature Pack 1 November 30, 2011

Fourteenth Edition Version 8.0 September 26, 2012

Related Documents
Refer to the following documents (including this document) for further information.

HP eIUM documentation set


HP eIUM Release Notes
Provides release-specific topics, such as new features, software and hardware requirements, and known problems and workarounds.

HP eIUM Foundation Guide
Provides eIUM concepts and design guidelines, including:
- Collectors and collector components
- Designing and configuring an eIUM deployment
- CDR detection and error handling
- File services and Data Delivery Agent
- Using NMEs and the NME Schema Editor
- Using the Common Codec Framework to structure and transform your data
- Creating reports and auditing
- Managing eIUM with OpenView

HP eIUM Installation Guide
Provides installation-related topics, including:
- System requirements and installation prerequisites
- eIUM installation and activation instructions
- Product Extension setup (MySQL database installation and activation, TimesTen database installation and setup on Linux, Real-Time Engine installation)
- eIUM upgrade instructions
- How to activate additional components
- Securing eIUM
- Deactivating and uninstalling eIUM

HP eIUM Administrator's Guide
Provides configuration and administrative guidelines, such as:
- Using the eIUM Launchpad, Operations Console access, eIUM commands, and the eIUM Console command-line tool
- Using and configuring the Reference Data Manager
- Creating and managing collectors and other servers
- Managing host systems and the configuration server
- Managing users and security
- Correcting CDRs with the CDR Editor
- Backing up your deployment
- Maintaining and troubleshooting eIUM
- Tuning for performance
- Administrative functions related to the Operations Console

HP eIUM Operations Console User Guide
Provides Operations Console usage information (also available in the help system embedded in the web application), such as an orientation to the interface, monitoring capabilities for the deployment, groups, and processes, as well as common usage scenarios.

HP eIUM Component Reference
Provides information on the available eIUM components, indexed by category and package, as well as all attributes and sample configurations.

HP eIUM Template Reference
Describes collector and Session Server templates preconfigured to read from particular data sources. The templates can be used to create collectors and Session Servers quickly.

HP eIUM Command Reference
Provides the available eIUM commands, all possible arguments and options, and sample usage.

HP eIUM Real-Time Guide
Describes the Session Server, the Real-Time Charging Manager, and the Real-Time Engine in the eIUM system.

HP eIUM Studio User Guide
Describes how to use HP SNAP Studio to perform business, technical, or administration tasks.

Real-Time Engine Online Help
Provides contextual help in HP SNAP Studio.

HP eIUM Real-Time Developer Guide
Provides an overview of the Real-Time Engine system and functionality, development considerations, and reference information.

HP eIUM Real-Time Performance Tuning and Sizing Guide
Provides performance tuning and sizing tips and guidelines for the Real-Time Engine.

HP eIUM Load Balancer Guide
Describes how to use the optional Load Balancer capabilities of eIUM.

HP eIUM SPR Guide
Introduces the Subscriber Profile Repository (SPR) and describes how to install and configure the SPR in eIUM.

Contents
Foundation Guide 1
Contents 6
Overview 19
eIUM Foundations and the Business Context 20
Business Problem 20

Voice and Convergent Mediation 22

Introduction to eIUM 22

eIUM Processes 23

Collector Hierarchy 26

Key Characteristics 26

Collector Components 28

What is a Collector? 30
Collector Components 30

Encapsulator Overview 31

The Parser and NME Attributes 31

The FlushPolicy, the FlushProcessor and FlushTesters 31

Aggregator Overview 32

Aggregation Tree 32

Multiple Aggregation Schemes 32

Correlation Aggregation Schemes 33

Configuring Aggregation Rules 33

Datastore Overview 33

Datastore Types 33

Datastore Type Summary 35

Changing Datastore Types 35

Database Tables 36

Datastore Table Names and File Names 36

Table Names Example 1 36

Table Names Example 2 37

Table Aging and Table Rolling 37

Designing an eIUM Deployment 40


Sample Deployment 40

Determine the Output Applications and Data Requirements 41

Determine the Input Data Sources 41

Determine the Data Processing 42

Design a Real-Time Prepaid Charging Solution 42

Configuring eIUM 44
Configuration and the eIUM Deployment Hierarchy 44

The Complete Configuration Hierarchy 46

Collector Configuration Nodes 48

Configuration Attributes 49

The ClassName Attribute 50

Sample Configuration 50

Modifying the Configuration 52

Modifying the Configuration using Launchpad 52

Modifying the Configuration from the Command Line 52

Modifying the Configuration with configmanager 53

Linked Collectors 53

Unlinked Collectors 53

Linked Collectors 54

Advantages of Linked Collectors 54

Creating Linked Collectors 54

Create a Master Collector Template 54

Creating a Linked Collector from the Master Template 55

Creating Additional Linked Collectors 55

Example Linked Collector 55

NMEs and the NME Schema 56

NME Attribute Types 57

Add NME Attributes to the NME Schema 57

Structured NMEs and the Structured NME Schema 57

Detecting and Handling Errors in CDRs 60


Overview of Detecting Errors and Handling Errors 60

Solution Architecture 62

Enabling Error Detection and Parking 63

Syntax 64

Example 64

Detecting Validation Errors Using PostParsedRules 65

Example 65

Parking Error CDRs 66

Error Scheme 66

Syntax 66

Example 66

Output Format of Error CDRs 67

Configuration Summary 67

Encapsulator Configurations 67

Aggregator Configurations 67

Datastore Configurations 68

Configuration Example 68

Correcting Error CDRs with the CDR Editor 70

Reprocessing Corrected CDRs 70

Configuration Example 71

Processing Corrected CDRs 72

Parsers that Support Error Handling 72

Detecting and Handling Duplicate CDRs 74


Introduction 75

Accuracy of Duplicate Detection Rules 75

Range of NMEs to Examine for Duplicates 75

Design Questions 75

DuplicateNMEDetectorRule 76

Recovery 76

Aging Mechanism 76

DuplicateCDRDetectorRule 76

Recovery 77

Aging Mechanism 77

TimeHashDuplicateCDRDetectorRule 77

Recovery 78

Aging Mechanism 78

TimeIntervalDuplicateCDRDetectorRule 78

Recovery 79

Aging Mechanism 79

When to Use Each Rule 79

Performance Comparison 80

Performance Statistics 81

Disk Usage Statistics 81

Accuracy and Reliability 82

Deploying the File Service 84


Multiple Switches Generating CDR Files 85

Benefits of the File Service 87

Conceptual Structure of the File Service 87

The File Service Components 87

File Collection Service Components 88

File Collector Components 89

File Distribution Service Components 89

Collectors that Read CDR Files from a File Service 90

The Notification Table 90

CDR File Ownership and Cleanup 91

Recovery 91

Configuration Structure of the File Service 92

File Collection Service Components 92

File Collection Service with a Preprocessor 93

File Distribution Service Components 94

Collector Components 96

Designing and Deploying a File Service 96

Determine Input Data Sources 96

Determine the Number of Output Collectors 97

Configure Collectors 97

Test, Measure and Adjust 97

Creating a New File Service with the File Service Wizard 97

Examine and Modify Your File Service 102

Verifying File Service Components 103

Administering the File Service 103

Using the Launchpad 103

Viewing File Service Statistics in the Launchpad 103

File Collection Service Statistics 103

File Distribution Service Statistics 104

Viewing File Service Statistics in the Operations Console 104

Using File Service Commands 104

Understanding the Data Delivery Agent 106


Collector Components and the Data Delivery Agent 106

The Data Delivery Agent 106

The DeliveryAgent Component 108

Delivery Agent That Sends Files 108

Delivery Agent That Sends NMEs 109

The Transport Component 110

The NMEChannel Component 110

Configuring the Data Delivery Mechanism 110

Recovery Behavior 111

Delivery Agent Configuration Location 111

Transport Component Configuration Location 111

Transport Component Configuration Attributes 112

Transport Component for NME Delivery: DeliveryNMEAgent 112

Example 1: Sending NMEs 112

Example 2: Sending Files 112

FTP Data Delivery 113

The DeliveryFTP Component 113

DeliveryFTP Configuration Location 113

Example: FTP Delivery 114

JDBC Data Delivery 114

The DeliveryJDBC Component 115

DeliveryJDBC Configuration Location 115

eIUM SQL Data Type Mappings 115

Using Structured NMEs 118


Overview of NMEs 119

Traditional NMEs and the NME Schema 119

Structured NMEs 120

Defining Structured NMEs in the NME Schema 120

NMEAdapters 120

Operating on NMEAdapters 121

NME Schema Loader 121

Encapsulators 121

Rules 122

Datastores 123

Key Points 123

Defining Structured NME Types 123

Templates for Collectors Using Structured NMEs 123

Components of Structured NME Types 124

Syntax of NME Type Definitions in the NME Schema 124

Namespaces 125

Type Aliases 126

Syntax 126

Primitive NME Attribute Types 127

Predefined Type Aliases 127

Other Attribute Types 127

Structured NME Types 128

Syntax to Define NME Attributes 128

Example NME Type Definition 129

Loading NME Type Definitions 130

Example 130

Using Structured NMEs 131

Example 131

Moving to Structured NMEs 132

Using NMEAdapters in a New Deployment 132

Using NMEAdapters in an Existing Deployment 132

Using the Structured NME Schema Editor 134


Starting the NME Schema Editor 134

Name Spaces, Type Aliases and Structured NME Types 136

Building a Structured NME Schema 137

Adding a New Namespace and NME Type 137

Adding NME Attributes 138

Saving Your NME Types 140

Adding Type Aliases 140

Adding Arrays and Optional Attributes 141

Using the Common Codec Framework: Structuring and Transforming your Data 142

Overview 143

The IUM Studio 143

Repository Integration 144

CCF Components 145

CCF Commands 145

CCF:The Codec Layer 146

Language Overview 146

Transformations as Field Codecs 148

Format Definitions for Different Data Types 149

Text Data 149

Fixed-Length Fields 149

Delimiter-Separated Fields 150

Fields Defined by Regular Expression Patterns 150

Field Formatting Patterns 152

Raw Binary Data 152

Padding 153

XML Data 153

Mixed Data 156

XFD Syntax Reference 157

<format> 157

<root> 158

<type> 158

<codec> 159

<sequence> 159

<choice> 159

<set> 160

<array> 160

<field> 161

<switch> 161

<raw> 162

<text> 163

Format Definition Example 163

Using xfdtool to Convert External Schemas 166

Format Definitions in IUMStudio 166

Starting IUMStudio 166

About XFDFile View 172

Source Tab 172

Types Tab 175

Codecs Tab 189

Using the XFD-to-XSD Wizard 190

CCF: The Transformations Layer 195

Simple Transformations 196

Transformation Parameters 196

Integrating Custom Transformations 196

Transformations Integration with the Codec Layer 196

Transformations and Business Rules: The TransformationRule 197

Attribute Transformation 197

Transformations Language Descriptor 197

Component Configuration 198

Language Overview 199

Transformations Syntax Reference (TX) 199

<transformations> 199

<format> 199

<transformer> 200

<transform> 201

<script> 201

<chain> 202

<array> 202

<use-transformer> 202

<param> 203

<arg> 203

Transformation Example 204

<use-transformer> Class Attribute Values 207

Integration with the Schema Layer (XSD) 224

Integration with Format Definitions (XFD) 224

Transformation Definitions in IUMStudio 225

Source Tab 225

Design Tab 226

Main View 226

Transformers View 228

Composite Transformations 240

Chain Transformations 240

Array Transformations 241

Array Transformations Parameters 241

Structural Transformations 242

Array Attributes 242

NME Arrays (com.hp.usage.array Package) 243

NormalizedMeteredEvent (Flat NMEs) 245

Script Transformations 247

CCF: The Schema Layer 249

Language Overview 249

Beans Schema Syntax Reference (XSD) 249

<schema> 249

<complexType> 250

<element> 251

<simpleType> 252

<annotation> 253

<appInfo> 253

Arrays Definition 254

Dedicated Named Array Type 254

Anonymous Array Type 254

Beans Schema Language Usage Example 255

SNMESchema Syntax Reference (XSD) 256

<schema> 257

<complexType> 258

<element> 259

<simpleType> 260

SNME Primitive Types Support 260

Arrays Support 261

Primitive Types Array 261

NME Array Type 261

SNME Declaration Language Examples 262

eIUM Configuration 262

SNME Schema XSD Declaration Language 263

Extending Beans and NME XSD Types 264

Defining Beans and NME Types Documentation 265

Schema Definitions in IUM Studio 266

CCF Components 272

Usage Scenarios 273

Creating Reports 278


Overview 278

Reporting Components 279

Report Types 279

Report Parameters 280

Basic Reporting 280

Before You Begin 280

Start the Web Application Server 281

Create a Report Collector using LaunchPad 281

Create a Report Collector using the Reporting Wizard 282

Create a Report 284

Run Reports 286

Advanced Reporting 287

Architecture 287

Change the Web Application Server Start-Up Configuration 289

Customize the Reporting Interface 289

Auditing for Revenue Assurance 292


Introduction 293

Revenue Assurance 293

Auditing Basics 294

Auditing Overview 295

Enable Auditing 295

Disable Auditing 297

Monitor Auditing 297

Set Audit Log Level 297

View Audit Log 298

View Audit Data 299

Audit Data Collection 301

Audit Points 301

Encapsulator Audit Points 302

Aggregator Audit Points 302

Audit NMEs 303

Input Source Audit NME 304

Input Dataset Audit NME 305

Output Dataset Audit NME 305

Exception Audit NME 306

Audit Operations 307

Session Audit 308

Simple Session Auditing 308

GPRS Session Auditing 309

Correlation Audit 310

Audit Data Processing 311

Audit Verification 311

AuditAdornmentRule 312

Audit Data Storage 313

Cautions 313

Audit Tables 313

Audit History Tables 315

Audit History Table Aging and Rolling 316

Audit Data Added to Collector History Tables 316

Dataset Audit - NME Count 317

Aggregation Scheme Audit 317

Scheme Audit - NME Count 318

Scheme Audit - Session 318

Scheme Audit - Correlation 319

Customizing Audit 320

Custom Audit Data Collection 320

Custom Audit Verification 320

Analyzing Audit Reports 320

Create and Start the Audit Report Server 321

View Daily Audit Reports 322

Exception Report 322

Collector Data Output Report 323

Total Summary Report 323

Collector Data Source Report 323

Correlation Report 323

Inter-Collector Report 324

Session Report 324

Create Audit Reports 325

Run Audit Reports 325

Managing eIUM with HP OpenView 328


Methods of Managing eIUM 328

Activating OVO Management of eIUM on each eIUM Node 329

Making eIUM Log Files Available to OpenView Operations 329

Notifying OpenView When a Collector Starts or Stops 329

Activating OVO Management of eIUM on the OVO Management Console 330

Uploading OVO Configuration 330

Configuring eIUM Node Information into OVO 330

Distributing OVO Managed Node Information to Agents 331

Overview
The HP Internet Usage Manager (eIUM) Software enables service providers to analyze the usage of their infrastructure and bill customers accurately. It also provides service authorization and protocol translation capabilities. This guide describes the architecture and operation of eIUM in technical detail; this introductory topic summarizes the key features of the product.

eIUM is one of the leading convergent mediation and usage management platforms for voice, IMS, DSL, cable, VoIP, 3G mobile, WiFi/WLAN, and many other networks and technologies. eIUM is a real-time, convergent, and adaptable mediation platform that can help you mediate today's networks for billing, monitor and manage quality of service and capacity, and prepare for future IMS networks.

For examples of eIUM in action, see the IUM Voice and Convergent Mediation Case Study and the WiFi/WLAN Case Study - A Real-Time Prepaid Solution for Wireless HotSpot Services. Also see the eIUM Administrator's Guide and eIUM Real-Time Guide.

Chapter 1

eIUM Foundations and the Business Context


This topic series introduces eIUM in the context of the business problem that it solves and its role in convergent mediation, and describes its high-level components.

Business Problem 20

Voice and Convergent Mediation 22

Introduction to eIUM 22

eIUM Processes 23

Collector Hierarchy 26

Key Characteristics 26

Collector Components 28

Business Problem
The convergence of voice and data traffic and of wireline and wireless networks presents service
providers with difficult technical challenges along with attractive revenue opportunities. New access
technologies like broadband, fiber, and wireless are fueling growth in data and voice traffic, making
it harder for service providers to keep pace with their competitors. Because profits from basic
access provision are diminishing, service providers must offer value-added services, from messaging to video-conferencing and Virtual Private Networks (VPN) to Voice-over-IP (VoIP), in order to stay competitive. The challenge for providers today goes well beyond the provision of
reliable bandwidth.

[Figure: subscribers connect through access networks (wireless, cable, fiber, DSL) to services such as video, audio, Internet, news, and charts, with usage information flowing to the BSS/OSS.]

Many providers are compelled to offer voice and data services in bundled packages with flexible
billing plans. But in order to offer such services, providers must combine components from several
different partners, suppliers, and internal systems. Moreover, they must be able to predict and
manage capacity, analyze subscriber behavior, and use this business intelligence to price their
services for maximum profitability. Most importantly, providers must be able to bill for different
classes of subscribers and services: quickly, accurately, and in the format that customers want.

In the traditional voice world, the public switched telephone network (PSTN) connection generates a
call detail record (CDR), which includes information about the duration, origin, and destination of the
connection. Because the circuit-switched network keeps good records, traditional billing systems
work accurately and efficiently for voice services.

Data (IP) traffic presents a much more complex billing problem than voice traffic. Traditional billing systems provide little visibility into the actual use of IP networks. These systems cannot distinguish between types of IP traffic, a video-conferencing application and basic email, for example, and bill for content and usage accordingly.

To reliably analyze and charge for IP services, providers must be able to collect a variety of usage
metrics such as megabytes of data transferred, number of messages sent or services used, or even
the actual monetary amounts involved in e-commerce transactions. They must then collate them
into manageable aggregates of information, correlate them with user information, and send the
output to a billing or other business support system. In short, providers need a platform that
mediates between the IP service infrastructure and business support systems.

To implement usage-based billing and management of IP-based services, service providers need to
overcome several challenges:

- In IP networks, there is no single device that records the details of a transaction. Because network elements were not originally designed to measure IP traffic volume, collecting usage data from several sources can be a difficult process.
- In IP networks, a single subscriber action can involve several network elements, each performing a discrete operation. When a subscriber performs an action (downloads a file, for example), the business support systems may need to represent it as a single customer usage record. The service infrastructure, however, uses many heterogeneous elements (routers, web servers, DNS) to satisfy the user's need. Although these network elements log their activities, they do not typically identify the user. To create the customer usage record, the activities of several network elements must be correlated with the subscriber identity.
- IP services generate huge volumes of usage records that can overload business support systems unless the data is aggregated.

The HP Internet Usage Manager Software is a mediation platform that is specifically designed to
address these challenges. eIUM collects usage events from network elements and converts them
into information that operational and business support systems (OSS/BSS) can consume, whether
they perform retail or wholesale billing, fraud detection, traffic analysis, network planning, or churn
management.

Voice and Convergent Mediation


With the convergence of voice and data, of wireline and mobile services, ISPs and
telecommunications businesses need to deploy new, flexible infrastructure within their networks
and operational and business support systems. With time-to-market pressures, ROI concerns, and
the potential risk to current production operations, new systems must be implemented in a way
that maximizes total return on assets and insulates current systems and processes from the
heightened workload brought on by introducing new services.

eIUM provides a flexible, scalable platform for deploying the convergent mediation and usage
management solutions required to operate complex service provider infrastructures and maximize
service revenue. eIUM's flexibility enables it to easily complement existing infrastructure and
processes, improving investment returns and minimizing risk. eIUM mediation solutions can be
deployed for wireline and wireless networks to support voice and data services, and to support
prepaid and postpaid billing models.

Mobile service providers offer a wide variety of services, including the following voice and data transmission services to their subscribers over GSM/GPRS and other networks:

- Mobile phone service
- Mobile Internet access
- Multimedia messages, including voice, still images, and video

eIUM is a converged mediation platform. eIUM reads network usage CDRs and performs validation,
call matching, call assembly, record formatting and forwarding to the billing system to properly
invoice subscribers. HP eIUM enables more accurate billing and reduced revenue leakage by more
thoroughly and accurately gathering the raw network usage information and processing it more
effectively and completely.

Introduction to eIUM
HP Internet Usage Manager Software (eIUM) is a usage mediation and management platform for
wireline or wireless networks carrying voice or data services. eIUM employs a scalable, distributed
architecture to collect, aggregate, and correlate usage data from your service infrastructure and
present the data to your business support system for billing or analysis. eIUM can capture data from
all (OSI reference model) network layers.

eIUM also provides a Real-Time Charging Manager that supports prepaid services, hot billing, and
real-time charging by providing a real-time authorization and accounting request and response
mechanism. eIUM can be configured to receive authorization and accounting requests from
application servers and service control points, perform any needed conversion and processing on
the request, query other servers such as rating, user repository, and balance manager, determine if
the request can be granted using your business rules, and send the response back to the application
server or service control point, all in real time. For more information, see the eIUM Real-time Guide.

eIUM Processes
eIUM consists of several important processes:

Collector
The primary constituent of eIUM mediation, a collector is a Java process that can read, normalize, filter, and aggregate usage events or session records according to specific rules. The number of collectors in your deployment depends on the number and variety of data sources in your infrastructure and on the amount and type of preprocessing your business application needs.

Session Server
The session server is the main component that implements the Real-Time Charging Manager. The Charging Manager provides rule-based event processing that enables you to rapidly create new prepaid and hot-billed services. It provides authorization and accounting for individual or bundled service packages that span multiple service offerings and charging methods.

The session server is essentially a container that holds one or more connectors, one or more rule chains, and one or more session stores. You can configure as many session servers as you need, typically at least one for each protocol. In a highly available environment, you might have a second standby session server for each primary session server. For more information, see the eIUM Real-Time Guide.

Configuration Server
The configuration server maintains the configuration of every collector. It stores all of the configuration information in a central configuration store. Collectors retrieve their configuration from this configuration store when they start up. Applications that interact with collectors query the configuration server to get a collector's CORBA address or IOR.

Administration Agent
Present on each host in the deployment as either a Windows service or a UNIX daemon, the admin agent is the first eIUM process to start. The admin agent then starts all other eIUM processes on that host. It also communicates between the configuration server and the collectors.

Database Engine
eIUM stores metadata and processed data in the embedded MySQL database. eIUM also allows you to configure a pre-existing database, such as Oracle, instead.

LaunchPad
LaunchPad is the administrative interface to an eIUM deployment. It enables you to perform most tasks associated with the deployment, from creating and configuring collectors to monitoring deployment status.

Operations Console
The Operations Console is a web-based application designed for eIUM operators who need to monitor and manage an eIUM deployment on a daily basis. Operators do not necessarily need to configure a deployment or perform detailed operations; however, they do need to know the status of all eIUM processes. In contrast to the LaunchPad, the Operations Console is the ideal tool for an operator who needs to monitor and manage an eIUM deployment on a day-to-day basis and ensure that processes are running smoothly. Among its capabilities, it can give you an at-a-glance, global view of the health of your eIUM deployment, let you view and monitor all eIUM processes, and show alarms and problems with eIUM processes immediately. You can also use it to create process groups for easier monitoring and management, create history graphs to show process activity over time, and drill down to process events for troubleshooting. Operators can also perform routine management operations, such as starting and stopping processes and groups, and changing log levels. The Operations Console also provides role-based security to limit operations capabilities for different users. For more information on the Operations Console, see the eIUM Operations Console User Guide.

Correlator
A special-purpose collector, a correlator reads usage data from usage collectors and session data from session collectors. It then combines these two sets of data, matching usage with users. For example, it may combine usage data from Cisco routers with session data from RADIUS sessions to associate network usage with corresponding users.

Report Collector
A special-purpose collector, a report collector obtains processed data from other collectors and stores the reporting-related information in the database.

Web Application Server
A web application executed by the Apache Tomcat servlet engine (from the Apache Software Foundation) embedded in eIUM. The Web Application Server is a Java process that supports the eIUM Operations Console, Reference Data Manager, eIUM Reporting, and eIUM Audit Reporting.

Schedule Server
An optional process, the Schedule Server allows eIUM (or external scripts) to perform specific operations at scheduled intervals.

Management Server
A real-time central server effectively used as a container for the ManagementService and PollingService. The ManagementServer is intended to serve as a back end for all managing clients, such as the Operations Console, Reference Data Manager, and Real-Time Engine. The default management architecture implies a single ManagementServer configured for a distributed eIUM deployment, but eIUM users can also configure multiple ManagementServers for higher availability. Integrating with the Real-Time Engine product extension, it can also monitor and manage applications built on the Real-Time Engine, and other subsystems of the Real-Time Engine (for example, the eIUM repository and the eIUM Load Balancer). The following figure shows the relationship between common eIUM components (including the optional Real-Time Engine and some of its components, such as HP SNAP Studio) and the management platform that has the central management server at its core.

The central management server supports multiple active instances, and there is no shared state between instances. However, a master management server is configured in the system. When the master server crashes, the managed servers connect to the secondary instance; when the master server is recovered, the managed servers reconnect to the master server. The central manager can be scaled by partitioning the managed servers, with each partition of managed servers connected to a central manager pair. In most cases there are two concurrent instances to support high availability, and no partitioning is required if the number of managed servers is small. The process count in most cases is two, but can be 2 * N (where N is the number of partitions).

The central management server includes an execution profile, which is comprised of the micro kernel, common services, and an enterprise service operation manager. Using JMX or RMI, the central management server calls remote interfaces on the managed servers to collect system state information, and invokes management interfaces on managed servers. The Real-Time Engine's HP SNAP Studio application need only communicate with the central management server to monitor the whole eIUM system. For more information on the eIUM Real-Time Engine, SNAP Studio, and the Load Balancer, see their respective user guides (eIUM Real-Time Guide, eIUM Studio User Guide, and eIUM Load Balancer Guide).

Repository Server
A server used as a content repository where you can upload, check in, and check out files so other eIUM processes can use them (for example, with the eIUM Console, Common Codec Framework/eIUM Studio, and the Real-Time Engine product extension).

Studio Server
A component of the Real-Time Engine, the Studio Server supports multiple active instances for scalability, and there is no shared state between instances. The HP SNAP Studio server is linearly scalable, and each instance can act as a backup for the others. In most cases, two instances coexist to support high availability. Typically there are two processes, but the process count can be increased to any number. The Studio Server includes an execution profile, which is comprised of the micro kernel, tools, common services, and the application data model library. The Studio Server is also part of the Real-Time Engine (HP SNAP) Studio subsystem, which is the integrated web-based graphical interface subsystem in the Real-Time Engine. It serves business users who need to manage a product catalog, or technical users who want to use the advanced extension capability in the software. An OMC (Operations and Maintenance Console) for the OA&M function is also integrated in the Studio subsystem. HP SNAP Studio can be used to design and manage real-time applications, and uses RMI to communicate with the eIUM Repository Server to read and deploy business components. For more information, see the eIUM Real-Time Guide and eIUM Studio User Guide.

Collector Hierarchy
In any deployment, there are collectors that read usage and session data directly from the network
elements. These leaf collectors perform the first level of processing close to the network and
service elements, which minimizes network impact and maximizes reliability. They capture and
aggregate the data they receive (or query) and hold it in memory until they have received all the
data available for a specified time interval. They then flush the data to the local disk.

Intermediate collectors request the aggregated data from leaf collectors and perform operations
on these records. Finally, terminal collectors may obtain data from the intermediate collectors and
send processed data to business support systems. The leaf, intermediate, and terminal collectors
together constitute a collector hierarchy.

Raw usage data is thus transformed as it moves up the collector hierarchy into information that
business and operational support systems can consume.

Key Characteristics
eIUM is a comprehensive IP mediation and usage management solution with open interfaces to a
wide range of applications and data sources. eIUM has several strategic advantages compared to
competitive products:

- Scalability. eIUM provides carrier-level scalability across multiple networks and multiple geographical locations, and to millions of subscribers.

Many mediation platforms use a central relational data store. Although relational data stores can handle inputs from multiple clients, bulk inserts or the addition of clients is a costly operation that severely affects performance and scalability.

In contrast, each eIUM collector aggregates usage data in a compact tree data structure and then stores it locally in its own data store. Because only the essential views of aggregated data are moved up the collector hierarchy, network traffic is significantly lower than in centralized architectures.

- Reliability. eIUM collectors run autonomously, so if one collector fails, others can continue to gather and aggregate data. The collector's local data store allows usage data to persist and be retrieved after a failed component has been recovered. eIUM can also be deployed on a high-availability platform such as MC/Serviceguard on HP-UX or Marathon on Windows.

- Extensibility. System integrators and developers can easily extend eIUM functionality by creating new encapsulators, parsers, rules, or application data stores, taking advantage of the extensibility and reusability of eIUM's pure Java implementation. For example, you can quickly develop a new encapsulator to read data from new file formats, data streams, network equipment, or APIs, and plug the encapsulator into an existing collector.

- Manageability. eIUM allows you to distribute collectors throughout the infrastructure and yet manage the deployment centrally. eIUM achieves this by having the distributed collectors obtain configuration information from a central configuration server, which provides a complete view of the deployment. Collectors retain a cached record of their configuration and employ a local agent to manage local collectors.

Each collector supports a notification mechanism to execute a program, script, or operating system command after completing a flush of data to the datastore.

eIUM provides a graphical user interface as well as command-line utilities with which you can administer, configure, and query eIUM components. eIUM is also integrated with HP OpenView, allowing you to monitor and manage the entire mediation system from a central OpenView console.

- Security. The eIUM security framework provides a robust authentication and authorization environment based on industry standards to ensure security compliance. It allows you to leverage your existing IT infrastructure by integrating with your enterprise-wide security infrastructure and your external authentication systems, such as Kerberos, NIS, LDAP, or NTLM. It supports multiple authentication mechanisms and transport protocols transparently, as well as multiple profiles of transport layer security (TLS) combining authentication with optional data encryption. eIUM security also captures all activity information for auditing purposes, and brings flexibility to credentials management by delegating it to a mature, standard authentication system. Security helps users keep one secure set of credentials for access to all applications, and allows eIUM to participate in single sign-on processes and share enterprise-wide identities with other systems.

- Flexibility. eIUM consists of general-purpose, configurable modules that can be easily adapted to meet changing business requirements. For example, a single rule engine can run multiple, parallel aggregation schemes to support the various needs of billing, marketing, and operations management systems.

- Open Standards. eIUM is written in 100% Java and supports open industry standards such as CORBA. It provides open, documented interfaces to allow equipment and application vendors to input data into eIUM or export data out of it. Its plug-in architecture makes it easy for developers to extend its functionality by adding new features or incorporating new equipment and applications.

HP is a founding, charter member of the IPDR (Internet Protocol Data Record) industry forum, which is working on standardizing IP Data Record collection and presentation to various applications.

Collector Components
This section describes what collectors are and the main components that make up a collector. For descriptions of collector templates provided with eIUM, see the eIUM Template Reference. For complete details on all collector components, see the eIUM Component Reference.

A collector consists of three components:

- The encapsulator reads the input data and places the data into a Normalized Metered Event (NME), the data record format used by all eIUM components.
- The aggregator processes the NME data.
- The datastore stores the NME data and formats it for use by other collectors or applications.
[Figure: collector pipeline. Raw usage data enters the encapsulator (reads data), flows to the aggregator (processes data), and then to the datastore (formats and stores data), producing processed business data.]

You can replace any component with another of the same type.
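
The three components behave like pluggable stages in a pipeline. The following sketch is purely conceptual: the interface and class names (Encapsulator, Aggregator, Datastore, Collector) are hypothetical and are not the actual eIUM SDK types. It only illustrates why a component can be swapped for another of the same type.

// Conceptual sketch only: hypothetical interfaces, not the eIUM SDK.
// Models the encapsulator -> aggregator -> datastore pipeline described above.
import java.util.List;
import java.util.Map;

interface Encapsulator {
    // Reads raw usage data and normalizes it into NMEs (modeled here as field maps).
    List<Map<String, Object>> readAndNormalize();
}

interface Aggregator {
    // Applies the configured rules to each inbound NME.
    void process(Map<String, Object> nme);
    // Returns the aggregated NMEs currently held in memory, ready to be flushed.
    List<Map<String, Object>> aggregatedNmes();
}

interface Datastore {
    // Persists a batch of aggregated NMEs in the datastore's output format.
    void store(List<Map<String, Object>> nmes);
}

final class Collector {
    private final Encapsulator encapsulator;
    private final Aggregator aggregator;
    private final Datastore datastore;

    Collector(Encapsulator encapsulator, Aggregator aggregator, Datastore datastore) {
        this.encapsulator = encapsulator;
        this.aggregator = aggregator;
        this.datastore = datastore;
    }

    // One simplified processing cycle: read, aggregate, then flush to the datastore.
    void runOnce() {
        for (Map<String, Object> nme : encapsulator.readAndNormalize()) {
            aggregator.process(nme);
        }
        datastore.store(aggregator.aggregatedNmes());
    }
}

Because each stage depends only on the NME contract between stages, replacing, for example, a file-reading encapsulator with a network-reading one does not affect the aggregator or the datastore.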

Encapsulators can read usage records from many sources:

- Voice switches, such as those from Alcatel, Ericsson, Lucent, Nokia, Nortel, and Siemens
- VoIP devices
- WLAN access point devices and service control points
- AAA servers that use protocols such as Diameter and RADIUS
- IP routers, such as Cisco NetFlow
- Web server log files
- Proxy server log files
- SNMP and RMON MIBs
- Mobile voice and data traffic from GSM/GPRS or CDMA switches

Aggregators can perform many operations on usage records:

- Accumulate fields from many records
- Combine multiple records into one record
- Copy fields from one record to another
- Swap field values within a record
- Add fields to a record
- Perform arithmetic operations on numeric data
- Perform logical operations on binary data
- Filter out records based on many different conditions
- Process records conditionally
- Query LDAP directories, DNS servers, and databases
- Group records based on any field values
Datastores can write processed data to many different outputs:

- The MySQL database included with eIUM
- Third-party relational databases such as Oracle
- IDR (Internet Data Record) files in HTML, XML, or plain text
Refer to the HP eIUM Template Reference for a full list of eIUM collectors that have been
preconfigured for specific data sources. To collect usage records from a data source that is not in
this list, you can assemble a collector from the collector components provided with the product. See
the HP eIUM Component Reference for a full list of collector components and their configuration
details.

Chapter 2

What is a Collector?
This series of topics describes the components of a collector, the fundamental process in an eIUM
mediation system. For complete details on all eIUM components, see the eIUM Component Reference.

Collector Components 30

Encapsulator Overview 31

The Parser and NME Attributes 31

The FlushPolicy, the FlushProcessor and FlushTesters 31

Aggregator Overview 32

Aggregation Tree 32

Multiple Aggregation Schemes 32

Correlation Aggregation Schemes 33

Configuring Aggregation Rules 33

Datastore Overview 33

Datastore Types 33

Datastore Type Summary 35

Changing Datastore Types 35

Database Tables 36

Datastore Table Names and File Names 36

Table Aging and Table Rolling 37

Collector Components
This section describes what collectors are and the main components that make up a collector. For descriptions of the predefined collector templates provided with eIUM, see the eIUM Template Reference. For complete details on all collector components, see the eIUM Component Reference.

A collector consists of three components:

- The encapsulator reads the input data and places the data into a Normalized Metered Event (NME), the data record format used by all eIUM components.
- The aggregator processes the NME data.
- The datastore stores the NME data and formats it for use by other collectors or applications.

[Figure: collector pipeline. Raw usage data enters the encapsulator (reads data), flows to the aggregator (processes data), and then to the datastore (formats and stores data), producing processed business data.]

Encapsulator Overview
The encapsulator component of a collector reads usage data from a network data source, for
example from a file, from a voice or data switch, from an application or from another collector. eIUM
includes many encapsulators, each encapsulator corresponding to a particular type of network data
source, session source, or other data source. Each of the preconfigured collector templates
available when you create a new collector uses one of the encapsulator components described in
the eIUM Component Reference.

All eIUM components use configuration attributes to define their behavior. The encapsulator and parser configuration attributes define how Normalized Metered Events (NMEs) are generated and populated with data from the input source. All configuration attributes are declared in the configuration server in a hierarchical tree structure. The encapsulator must be configured at the following location (node) in the configuration tree:

/deployment/<hostname>/<collectorname>/Encapsulator
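
As a purely illustrative model of this hierarchical layout (not the configuration server's actual API), the sketch below keys component attributes by their node path, the way an encapsulator's attributes live under /deployment/<hostname>/<collectorname>/Encapsulator. The attribute values shown are invented placeholders.

import java.util.HashMap;
import java.util.Map;

// Illustrative model of a hierarchical configuration tree; not the eIUM configuration server API.
final class ConfigTreeSketch {
    private final Map<String, Map<String, String>> attributesByNode = new HashMap<>();

    void set(String nodePath, String attribute, String value) {
        attributesByNode.computeIfAbsent(nodePath, k -> new HashMap<>()).put(attribute, value);
    }

    Map<String, String> node(String nodePath) {
        return attributesByNode.getOrDefault(nodePath, Map.of());
    }

    public static void main(String[] args) {
        ConfigTreeSketch config = new ConfigTreeSketch();
        // Hypothetical attribute name and value under a collector's Encapsulator node.
        config.set("/deployment/host1/voiceCollector/Encapsulator", "ClassName", "SomeEncapsulatorClass");
        System.out.println(config.node("/deployment/host1/voiceCollector/Encapsulator"));
    }
}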

The Parser and NME Attributes


The parser is an important subcomponent of most encapsulators. It parses the event data record received by the encapsulator and creates an NME to be processed by the aggregator. NMEs consist of data fields called NME attributes, such as a usage record's start time, end time, source IP address, destination IP address, number of bytes transferred, user's login ID, account number, and so on. An NME attribute is simply a field in an NME. The parser recognizes fields from the input source and maps (normalizes) each one to a field in a Normalized Metered Event (NME). The NMESchema node in the configuration tree lists all of the preconfigured NME attributes. You can add your own NME attribute names to the NME schema.

Some encapsulators do not need a parser, for example a collector that reads from another
collector. In this case, the parser configuration node does not need to exist. If the parser
configuration entry does exist, it must have the following path name:
/deployment/<hostname>/<collectorname>/Encapsulator/Parser
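
To make the parser's role concrete, the following sketch shows the general idea of normalization: recognizing the fields of one delimited usage record and mapping each one to a named NME attribute. It is a minimal illustration only; the field layout, the attribute names, and the map-based NME are assumptions for the example and do not reflect an actual eIUM parser or its SDK classes.

import java.util.HashMap;
import java.util.Map;

// Minimal illustration of normalization; not an eIUM parser class.
final class DelimitedRecordParserSketch {

    // Assumed input layout: startTime|endTime|srcIp|dstIp|bytes|loginId
    static Map<String, Object> parse(String record) {
        String[] fields = record.split("\\|");
        Map<String, Object> nme = new HashMap<>();
        nme.put("StartTime", fields[0]);                     // hypothetical NME attribute names
        nme.put("EndTime", fields[1]);
        nme.put("SourceIPAddress", fields[2]);
        nme.put("DestIPAddress", fields[3]);
        nme.put("BytesTransferred", Long.parseLong(fields[4]));
        nme.put("LoginId", fields[5]);
        return nme;
    }

    public static void main(String[] args) {
        Map<String, Object> nme =
            parse("2012-09-26T10:00:00|2012-09-26T10:05:00|10.0.0.1|10.0.0.2|34812|jdoe");
        System.out.println(nme);
    }
}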

The FlushPolicy, the FlushProcessor and FlushTesters


The FlushPolicy is another subcomponent of the Encapsulator; it controls when aggregated NMEs are moved from the in-memory aggregation tree to the datastore. When you configure the flush policy, you need to balance the cost of recovery against the amount of aggregation to be achieved. There is only one flush policy per collector. The FlushPolicy is used only when flushes are based on time. If the collector reads and flushes one file at a time, a FlushPolicy is not necessary; instead, set the FlushOnEOF configuration attribute to true.

The FlushProcessor performs the actual flush, writing the NMEs from the aggregation tree to the
datastore. The FlushProcessor can execute a limited set of rules before it flushes each NME to the
datastore. For example, you could filter out NMEs from being written to the datastore using a
FilterRule.

FlushTesters are components that let you test each NME for certain conditions before flushing and
only flush those NMEs that meet the conditions.

For complete details on these components, see the eIUM Component Reference.
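
As a rough illustration of a time-based flush decision, the sketch below checks whether a configured flush interval has elapsed before the aggregation tree would be written to the datastore. The class name and the interval attribute are hypothetical; the real FlushPolicy, FlushProcessor, and FlushTester components and their configuration attributes are documented in the eIUM Component Reference.

// Hypothetical illustration of a time-based flush decision; not an eIUM component.
final class TimeBasedFlushSketch {
    private final long flushIntervalMillis;   // assumed configuration value
    private long lastFlushMillis = System.currentTimeMillis();

    TimeBasedFlushSketch(long flushIntervalMillis) {
        this.flushIntervalMillis = flushIntervalMillis;
    }

    // Returns true when the configured interval has elapsed since the last flush.
    boolean shouldFlush(long nowMillis) {
        return nowMillis - lastFlushMillis >= flushIntervalMillis;
    }

    void markFlushed(long nowMillis) {
        lastFlushMillis = nowMillis;
    }
}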

Aggregator Overview
Each Normalized Metered Event (NME) read by the encapsulator is passed to the aggregator, or rule
engine component. The aggregator implements your business logic, performing any of a wide
variety of possible operations on the NMEs and storing them in an aggregation tree in memory. How
the nodes or the branches of the tree are established depends on the particular match rules and
aggregation rules that you configure. The leaf nodes of each tree are typically aggregated NMEs
which are ready to be stored in the datastore.

The aggregator builds the tree in memory and periodically saves, or flushes, the aggregated NMEs
from the tree to the datastore. How frequently the aggregated NMEs are stored to the datastore
depends on the configurable FlushPolicy. When NMEs are stored, recovery information is also saved
in the datastore to provide for recovery in case the collector unexpectedly stops.

Aggregation Tree
The aggregation scheme is a set of rules that construct an aggregation tree from the inbound
NMEs. The match rules sort the NMEs according to fields in the NME and build the aggregation tree.
For example, rules can match on source IP address, destination IP address and destination port
number. For voice mediation, the rules might sort on MSISDN, IMSI, originating number and
destination number. A new node is added to the tree for every unique combination of these match
attributes.

The aggregation rule processes each NME and either correlates NMEs, combining data fields from two different NMEs into one NME, or aggregates NMEs, combining the same fields from two NMEs into one. For example, the aggregation rule might keep a total of the number of bytes transferred from the source IP address to the destination IP address, or it might accumulate and store the total duration a mobile phone subscriber is connected to the voice network. It also typically stores the earliest start time and latest end time of all the data records.

Simple collectors consist of a single aggregation scheme that contains a single chain of rules. More
complex collectors can have two or more aggregation schemes. Correlators correlate, or combine,
session event records with usage event records. These are described below.
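
The following sketch illustrates, in plain Java, what match rules and an aggregation rule do conceptually: NMEs are grouped by a key built from match attributes (here source IP, destination IP, and destination port), and for each group the byte counts are accumulated while the earliest start time and latest end time are retained. The attribute choices and the map-based structures are assumptions for illustration; real eIUM aggregation schemes are built from configured rule components, not hand-written code like this.

import java.util.HashMap;
import java.util.Map;

// Conceptual illustration of match and aggregation rules; not eIUM rule classes.
final class AggregationTreeSketch {

    static final class AggregatedNme {
        long totalBytes;
        long earliestStart = Long.MAX_VALUE;
        long latestEnd = Long.MIN_VALUE;
    }

    // Leaf nodes keyed by the match attributes (source IP, destination IP, destination port).
    private final Map<String, AggregatedNme> leaves = new HashMap<>();

    void process(String srcIp, String dstIp, int dstPort, long bytes, long start, long end) {
        String key = srcIp + "|" + dstIp + "|" + dstPort;         // match rules build the key
        AggregatedNme leaf = leaves.computeIfAbsent(key, k -> new AggregatedNme());
        leaf.totalBytes += bytes;                                 // aggregation rule accumulates
        leaf.earliestStart = Math.min(leaf.earliestStart, start);
        leaf.latestEnd = Math.max(leaf.latestEnd, end);
    }

    Map<String, AggregatedNme> leaves() {
        return leaves;                                            // flushed to the datastore later
    }
}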

Multiple Aggregation Schemes


If you want to organize inbound NMEs into multiple aggregation trees, you can specify multiple
aggregation schemes in a single collector. In this case, each inbound NME will be processed by each
aggregation scheme, aggregated into a separate tree for each scheme, and stored in separate
tables or files in the datastore. Each aggregation scheme has its own aggregation tree. Each
aggregation tree is flushed to a separate table or file in the datastore because each may aggregate
different fields from the input NMEs. All aggregation trees are flushed at the same time.

Each inbound NME is processed by each scheme sequentially. Each scheme receives the NME output
from the previous scheme. If a scheme modifies the NME, the following scheme receives the
modified NME.

An example of multiple schemes would be validating input records. The first scheme could check the
NME for valid values and mark the NME as valid or invalid. The second scheme could handle all of the
valid NMEs. The third scheme could handle all of the invalid NMEs.
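
A small sketch of the sequential behavior described above, with each scheme receiving the (possibly modified) NME produced by the previous one. The functional representation and the validation check are hypothetical and only illustrate the chaining; actual schemes are configured lists of eIUM rule components.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Hypothetical illustration of sequential aggregation schemes; not eIUM classes.
final class SchemeChainSketch {

    public static void main(String[] args) {
        // Scheme 1 marks the NME valid or invalid; schemes 2 and 3 would then
        // aggregate valid and invalid NMEs into their own trees (omitted here).
        UnaryOperator<Map<String, Object>> validationScheme = nme -> {
            nme.put("Valid", nme.get("BytesTransferred") != null);
            return nme;
        };

        List<UnaryOperator<Map<String, Object>>> schemes = new ArrayList<>();
        schemes.add(validationScheme);

        Map<String, Object> nme = new HashMap<>();
        nme.put("BytesTransferred", 1024L);

        for (UnaryOperator<Map<String, Object>> scheme : schemes) {
            nme = scheme.apply(nme);    // each scheme receives the previous scheme's output
        }
        System.out.println(nme);        // prints the NME with the added Valid flag
    }
}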

Correlation Aggregation Schemes


Correlating usage events with session events is a special aggregation case. In this situation, a single
aggregation scheme (and aggregation tree) is manipulated by two different sets of rules in one rule
scheme. One set of rules sorts session NMEs and the second set of rules locates the appropriate
session in the tree for the inbound usage NMEs. See the correlators described in the eIUM Template
Reference for examples.

Configuring Aggregation Rules


A collector's aggregation scheme is defined in its configuration. The configuration for an aggregator is structured as follows:

- Aggregator: Each collector has exactly one aggregator object. The configuration of this aggregator names one or more aggregation schemes to be used.
- Aggregation Scheme: Each aggregation scheme lists a sequence of rules that control how incoming NMEs are sorted and processed and how the aggregation tree is assembled. Each collector can have one or more aggregation schemes.
- Rules: Aggregation rules are the building blocks for constructing an aggregation scheme. The rules control how the aggregation tree is constructed, and how NMEs are manipulated and stored as they pass through this tree.

Datastore Overview
Each collector has a datastore that contains the collector's NME data and other information.
Depending on the type of datastore, applications can query the NMEs in the datastore. The
datastore also saves recovery information to enable the collector to gracefully recover if it stops
unexpectedly.

The two primary objectives of the datastore component are to:

- Provide persistent storage of all of the NMEs aggregated by the collector and save the collector's state information in case the collector stops unexpectedly. While the collector is running, it periodically flushes (writes to the datastore) all of the NMEs in its in-memory aggregation trees. This periodic flush occurs at regular intervals, as defined by the FlushPolicy. (A rough storage sketch follows this list.)
- Support queries from other collectors, the siuquery command, and external applications, such as billing applications. These queries can obtain usage NMEs based on some query criteria.
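
To make the first objective concrete, here is a rough sketch of flushing a batch of aggregated NMEs into a relational table through plain JDBC, the kind of mechanism a JDBCDatastore-style component relies on. The table name, column layout, connection URL, and credentials are invented for the example; the real datastore components and their attributes are described in the eIUM Component Reference.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;
import java.util.Map;

// Rough JDBC flush sketch with an invented table layout; not the eIUM JDBCDatastore.
final class JdbcFlushSketch {

    static void flush(List<Map<String, Object>> nmes) throws Exception {
        String url = "jdbc:mysql://localhost:3306/usage";            // assumed connection URL
        try (Connection conn = DriverManager.getConnection(url, "ium", "secret")) {
            conn.setAutoCommit(false);
            String sql = "INSERT INTO usage_nme (src_ip, dst_ip, total_bytes) VALUES (?, ?, ?)";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                for (Map<String, Object> nme : nmes) {
                    stmt.setString(1, (String) nme.get("SourceIPAddress"));
                    stmt.setString(2, (String) nme.get("DestIPAddress"));
                    stmt.setLong(3, (Long) nme.get("BytesTransferred"));
                    stmt.addBatch();
                }
                stmt.executeBatch();
            }
            conn.commit();   // the batch becomes visible only after a successful flush
        }
    }
}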

Datastore Types
Two main datastore types are available: JDBCDatastore and FileJDBCDatastore. Other datastore
types are available, but they are specialized versions of one of these. The only difference between
the two main datastore models is that the FileJDBCDatastore uses files to store the NMEs in binary
format and the JDBCDatastore uses a database table. Using files to store NMEs provides a
significant performance increase in the NME storage and retrieval process. See the table below for a description of each datastore type.

Both datastore types use the database to store two tables: the history table and the recovery table.
These tables are described in "Database Tables" (on page 36).

You can use more than one datastore in a collector by using a MuxDatastore. The MuxDatastore is
simply a container of two or more datastores. You specify which output NMEs go to each datastore.

The available datastore types are:

Datastore Types
Datastore Description

JDBCDatastore This type stores all information in a database. NMEs and metadata related to
the stored NMEs (history table and recovery table) are both stored in a
database. A JDBC driver is used to store the data. eIUM uses the MySQL
database, if you have opted to use this product extension.

FileJDBCDatastore This type uses the underlying file system to store the actual NMEs. Metadata
related to these NMEs (history table and recovery table) is stored in the
database. When large volumes of NMEs are being stored, significant
performance advantages can be achieved with this datastore. The trade-off
is that arbitrary queries are not supported. Time range-based queries are
supported.

IDRJDBCDatastore This datastore is a variant of the FileJDBCDatastore that stores the NMEs in
IDR+ (Internet Data Record Plus) format in ASCII files. Five types of output are
supported, differing in the way the NME fields are delimited: by a character
(default is '|'), in HTML table format, in XML table format, in IPDR format and
by specifying a fixed width for each NME field. Use the IDRJDBCDatastore with
fixed width format when you want the collector to generate output in IDR
format.

When you store data in IDR+ format, the expectation is that you are producing
this format so that a non-eIUM application can process the NMEs. For
example, you may produce HTML so that you can easily read a set of NMEs or
you may produce comma-delimited output so that a spreadsheet can easily
import NMEs.

Because IDR+ format is tailored to reading by other processes, an eIUM collector configured with this datastore type will not service queries. If you
want to produce sets of NMEs that can be queried by eIUM applications and
passed to non-eIUM applications, you should configure two datastores under
a MuxDatastore component. The first datastore would be queriable (for
example, FileJDBCDatastore) and the second datastore would store its results
as IDR+.

AccumulatingFileJDBCDatastore This is a special form of the FileJDBCDatastore. It can combine data over a configured interval.

ExternalJDBCDatastore This is logically another form of IDR+. With ExternalJDBCDatastore, rather than specifying ASCII files and how they are populated, you specify a database table and how you want it to be populated. This differs from the basic JDBCDatastore, which stores information to the database in a fixed format that is convenient for eIUM's storing, aging and querying tasks, but is not intended for direct query.

Like IDR+ datastores, this datastore does not support queries through eIUM.

ApplicationJDBCDatastore This is a container class that allows eIUM SDK developers to create new plug-ins to store NMEs in whatever manner is desired. Examples of plug-ins may include interactions with billing systems or other APIs. An eIUM collector can directly store NMEs to a target application. In some situations, this may be more convenient than using an API to create an eIUM application. Note that using the Data Delivery Agent is usually a better alternative to using the ApplicationJDBCDatastore. See "Understanding the Data Delivery Agent" (on page 106) for more information.

NOTE: Session collectors must use either JDBCDatastore or FileJDBCDatastore, so other collectors, typically correlators, can query them. See the eIUM Template Reference for examples of session collectors.

Datastore Type Summary


The following table summarizes the eIUM datastore types:

Datastore Type Summary


Datastore Type                 Supports Complex Queries?  Supports Simple Queries?  Output Format

JDBCDatastore                  Yes                        Yes                       Database

FileJDBCDatastore              No                         Yes                       Binary files

AccumulatingFileJDBCDatastore  No                         Yes                       Binary files

IDRJDBCDatastore               No                         No                        ASCII file formats: delimited files, HTML, XML, IDR, IDR+

ExternalJDBCDatastore          No                         No                        Database with the format defined by the table and column configurations

ApplicationJDBCDatastore       No                         No                        Determined by configured application plug-in

Changing Datastore Types


If you change a collector's datastore type, the data stored in the previous type will not be available
in the new collector. If you decide to change datastore types after you deploy a collector, you should
save any data from the collector that you want to preserve and then clean up the collector storage
prior to changing the configuration. You can clean up the collector storage in the Launchpad or with
the siucleanup command. See the eIUM Administrator's Guide for details on using the Launchpad. See the eIUM Command Reference for details on the siucleanup command.

Database Tables
The datastore component can use the MySQL database (or an external database, such as Oracle) to
provide continuous storage of the NME data. The datastore component stores flushed NMEs in files
or in the database. The datastore also stores history and recovery information in the embedded
database. For each collector the datastore creates three types of tables:

l NME tables or NME files hold the flushed NME data. Multiple tables or files are created to
provide optimum performance and to enable aging of data. Whenever a flush occurs, the NMEs
are copied from the aggregation tree to a database table.
l A history table contains metadata of the NME database tables or files. The history table records
when each flush occurred and in which file or database table the flushed data resides. The
history table facilitates queries of the datastore. It is an internal table that has no external
configuration options.
l A recovery table stores a checkpoint of when the last flush occurred. It is used during recovery if
the collector stops unexpectedly. At each collector flush of the NME data, state information is
saved such as which input file the collector was reading and which line number it was reading.
This state information is also saved during normal collector shut down. Each time the collector
starts up, it starts processing NME data where it left off according to the state information.

Datastore Table Names and File Names


In the JDBCDatastore, the NME table name is a concatenation of the name of the collector, the name of the aggregation scheme, and a unique number that is generated based on the table rolling algorithm, which is described in "Table Aging and Table Rolling" (on page 37).

In the file-based datastores, the names of the files used to store the NMEs are defined by the
datastore's configuration attributes. These file names can include a time stamp, the time zone, NME
data, the collector name, the aggregation scheme name, the input source file name or other
information.

The columns of the NME tables and the NME files are typically just the NME attribute names, defined using the Attributes configuration attribute configured in the last rule, which must be an AggregationRule or StoreRule.

Table Names Example 1


For a collector named Voice01, which has one aggregation scheme named UsageScheme0 and uses
a FileJDBCDatastore (that is, NMEs are stored in binary files, not in the database), the following two
tables will be created in the database:

l VOICE01_RECOVERY is the recovery table, which is used to retain the collector state, if the
collector ever stops unexpectedly. This facilitates restarting the collector from the point at which
it stopped.
l VOICE01_USAGESCHEME0_HISTORY is the history table, which contains information about the
times of the NMEs. This table facilitates queries.

Table Names Example 2


For a collector named Correlator1, which has two aggregation schemes named UsageScheme0 and
SessionScheme1, and uses a JDBCDatastore (that is, NMEs are stored in the database), the following
tables will be created in the database:

l CORRELATOR1_RECOVERY is the recovery table, which is used to retain the collector state, if the
collector ever stops unexpectedly. This facilitates restarting the collector from the point at which
it stopped.
l CORRELATOR1_USAGESCHEME0_HISTORY is the history table, which contains information about
the times of the NMEs from UsageScheme0. This table facilitates queries.
l CORRELATOR1_SESSIONSCHEME1_HISTORY is the history table, which contains information about
the times of the NMEs from SessionScheme1. This table facilitates queries.
l CORRELATOR1_USAGESCHEME0_1 is the first table that contains NMEs from UsageScheme0.
l When CORRELATOR1_USAGESCHEME0_1 is full, CORRELATOR1_USAGESCHEME0_2 is created.
Additional tables are created as needed. These tables are removed according to the datastore's aging policy.
l CORRELATOR1_SESSIONSCHEME1_1 is the first table that contains NMEs from SessionScheme1.
l When CORRELATOR1_SESSIONSCHEME1_1 is full, CORRELATOR1_SESSIONSCHEME1_2 is created.
Additional tables are created as needed. These tables are removed according to the datastore's aging policy.

Table Aging and Table Rolling


Table aging controls how much total NME data is stored in the datastore and when data is deleted
from the datastore. Table aging is controlled by the TableAgeLimit configuration attribute.

Table aging allows you to specify when to delete the NME tables or files, based on the TableAgeLimit configuration attribute. This attribute defines the life span of the NME data that is kept in the database. NMEs whose end time attribute value is older, by the duration specified in TableAgeLimit, than that of the NMEs currently being processed are removed from the database. All NMEs must contain an EndTime NME attribute.

The TableAgeLimit is specified in days, hours, minutes, and seconds. For example, 07d00h00m00s means 7 days, and 02d12h30m00s means 2 days, 12 hours, and 30 minutes. The number of digits for each field is two or more and is padded with zeros, as in these examples.

A longer TableAgeLimit keeps NME data for a longer time, which increases the disk space used by NME data but also gives eIUM applications a longer time span of NME data to work with. The default TableAgeLimit is 7 days.

NOTE: Be sure you back up any needed data before the aging period deletes the data from the
collector. See the eIUM Administrator's Guide for details on backing up eIUM collectors and
components.

Table rolling determines how much data is stored in each table and when a new table is created.
Table rolling is controlled by the TableRollLimit configuration attribute. The TableRollLimit is
specified in days, hours, minutes, and seconds, as it is for the TableAgeLimit.
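
For example, a FileJDBCDatastore that keeps NME data for seven days and rolls to a new table or file each day could carry the following attributes. The collector path is illustrative; the values follow the sample configuration shown in "Sample Configuration" (on page 50):

[/deployment/Host01/Voice01/Datastore]
ClassName=FileJDBCDatastore
TableAgeLimit=7d0h0m0s
TableRollLimit=1d0h0m0s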

The datastore uses multiple tables and files to store NMEs. The JDBCDatastore uses a table rolling
algorithm that is based on the time range of the NMEs. The same table is used to store the NMEs
from flush to flush. At collector startup and at each flush time, the JDBCDatastore decides whether
a new table needs to be created to store the NMEs.

Besides facilitating aging of the NME data, having multiple NME tables or files also improves the
query performance by allowing the datastore to query multiple tables or files with the optimum
amount of data, instead of querying a single table or file with a large amount of data.

Table rolling differs slightly in each of the two main datastore types.

l For the JDBCDatastore, the amount of data stored in each table is based on the TableRollLimit
configuration attribute. This attribute defines the duration of the NMEs that should be kept in
each NME data table. At collector startup and at each flush time, the datastore calculates the
difference between the current system time and the StartTime of the current table. If the
difference equals or exceeds the TableRollLimit configuration attribute, the datastore starts a
new table.
l For the FileJDBCDatastore, a new file is created to hold the next set of NMEs whenever a flush
occurs.
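
For example, with TableRollLimit=1d0h0m0s in a JDBCDatastore, if the current table's StartTime is 00:00 on Monday, the first flush that occurs at or after 00:00 on Tuesday causes the datastore to start a new NME table; flushes before that time continue to write to the same table.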



Chapter 3

Designing an eIUM Deployment


Designing an HP Internet Usage Manager (eIUM) deployment can be a complex and involved process
depending on:

l What Business Applications eIUM provides output data to.
l What Input Data Sources provide input data to eIUM.
l What Processing or Business Logic eIUM performs on the data.
This topic series provides some guidelines on how to design an eIUM deployment.

Sample Deployment 40

Determine the Output Applications and Data Requirements 41

Determine the Input Data Sources 41

Determine the Data Processing 42

Design a Real-Time Prepaid Charging Solution 42

Sample Deployment
The following diagram shows a sample deployment labeled with the three major areas of focus for
an eIUM deployment design process:

l Input Data Sources - This example shows two input data sources, Event Logs from a network
switch device, for example a voice network switch, and a session information source which
contains information about subscribers.
l Data Processing with Business Rules - Collectors read and process the input data. This example
shows four collectors:
n A Leaf Collector reads data from the switch, performs data validation and writes invalid event records to a separate file.
n A Reprocessing Collector reads corrected event records.
n A Correlator correlates or combines records from the Leaf Collector, Reprocessing Collector
and the session source.
n An Output Generator Collector formats the output records for the business application to
consume.
l Output Business Information sent to Business Applications - Formatted business information is
sent to one or more business applications. In this case the Output Generator Collector generates
files that are read by a business application such as a Billing System.

The following topics describe an approach to designing the collectors that make up an eIUM
deployment that implements a mediation system.

Determine the Output Applications and Data Requirements


1. Determine what applications will receive the data from eIUM. Many possible business
applications can consume data from eIUM, such as billing applications, rating engines, business
intelligence, service activation and provisioning, customer care, data warehousing, fraud
detection and many other business support systems.
2. Determine the specific data fields required by the output applications. For example, time
duration of sessions, subscriber and account details, specific services used, names and types of
files downloaded, phone numbers accessed, web sites and IP addresses accessed, number of
bytes downloaded, messages sent and any other specific information.
3. Determine the data format required by the output applications. For example, text files, binary
files, a database, or other formats.
4. Determine how to transfer the data to the output applications. For example, FTP, local files, a
database, or other transfer methods.

Determine the Input Data Sources


1. Determine where the input data comes from. For example, from voice switches, IP data
switches, application log files, databases, LDAP directories or other sources.
2. Determine how to transfer the input data to eIUM. For example, from local files, over FTP, FTAM,
GTP, querying a database or querying an LDAP directory.
3. Determine the specific data fields available in the input data. For example, file names,
subscriber identifiers like account numbers and user login names, phone numbers, IP addresses
and switches traversed, session durations, services used, and other information.
4. If you plan to detect and correct invalid data, determine where you will examine the data, how
you will detect errors and how you will reprocess the corrected data.
5. Design the level one collectors to read the input data and generate NMEs.

Determine the Data Processing


Once you know the required output data and the available input data you must determine how the
input data must be transformed into the data required by the consuming application.

1. Map the input data fields to the required output data fields.
2. Determine what transformations need to happen to the input data. For example, summing data
such as the number of minutes connected to the network, the number of bytes downloaded,
the cost of each transaction, or correlating usage data and session data.
3. Determine what data must be validated. If not already designed, determine how you will
examine, detect and handle invalid data. For example, discard invalid data, park invalid data in
separate files for later manual review, correct and reprocess the repaired data.
4. Partition the data transformations among collectors and among rule chains in each collector.
5. Design the collectors and rule chains that transform the input data to the appropriate output
data and formats and handle data validation and reprocessing.
6. Build each collector and test with sample data. Build the next collector in the chain and test. Use
actual data for sizing and performance testing.

Design a Real-Time Prepaid Charging Solution


If you provide prepaid services and you need real-time authorization and charging, you can use the
eIUM Real-Time Charging Manager to provide real-time service authorization. This section provides
guidelines on designing a real-time charging solution. See also the eIUM Real-time Guide.

1. Determine the services that require authorization. For example, mobile phone access, wireless
network access or premium services.
2. Determine the protocol the services use for authorization. For example, Diameter or RADIUS.
3. Determine the type of requests or messages that will be received by the charging manager. For
example, network connect and disconnect requests, session start, continue and stop messages,
or service authorization requests.
4. Determine what other applications need to be queried for information to decide whether a
particular subscriber is authorized to use a particular service. For example, a user repository, a
balance manager, a rating engine, or a database.
5. Design the rule chains that will receive each type of incoming request or message, the
processing that needs to take place, and the response that needs to be constructed and sent
back to the client.
6. Build and test a session server and related components. See the eIUM Real-time Guide for more
information.



Chapter 4

Configuring eIUM
eIUM is a component-based system that includes many factory-supplied components for collecting,
processing (aggregation, correlation, adornment, lookups, and so forth), storing, and delivering
data. Its flexible component architecture allows eIUM to support service infrastructures with a
variety of data sources, business processing rules, and downstream applications. You specify the components in your deployment and the components' behavior through configuration. eIUM enables quicker configuration by providing templates and wizards for many input data sources and eIUM components. The following topics show you how to configure eIUM using these tools.

Configuration and the eIUM Deployment Hierarchy 44

The Complete Configuration Hierarchy 46

Collector Configuration Nodes 48

Configuration Attributes 49

The ClassName Attribute 50

Sample Configuration 50

Modifying the Configuration 52

Modifying the Configuration using Launchpad 52

Modifying the Configuration from the Command Line 52

Modifying the Configuration with configmanager 53

Linked Collectors 53

Unlinked Collectors 53

Linked Collectors 54

Advantages of Linked Collectors 54

Creating Linked Collectors 54

NMEs and the NME Schema 56

NME Attribute Types 57

Add NME Attributes to the NME Schema 57

Structured NMEs and the Structured NME Schema 57

Configuration and the eIUM Deployment Hierarchy


The behavior of every eIUM collector or server is determined by its configuration located in a central
configuration store controlled by the configuration server. Configuration information is stored
hierarchically, in the form of a tree similar to file system directory structures. Every eIUM
component or server is represented by a node in the configuration tree.

The /deployment configuration node contains the main configuration for eIUM collectors and
servers. To view the deployment hierarchy run the Launchpad tool.

1. On Windows, select Start -> Programs -> Internet Usage Manager -> SIU -> Launchpad.
2. On UNIX, execute the /opt/bin/SIU/launchpad command.
The Deployment tree is displayed in the left pane of the Launchpad as shown below. See the eIUM Administrator's Guide for more information on the Launchpad tool.

Under the /deployment configuration node, each host on which eIUM is installed is represented by a
host node [/deployment/hostname]. Under each host node, each collector running on that host is
represented by a collector node [/deployment/hostname/collector-name]. Other servers running on
the host, such as the ConfigServer which manages the configuration, WebAppServer (Web
Application Server) which generates reports from collectors, the PolicyServer used in dynamic SNMP
data collection, and the JBossServer used in consolidated logging are represented by corresponding
subnodes.

The configuration hierarchy in the above figure, for example, contains one sub-node, Host01,
representing the only host on which eIUM is installed in this deployment. On this host are several
different collectors (Correlator, FC01, FC02, Reporting, SessionCollector, UsageCollector, VoiceA and
VoiceB), a file collection service (FCS), a file distribution service (FDS) and three servers
(ConfigServer, JBossServer and ReportServer), each represented by a subnode under the host
subnode, Host01.

NOTE: Collector and server names must be unique across your entire deployment, even across
different hosts. These names must not start with a number and must not contain dashes or
spaces.

Do not name any collector security because it is a reserved name.

If you are using an Oracle database, the sum of the number of characters in the collector name
and scheme name must not exceed 20 characters.

The Complete Configuration Hierarchy


The /deployment node of the configuration tree is only one node in the entire tree. To view the
entire configuration hierarchy of your eIUM deployment use the Deployment Editor. To run the
Deployment Editor, follow these steps:

1. Start the Launchpad.
2. Select Tools -> Deployment Editor.

This displays the Deployment Editor which shows the entire configuration hierarchy.

The configuration tree organizes configuration items like the folder or directory hierarchy organizes
files in the Windows or UNIX file system. Each node in the configuration tree stores the
configuration entries for a single eIUM component or process. The following list describes each top
level configuration node in detail.

Deployment Editor Configuration Nodes


Node Description

CollectionPolicy Contains configurations used by the Policy Server to collect data from
dynamic SNMP network elements.

collectors Contains the CORBA IOR addresses for each collector in the deployment. If
security is enabled, this node also contains the secure CORBA addresses for
each collector.

components Contains the configurations of all the preconfigured encapsulators, aggregation schemes, and datastores that are shipped with eIUM. This node also contains a list of required elements for the various types of collectors.

ConfigServer Represents the Configuration Server, which manages the configuration store by responding to administrative requests and queries. A collector or a server queries the configuration server to obtain its configuration at startup. Other applications query the configuration server to get the IOR of the object with which they want to communicate.

In CORBA, every object has a unique Interoperable Object Reference (IOR) that can be used to locate the object. The IOR contains information such as the host on which the object resides, the TCP/IP port on which the object is listening for requests, and an object key that distinguishes the object from other objects listening on that port.

NOTE: Every deployment has a single configuration server.

deployment Represents the deployment hierarchy, which contains configurations of all the hosts and the collectors and servers running on those hosts. See "Configuration and the eIUM Deployment Hierarchy" (on page 44) for more details.

export Contains information for external processes such as logging for OpenView
Operations.

hints Contains information that Launchpad uses to validate and display configurations.

License Contains your eIUM license information.

ManagementServers Represents the Management Server. For more information, see the
com.hp.usage.managementservice.ManagementService component in the
eIUM Component Reference.

NMESchema Contains the list of all traditional NME attributes. Collectors process usage
data by converting raw usage or session data into Normalized Metered
Events, or NMEs. An NME is an eIUM record whose fields correspond to the
fields in the input record for a usage event. Each field of data in an NME is
an NME attribute. eIUM components process NMEs. See "NMEs and the NME
Schema" (on page 56). See also SNMESchema below.

PolicyServer Contains the configuration of the Policy Server, which collects data from
dynamic SNMP network elements.


SNMESchema Contains the list of all structured NME attributes and types. Collectors
process usage data by converting raw usage or session data into
Normalized Metered Events, or NMEs. An NME is an eIUM record whose
fields correspond to the fields in the input record for a usage event. Each
field of data in an NME is an NME attribute. eIUM components process
NMEs. See "Using Structured NMEs" (on page 118) for more information.

templates Contains configurations of collectors, session servers and other components designed for specific data sources and business purposes. See the eIUM Template Reference for details.

webApplications Contains configurations used by eIUM web-based applications such as Reporting, Audit Reports, and the Operations Console.

Collector Configuration Nodes


Each collector's configuration is under the host node for the host system on which the collector runs. The configuration node is the collector's name. For example, the configurations for the collector named Voice25 running on the eIUM host named SYS10 would reside at the configuration node [/deployment/SYS10/Voice25].

Every collector node contains the configurations for all the components that comprise that
collector. Under the collector node, each collector has an Encapsulator node, an Aggregator node
and a Datastore node. Each of these nodes contains configuration attributes and configuration
subnodes.

[/deployment/SYS10/Voice25]
# Configurations for the collector named Voice25 running on SYS10.

[/deployment/SYS10/Voice25/Encapsulator]
# Configurations for this collector's encapsulator.

[/deployment/SYS10/Voice25/Aggregator]
# Configurations for this collector's Aggregator.

[/deployment/SYS10/Voice25/Datastore]
# Configurations for this collector's Datastore.

In the following figure, for example, the deployment contains a host called Host01 running a
collector called SessionCollector. The Encapsulator node of this collector is expanded to show the
Encapsulator configuration attributes and the FileRollPolicy, FlushPolicy and Parser subnodes. The
Parser subnode is expanded to show its configuration attributes.

Configuration Attributes
Each node of the configuration tree can contain configuration attributes, which are Name=Value
pairs that define the behavior of that particular component. For example, in the figure in "Collector
Configuration Nodes" (on page 48), the value of the UpdateTime configuration attribute is true. This
attribute is part of the Encapsulator subnode. All configuration attributes for each component are described in the eIUM Component Reference (click the "Attributes" index link to view all attributes alphabetically).

A configuration attribute can be either single-valued or multi-valued. A single-valued attribute can occur only once, while a multi-valued attribute can occur multiple times. Each value is an ASCII string.

For example, the Encapsulator configuration in the figure in "Collector Configuration Nodes" (on
page 48) shows the single-valued attribute FieldDelimiter with the value |. This attribute is under
the Parser subnode. The configuration attribute named Attributes is a multi-valued attribute with the following values:

EndTime
SrcIPStart
SrcIPEnd
LoginID
AcctNum
AcctStatusType

Each value of a multi-valued attribute is represented visually in the Deployment Editor as a child of
the attribute name.
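
For example, in the text form of the configuration (as produced by the saveconfig command), each value of a multi-valued attribute appears on its own line with the same attribute name:

Attributes=EndTime
Attributes=SrcIPStart
Attributes=SrcIPEnd
Attributes=LoginID
Attributes=AcctNum
Attributes=AcctStatusType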

The ClassName Attribute


The most important configuration attribute is the ClassName attribute, because the value of the
ClassName attribute determines which eIUM component will operate as part of the collector.

For example, the following shows part of the Encapsulator node from "Collector Configuration
Nodes" (on page 48) with the ClassName attribute set to LogFileEncapsulator.

[/deployment/Host01/SessionCollector/Encapsulator]
ClassName=LogFileEncapsulator
Description=Processes event records from a file-based or record-based
data source.
UpdateTime=true

All components are described in detail in the eIUM Component Reference, which is organized by the
component name as used in the ClassName attribute. Components are also organized by categories,
attributes, and packages in the eIUM Component Reference. Each component in the eIUM Component
Reference also describes all its configuration attributes. Each configuration attribute provides a way
to customize the behavior of the component. See "Configuration Attributes" (on page 49).

Sample Configuration
Part of the configuration for a collector named DemoCollector01 is shown below. Notice that it is in
the configuration node [/deployment/Host01/DemoCollector01] meaning that this collector will run
on the eIUM system named Host01. The next level collector configuration nodes correspond to the
primary collector components Encapsulator, Aggregator and Datastore. The order of these nodes in
a text file (as produced by the saveconfig command) is irrelevant. What is relevant and what
determines how a collector behaves are the node names and the names of the configuration
attributes following each configuration node name, in particular the ClassName attribute as
described above. See also "Collector Configuration Nodes" (on page 48) and "Configuration
Attributes" (on page 49).

[/deployment/Host01/DemoCollector01]
AdminInterface=
ClassName=com.hp.siu.adminagent.procmgr.CollectorProcess
Description=Demo collector.
QueryInterface=

[/deployment/Host01/DemoCollector01/Aggregator]
ClassName=Aggregator
Description=Aggregator consists of a single IPUsage aggregation scheme
SchemeNames=IPUsage

[/deployment/Host01/DemoCollector01/Datastore]
ClassName=FileJDBCDatastore
Database=jdbc:mysql://localhost:3306/siu20
DatabaseClass=com.mysql.jdbc.Driver
Description=Stores event records in binary format to a file
Password=siu20
TableAgeLimit=7d0h0m0s
TableRollLimit=1d0h0m0s
User=siu20

[/deployment/Host01/DemoCollector01/Encapsulator]
ClassName=DemoEncapsulator
Description=Generates a stream of pseudo-random NMEs, useful for
testing purposes
JitterTime=11
MaxNMERate=10
NMEFields=StartTime,,0,20,*10
NMEFields=EndTime,,20,30,*10
NMEFields=SrcIP,,15.0.0.0,15.0.0.19
NMEFields=DstIP,, 17.0.0.0, 17.0.0.9
NMEFields=NumBytes64,800,100,5000,+100

[/deployment/Host01/DemoCollector01/Encapsulator/FlushPolicy]
ClassName=TimeFlushPolicy
ClockInterval=15m
Description=Flushes 15 minute worth of event records to the datastore

The following example shows the complete Aggregator configuration entry for this collector. This
example shows the collector's only aggregation scheme, or rule scheme. Collectors can have one or more rule schemes. Notice that all the configurations are under the Aggregator node. The Aggregator node contains three attributes: ClassName, Description and SchemeNames.
The ClassName attribute specifies which eIUM component is operating: the Aggregator component.
The SchemeNames attribute names all the rule schemes, in this case just one: IPUsage. IPUsage is
another configuration subnode under the Aggregator node.

The IPUsage node uses an AggregationScheme component and specifies two rules in the rule
scheme with the RuleNames attribute. The RuleNames attribute is a multi-valued attribute. That is,
it has two attribute values: GroupNMEs and Aggregate. These values name the configuration
subnodes where the actual rules to be executed are defined. It is important to realize that the order
of the RuleNames attributes specifies the order of execution of the rules, not the order of the
information in the text file. See the AggregationScheme component in the eIUM Component
Reference for complete details on this and all components and their configuration attributes.

[/deployment/Host01/DemoCollector01/Aggregator]
ClassName=Aggregator
Description=Aggregator consists of a single IPUsage aggregation scheme
SchemeNames=IPUsage

[/deployment/Host01/DemoCollector01/Aggregator/IPUsage]
ClassName=AggregationScheme
Description=Aggregates usage event records grouped by Source and
Destination IP
RuleNames=GroupNMEs
RuleNames=Aggregate

[/deployment/Host01/DemoCollector01/Aggregator/IPUsage/Aggregate]
ClassName=AggregationRule
Description=Retains minimum StartTime and maximum Endtime and adds
usage attributes
Attributes=SrcIP
Attributes=DstIP
Attributes=NumBytes64,add
Attributes=StartTime,min
Attributes=EndTime,max

[/deployment/Host01/DemoCollector01/Aggregator/IPUsage/GroupNMEs]
ClassName=HashMatchRule
Description=Groups event records by Source and Destination IP
MatchKeyNames=SrcIP
MatchKeyNames=DstIP

To see the complete configuration of this collector, create a collector from the Demo - Simple collector template in the Launchpad. After you create the collector, you can view and edit its configuration in the Launchpad, or you can save its configuration to a text file using the File -> Export Configurations menu item. Alternatively, you can view the Demo - Simple collector template itself in the Launchpad.

Modifying the Configuration


You can modify the configuration in three ways:

l Use the eIUM Launchpad.
l Use the saveconfig and loadconfig commands and a text editor.
l Use the configmanager utility.

Modifying the Configuration using Launchpad


The eIUM Launchpad enables you to modify the configuration of a component quickly and
accurately. Refer to the eIUM Administrator's Guide or the Launchpad online help for instructions on
using the Launchpad.

When you modify the configuration of a collector or process, the changes are written to the
configuration store but the collector is not aware of these changes. You must restart the collector
so the collector reads and uses the new configuration.

Modifying the Configuration from the Command Line


Although the Launchpad is the recommended way to modify your configuration, you can also use the
loadconfig and saveconfig commands. The saveconfig command copies the configuration
information into a text file that you can then view and edit with a text editor. The loadconfig
command copies the configuration information from a text file into the configuration store. See the
eIUM Command Reference for instructions on using the loadconfig and saveconfig commands. See
"Sample Configuration" (on page 50) for example output of the saveconfig command.

NOTE: When loading configurations with the Launchpad or the loadconfig command, make sure
you load valid configurations as the configurations are simply copied into the configuration
server.

The text file created by saveconfig contains two types of elements:

l Configuration Nodes - Represents a location in the configuration hierarchy. The entry is wrapped
in square brackets and shows the path to the configuration node. For example, below is the
configuration entry for the Encapsulator node of the collector named Collector1 on the eIUM
host named host01. This entry contains the configuration attributes that determine the
behavior of this collector's Encapsulator.


[/deployment/host01/Collector1/Encapsulator]

l Configuration Attributes - Represents a Name=Value pair that determines the behavior of that
particular component. Each configuration attribute has the following format:
<attribute name>=<attribute value>

For example, the following attribute represents a configuration attribute named InitialInputFile
that has a value of fixedip11092000.log.
InitialInputFile=fixedip11092000.log

A configuration attribute can be either single-valued or multi-valued. A single-valued attribute can occur only once, while a multi-valued attribute can occur multiple times, where each line has the same attribute name with a different value.

NOTE: Configuration nodes and configuration attribute names are case sensitive.

Modifying the Configuration with configmanager


You can use the eIUM Console-based configmanager command to load and save configurations from the configuration server, and to find the changes made to a configuration file since it was last saved. The utility allows you to track changes to your configuration file and avoid overwriting other eIUM users' changes on the configuration server. You can also use script files to get and make point changes on the configuration server, such as obtaining and setting the value of a configuration attribute (which can be useful when the configuration tree is too large and takes too long to load in the Launchpad for editing). See the eIUM Administrator's Guide and the eIUM Command Reference for more information on the configmanager utility.

Linked Collectors
Linked collectors are collectors that share a common configuration. Linked collectors greatly
simplify the creation and management of multiple collectors that have nearly identical
configurations.

Unlinked Collectors
Linked collectors are best described by contrasting them with ordinary collectors or unlinked
collectors. In unlinked collectors, each collector specifies all of its own configurations. When you
need many similar collectors, you must duplicate all the common configurations, changing only
those configurations that are unique for each individual collector. The following diagram represents
four nearly identical collectors. They only differ in the name of the collector (as specified by the
configuration node) and the IPAddress attribute.

Linked Collectors
When you have multiple nearly identical collectors, you can share configurations by using linked
collectors. Linked collectors link to a master template collector and obtain their configurations from
the master template. Each individual collector can override any of the configurations from the
master. The master defines all the common configuration attributes. Each actual collector uses a
Link attribute that refers to the master template configuration.

The example below defines a master template collector at the configuration node [/templates/custom/DemoMaster1]. Each individual collector has a Link attribute that refers to the master. Then each individual collector defines only the IPAddress attribute, which overrides the attribute value copied from the master template.
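
A minimal sketch of this arrangement follows. The host name, collector names and IPAddress values are illustrative, and the IPAddress attribute is shown at the collector node only for simplicity; in practice it belongs to whichever component uses it.

[/templates/custom/DemoMaster1]
ClassName=com.hp.siu.adminagent.procmgr.CollectorProcess
Description=Master template with all the common configurations.
...

[/deployment/Host01/CollectorA]
Link=/templates/custom/DemoMaster1
IPAddress=192.0.2.1

[/deployment/Host01/CollectorB]
Link=/templates/custom/DemoMaster1
IPAddress=192.0.2.2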

Advantages of Linked Collectors


l Fewer configurations to manage. All the common configurations are kept in one place.
l Simple to change all linked collectors -- just change the master and all collectors linked to the
master are changed. Restart the collectors to make the change take effect.
l Faster creation of similar collectors.

Creating Linked Collectors


To create a linked collector, first create a master collector template, and then create a linked
collector that links to the master template. The detailed steps are described in the following topics.

Create a Master Collector Template


1. Create a new collector if you don't already have one. (In the Launchpad, select the host and click the New Collector button.)
2. Select an existing collector in the Launchpad.
3. Select the menu item Actions: Save as Template. Or right click and select the menu item Save
as Template.

4. Provide a name for the template and click OK. This creates a new subnode in the configuration
hierarchy /templates/custom and places the new template there.
For example, if you created a master collector template named MasterVoice01, the
configurations for this collector would be at the following configuration node:
[/templates/custom/MasterVoice01]

Creating a Linked Collector from the Master Template


Once you have a master collector template, create a linked collector that uses the master template
using the steps below.

1. Select the menu item Tools ->Deployment Editor. This brings up a window showing the entire
configuration hierarchy.
2. Double click the deployment node to open it and display the eIUM hosts.
3. Select the host where you want the collector to run.
4. Right click on the host and select the menu item Add Sub-node.
5. Type the name of the collector you want to create.
6. Select the new node you just created.
7. Right click on the node and select the menu item Add Attribute.
8. Type in the attribute name Link and click OK.
9. Type in the configuration path to the template: /templates/custom/<template name>.
For example, if you named the template VoiceMaster, you would type in
/templates/custom/VoiceMaster.
10. Click OK in the Deployment Editor window.
The new collector is displayed in the Launchpad window. You can perform all operations on
linked collectors the same as any other collector.
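
Following these steps produces a collector node whose own configuration consists essentially of the Link attribute, for example (placeholders are shown in angle brackets):

[/deployment/<hostname>/<new collector name>]
Link=/templates/custom/VoiceMaster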

Creating Additional Linked Collectors


Once you have created one linked collector, you can quickly create more linked collectors with the
Deployment Editor by copying the first collector, pasting it under the appropriate host node and
renaming the node.

Alternatively you could create a template of the linked collectors using the Actions -> Save as
Template menu item and create additional linked collectors from the template.

Example Linked Collector


The example below shows a linked collector named VoiceA that is linked to a master collector
named MasterVoice01.

[/deployment/hostname/VoiceA]
Description= Linked collector reads CDRs from voice switch A.
Link=/templates/custom/MasterVoice01

The example below shows part of the configuration for the master collector named MasterVoice01.
Note that this configuration is the complete configuration for a collector. The only difference is that
it appears under the /templates/custom configuration node rather than under a
/deployment/hostname node.

[/templates/custom/MasterVoice01]
AdminInterface=
ClassName=com.hp.siu.adminagent.procmgr.CollectorProcess
Description=Master collector for multiple linked collectors.
QueryInterface=

[/templates/custom/MasterVoice01/Aggregator]
ClassName=Aggregator
Description=Aggregator consists of a single aggregation scheme.
SchemeNames=ValidateCDRs
SchemeNames=FilterCDRs
SchemeNames=RateCDRs
SchemeNames=AggregateCDRs
...

NMEs and the NME Schema


Collectors process usage information by first converting raw usage or session data into a
Normalized Metered Event (NME). An NME is a record that is composed of fields that correspond to
the various fields in a record of some event. Each field of data in an NME is an NME attribute. Each
NME attribute must have a name and a type. The type depends on what kind of data is being
processed. The following diagram shows a sample NME. The NME attributes are named in the top
row and the data types are in the bottom row:

As data is read in by a collectors encapsulator, the parser extracts each data field and places the
data value into the appropriate NME attribute in the NME. The following diagram shows a sample
NME with data values:

NOTE: eIUM supports traditional NMEs, which are flat data structures, and structured NMEs which
are hierarchical data structures. This section describes traditional flat NMEs. For details on
structured NMEs, see "Using Structured NMEs" (on page 118).

The NME attributes that compose the NME for a particular collector are usually defined in the parser
component. See the parsers described in the eIUM Component Reference for complete details.

The NME Schema defines all the NME attributes you can use in NMEs. That is, the NME schema
defines the names of all NME attributes and assigns a type to each name. Every NME attribute must
be defined in the NME schema before it can be used in any collector or component. For example, the
following shows a few NME attributes defined by the NME schema:

[/NMESchema]
Attributes=AccountCode,com.hp.siu.utils.StringAttribute
Attributes=AccountingDataAge,com.hp.siu.utils.IntegerAttribute
Attributes=AcctNum,com.hp.siu.utils.StringAttribute
Attributes=AcctStatusType,com.hp.siu.utils.StringAttribute
Attributes=ActiveTime,com.hp.siu.utils.IntegerAttribute
Attributes=AdditionalCallIndicators,com.hp.siu.utils.BinaryAttribute
Attributes=AdditionalDigitsDialed,com.hp.siu.utils.StringAttribute
...

NME Attribute Types


The following table lists all the data types you can use when defining NME attributes in the NME
schema.

NME Attributes Types Used in the NME Schema


NME Field Type Description

StringAttribute ASCII text data

UnicodeStringAttribute Unicode text data

IntegerAttribute 32-bit signed integer

IPAddrAttribute IP address

TimeAttribute Date and time

LongAttribute 64-bit signed integer

FloatAttribute 32-bit single precision floating point number

DoubleAttribute 64-bit double precision floating point number

BinaryAttribute Byte array of binary data

UUIDAttribute Universal Unique Identifier, a 16-byte binary array structure, which can be
used to uniquely tag a set of NMEs across time or systems. Useful for the
IPDR standard and data auditing, it is based on the BinaryAttribute. For
more information on UUIDs, see http://www.ipdr.org.

Add NME Attributes to the NME Schema


You can add your own NME attributes to the NME schema in any of the following three ways:

l Use the Launchpad Tools -> NME Schema Editor menu item to run the Schema Editor and use the Add button to add new NME attributes.
l Use the Launchpad Tools -> Deployment Editor menu item to run the Deployment Editor and add attributes under the /NMESchema node.
l Create a text file like the one shown above and use the loadconfig command to introduce the new NME attributes into the schema (see the example after this list).
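
For example, a text file such as the following could be loaded with the loadconfig command; the attribute names MyCustomField and MyCustomCount are hypothetical:

[/NMESchema]
Attributes=MyCustomField,com.hp.siu.utils.StringAttribute
Attributes=MyCustomCount,com.hp.siu.utils.IntegerAttribute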

Structured NMEs and the Structured NME Schema


Structured NMEs are a substantial improvement over traditional flat NMEs.

l Structured NMEs store complete information about the hierarchical arrangement of complex
data records.
l Structured NMEs contain information about which data fields have been set and which have not.
l Structured NMEs can handle optional data fields.
Structured NMEs retain all the hierarchical information of structured data. For more information on
structured NMEs and the structured NME schema, see "Using Structured NMEs" (on page 118).



Chapter 5

Detecting and Handling Errors in CDRs


A mediation system reads and processes usage records or Call Detail Records (CDRs) generated by
IP and voice switches. Occasionally CDRs can have invalid values in some of their attributes, typically
due to certain temporary malfunctions in the switch or due to corruption of the storage medium
that is used to store the CDR.

One of the primary capabilities of a mediation system is to provide an efficient and effective error
handling mechanism to ensure minimum revenue leakage and proper auditing of every transaction
which flows through the network. Any errors in mediation should be handled gracefully so that these
errors can be caught early in the processing chain and corrected.

The eIUM error handling solution provides a way to segregate CDRs that contain errors from valid
CDRs. The error CDRs are stored in separate files of NMEs by the collector. These files can be
manually edited and corrected using the CDR Editor tool. After the error CDRs are corrected, they can be reprocessed by a collector designed for this purpose, thereby reintroducing the corrected NMEs back into the mediation processing chain. The eIUM error handling solution provides a robust and
complete feature set for detecting, correcting and reprocessing error CDRs.

The following series of topics is organized as follows:

l "Overview of Detecting Errors and Handling Errors" (on page 60) describes the type of errors that
can occur and how eIUM collectors can detect and handle them.
l "Solution Architecture" (on page 62) provides an overview of one eIUM error processing solution
that lets you detect and correct parse errors and validation errors.
l "Enabling Error Detection and Parking" (on page 63) describes how to enable the error
processing solution.
l "Detecting Validation Errors Using PostParsedRules" (on page 65) describes how you configure
rules to check your data for unexpected values.
l "Parking Error CDRs" (on page 66) explains how NMEs with parse errors or validation errors are
stored by eIUM.
l "Correcting Error CDRs with the CDR Editor" (on page 70) gives an overview of the CDR Editors
capabilities. For complete details see the eIUM Administrators Guide.
l "Reprocessing Corrected CDRs" (on page 70) describes how to reintroduce the corrected CDRs
into the eIUM mediation system.
l "Parsers that Support Error Handling" (on page 72) lists eIUM parsers that support error handling
with this solution.

Overview of Detecting Errors and Handling Errors


This section describes how eIUM detects and handles errors. eIUM encapsulators and parsers
read data records, parse the fields in the data records and place the data values into NME
attributes, as specified by the configuration. During this process of reading and parsing data
records, eIUM components automatically detect invalid records and invalid
data fields in the records. You can optionally configure eIUM components to check the data values in
data fields and detect validation errors. eIUM does not automatically detect validation errors. The
three types of errors eIUM can detect are:

l Record Error: If the data is not formatted correctly as a record, then this is a record error. This
kind of error occurs if the collector is unable to read a record correctly or is unable to find the
length, type or other required information of the record, or the data is not well formed as a
record.
l Parse Error: If the data in a particular field in a record cannot be extracted correctly then the
error is a parse error.
l Validation Error: After the data in a field is parsed, if the value is not within the valid range as
defined in the configuration, a validation error is reported.
Once an error is detected, it can be handled in any of several different ways. For example, the error
record can be discarded. Or it can be parked for later manual inspection and repair. Or the error
field can be marked for later manual inspection and repair. The following table summarizes the
types of errors that can be detected, the default response, other possible responses and how you
configure each response. These are described in detail in the rest of this topic series.

Summary of Error Detection and Handling Options


Record error - The record could not be detected.
    Default response: Log the error and stop the collector.
    Optional responses: Log the error. Stop the collector. Drop the record. Park the record. Drop the file. Park the file.
    How to configure optional responses: Use the RecordEncapsulator's CleanupOnRecordError, LogOnRecordError, PurgeOnRecordError or EndOnRecordError attributes. Specify cleanup handlers.

Parse error - A field in the record could not be parsed.
    Default response: Log the error and continue processing the record.
    Optional responses: Log the error. Stop the collector. Drop the record. Park the record. Specify a default value. Drop the file. Park the file. Mark the field for the CDR Editor.
    How to configure optional responses: Use the RecordEncapsulator's EndOnParseError, PurgeOnParseError, PauseOnParseError, CleanupOnParseError or DeferCleanupOnParseError attributes. Specify cleanup handlers. Use the RecordEncapsulator's EnableErrorParking attribute and the CDR Editor.

Validation error - A field in the record contains an unexpected value.
    Default response: None.
    Optional responses: Mark the field for the CDR Editor.
    How to configure optional responses: Use the RecordEncapsulator's EnableErrorParking attribute and the CDR Editor.

The remainder of this documentation describes how to:

1. Detect parse errors and validation errors.
2. Mark each parse error and validation error so they can be manually corrected with the CDR editor.
3. Correct CDRs with the CDR editor. See the HP eIUM Administrator's Guide for details on the CDR editor.
4. Reprocess corrected CDRs.
This error handling solution does not directly address record errors. The basic assumption of this
solution is that the incoming data is well formed as records, but the field values may not be well
formed.

Solution Architecture
The eIUM error handling solution detects and parks error CDRs in separate files. You can manually
correct the error CDRs with the CDR editor. The corrected CDRs can be reprocessed with separate
collectors. The following diagram shows the main components of the error handling solution:

l The Primary Collector detects error CDRs and segregates them from valid CDRs.
l The CDR Editor which you can use to manually correct error CDRs.
l The Corrected NME collector reads the corrected CDRs and processes them.
l The DatasetMux collector combines the valid and corrected CDRs and makes them available to
other collectors or to business applications.

The figure below shows the error handling components in more detail. These components are:

1. Encapsulator and rules for detecting errors in CDRs: The parser and the post-parsed rules are
the primary components that detect parse errors and validation errors. These components also
update internal marker attributes in the NME.
2. Error scheme and datastore for parking error CDRs: The error CDRs are converted to an NME
along with internal marker attributes. These NMEs are processed by a set of aggregation
schemes designated as error schemes and are subsequently stored in the internal NME format
in a datastore (FileJDBCDatastore).
3. CDR Editor for correcting error CDRs: Use the CDR Editor utility to manually correct error
CDRs.
4. Collector for reprocessing corrected CDRs: Use a collector to process the corrected CDRs. This
collector is configured to read and process the corrected CDRs.
The figure below illustrates the data flow through the system.

Enabling Error Detection and Parking


The parser component of the encapsulator is configured to extract data from each field in the CDR
and set the value of each NME attribute. A parse error occurs when incoming CDR data fields cannot
be parsed and thus no valid value can be placed into the corresponding NME attribute. With this
solution enabled, when the parser detects a parse error, the NME is marked as an error NME, the
internal error attributes are updated and the parser continues parsing the next available data in the
CDR.

You enable a collector to handle parse errors by setting the value of the Boolean configuration
attribute EnableErrorParking in the encapsulator. By default its value is false, and hence error
detection is not enabled. To turn this feature on, set the value of this attribute to true.

Syntax
EnableErrorParking=<Boolean value>

where the Boolean value can be either true or false.

Example
The following sample configuration shows a RecordEncapsulator with error handling enabled.

[/deployment/host/collector/Encapsulator]
ClassName=RecordEncapsulator
EnableErrorParking=true

After parsing, the encapsulator sends the NME to PostParsedRules (discussed in the next section)
for data validation and then to the aggregation schemes for handling.

NOTE: When you enable error parking, the collector adds the following five additional attributes
to each NME. These are internal attributes and they are created and used by the collector only
when error detection is enabled.

_ErrorMarker - Set to 0 for valid CDRs and -1 for error CDRs.
_ErrorAttributes - Contains a list of error NME attributes.
_ErrorAttributeContents - Contains the data from the error NME attributes.
_ErrorAttributeMessage - Contains text messages describing each error NME attribute.
_ErrorParseletInformation - Lists the parselets used to parse the error NME attributes. This enables the CDR editor to check the corrected data.

CAUTION: Do not modify these attributes.

NOTE: Presently only the RecordEncapsulator supports error detection.

NOTE: When the value of the EnableErrorParking attribute is false or not configured and a
collector encounters a parse error, the collector throws a ParserException. The collector can be
configured to continue processing by skipping that CDR or it can stop processing. The following
configuration attributes specify the action the collector takes after encountering a parse error:

CleanupOnParseError
DeferCleanupOnParseError
EndOnParseError
PurgeOnParseError
PauseOnParseError

See the RecordEncapsulator description in the eIUM Component Reference for complete details.
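
For illustration, the sketch below shows where these attributes would sit on a RecordEncapsulator when error parking is left disabled. The attribute names come from the list above; the values and their exact semantics are assumptions made for this sketch only, so check the RecordEncapsulator description in the eIUM Component Reference before using them.

[/deployment/host/collector/Encapsulator]
ClassName=RecordEncapsulator
# EnableErrorParking is not set, so it defaults to false and a parse error
# raises a ParserException. The attributes below choose the collector's
# response; the values shown are placeholders for illustration only.
EndOnParseError=false
PurgeOnParseError=true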

Detecting Validation Errors Using PostParsedRules


The parser checks each field value and detects and marks unparsable data. After parsing the CDR,
the encapsulator passes the resulting NME to the set of rules configured by the PostParsedRules
attribute. The encapsulator sends all NMEs including error NMEs to PostParsedRules. These rules
can check the NME for valid data.

PostParsedRules specifies one or more rules that execute after the NME is constructed and before
the NME is passed to the aggregator. You can use the NMEValidationRule within PostParsedRules to
detect validation errors. To use the NMEValidationRule, you specify the range of invalid values for each NME attribute in the NMEValidationRule's ErrorCondition attribute. That is, if the condition is true, the data is invalid. The NMEValidationRule compares each NME attribute value to the specified values. Once an error condition is met for a particular attribute of an NME, no further rules are applied for that attribute. The NMEValidationRule does not attempt to
check the validity of fields with a parse error.

Whenever an invalid NME attribute value is detected, the NMEValidationRule marks the NME as an
error NME, updates the internal error attributes with specific information about the validation error,
and continues checking the next attribute. After all the rules in PostParsedRules are executed, the
encapsulator passes the NME to the aggregator.

To use the NMEValidationRule, you must set EnableErrorParking to true. This is because the
NMEValidationRule requires the internal error attributes added to the NME by setting
EnableErrorParking to true. Otherwise the NMEValidationRule will fail.

NOTE: The rules specified by PostParsedRules are always executed regardless of the value of
EnableErrorParking. Setting EnableErrorParking to true enables the NMEValidationRule to be
used.

For more information on the NMEValidationRule, see the eIUM Component Reference. For more
information on PostParsedRules, see the RecordEncapsulator in the eIUM Component Reference.

Example
For this example assume the value of SrcPort must be between 3250 and 4000, inclusive, and
DstPort must have a value greater than or equal to 3970. Below is an NMEValidationRule that
performs this validation. Notice the ErrorCondition attributes specify the range of values that make
the data invalid. That is, if the condition is true, the data is invalid.

[/deployment/host/collector/Encapsulator]
ClassName=RecordEncapsulator
EnableErrorParking=true
PostParsedRules=ValidatePorts

[/deployment/host/collector/Encapsulator/ValidatePorts]
ClassName= NMEValidationRule
ErrorCondition=SrcPort,<,3250
ErrorCondition=SrcPort,>,4000
ErrorCondition=DstPort,<,3970

Parking Error CDRs


In an eIUM collector, the encapsulator component sends NMEs to the aggregator. In the aggregator,
each NME is passed to each aggregation scheme serially. That is, the NME first goes to the first
scheme. The NME from that scheme goes to the second scheme. The NME from the second scheme
goes to the third scheme, and so forth. If any scheme modifies the NME, the modified NME goes to
the next scheme. Finally, each aggregation scheme can send each NME, or the aggregated NMEs if it
is performing aggregation, to the datastore. The datastore stores the NMEs in the format defined
by the datastore type.

When error handling is enabled by setting the EnableErrorParking attribute to true in the
encapsulator, you specify each aggregation scheme as either an error NME scheme or a valid NME
scheme. eIUM automatically adds a filter rule to each scheme so error schemes only process error
NMEs and valid schemes only process valid NMEs. The error scheme would typically use only a
StoreRule to store the error NMEs to a datastore for subsequent manual correction.

Error Scheme
Error NME schemes are those aggregation schemes that process error NMEs and store the error
NMEs in the datastore. You must specify all aggregation schemes in the SchemeNames attribute of
the Aggregator. You specify one or more aggregation schemes as error schemes by specifying them
in the ErrorSchemeNames attribute. Any aggregation scheme can be designated an error scheme by
configuring the scheme name in ErrorSchemeNames.

Syntax
ErrorSchemeNames=<scheme-name>

Where <scheme-name> is the name of an aggregation scheme. This name must also be configured
with the SchemeNames attribute of the Aggregator component.

Example
The following example shows two aggregation schemes configured in the aggregator, named
FirstScheme and SecondScheme. SecondScheme is designated as an error scheme. FirstScheme is a
valid NME scheme.

[/deployment/host/collector/Aggregator]
ClassName=Aggregator
ErrorSchemeNames=SecondScheme
SchemeNames=FirstScheme
SchemeNames=SecondScheme

[/deployment/host/collector/Aggregator/FirstScheme]
...

[/deployment/host/collector/Aggregator/SecondScheme]
...

Since the last rule of any aggregation scheme is typically a rule that persists NMEs (for example, AggregationRule or StoreRule), the last rule of an error scheme should be a StoreRule, with all the
attributes of the NME configured. However, the internal error information attributes need not be
explicitly configured since the collector automatically stores the error attributes with the NME.

Output Format of Error CDRs


After the aggregation scheme processes NMEs, when a flush is triggered the collector sends the
NMEs to the datastore to persist them. You can use the CDR Editor to read and modify these
error CDRs in the datastore.

To use the CDR Editor, the error scheme must be configured with a FileJDBCDatastore. The CDR
Editor can only read CDRs from a FileJDBCDatastore. It cannot read from any other type of
datastore.

How you configure the datastore for an error handling scheme depends on which datastore you use
for your valid NME schemes.

l If your valid NME schemes use FileJDBCDatastore, no change is required. Simply send the error
NMEs to the configured datastore.
l If your valid NME schemes use a MuxDatastore with a FileJDBCDatastore already beneath it, use
the FileJDBCDatastore for the error NMEs. Otherwise add a FileJDBCDatastore to your
MuxDatastore for the error NMEs.
l If your valid NME schemes use a datastore other than FileJDBCDatastore, add a MuxDatastore,
place your datastore beneath it and add a FileJDBCDatastore for the error NMEs.
For complete details on datastores and the MuxDatastore, see the eIUM Component Reference.
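
For the third case, a minimal sketch of the resulting datastore layout is shown below. The node names are illustrative, and the attribute that registers the child datastores under the MuxDatastore is deliberately omitted; see the MuxDatastore description in the eIUM Component Reference for the exact wiring.

[/deployment/host/collector/Datastore]
ClassName=MuxDatastore
# The child datastores are registered on this node; the attribute used to
# list them is documented in the Component Reference.

[/deployment/host/collector/Datastore/ValidStore]
ClassName=SomeOtherDatastore # placeholder for the datastore your valid NME schemes already use

[/deployment/host/collector/Datastore/ErrorStore]
ClassName=FileJDBCDatastore # required so the CDR Editor can read the error NMEs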

Configuration Summary
This section summarizes the configurations you need to enable error handling.

Encapsulator Configurations
l EnableErrorParking attribute: The value of the EnableErrorParking Boolean attribute
determines whether error handling is enabled or not. This attribute is configured at the
Encapsulator node. The default value is false.
[/deployment/host/collector/Encapsulator]
EnableErrorParking=true # Enables error handling

l NMEValidationRule: When configured under PostParsedRules, this rule checks each NME
attribute against a range of invalid values and marks each invalid NME attribute.
[/deployment/host/collector/Encapsulator/MyValidationRules]
ClassName=NMEValidationRule
ErrorCondition=SrcPort,<,3250
ErrorCondition=SrcPort,>,5000

Aggregator Configurations
ErrorSchemeNames attribute: This Aggregator attribute marks an aggregation scheme to be used
only for error NMEs. The value of ErrorSchemeNames must also be named in the SchemeNames
attribute. Only error NMEs are processed by the error schemes.

[/deployment/host/collector/Aggregator]
ClassName=Aggregator
ErrorSchemeNames=Scheme1
SchemeNames=Scheme1
SchemeNames=Scheme2

Datastore Configurations
FileJDBCDatastore: Error NMEs must be stored in a FileJDBCDatastore if you intend to use the CDR
Editor.

Configuration Example
This configuration shows a collector that detects and parks error NMEs. The encapsulator checks if
the value of SrcPort is between 3250 and 4000. The aggregation scheme named Error parks error
NMEs in the datastore.

[/deployment/host/ErrorCollector]
ClassName=com.hp.siu.adminagent.procmgr.CollectorProcess
Description=Test collector for Error Parking

[/deployment/host/ErrorCollector/Aggregator]
ClassName=Aggregator
Description=Aggregator consists of a single Usage aggregation scheme
SchemeNames=Usage
SchemeNames=Error
ErrorSchemeNames=Error

[/deployment/host/ErrorCollector/Aggregator/Error]
ClassName=AggregationScheme
Description=Stores error NMEs in the datastore for manual correction.
RuleNames=ErrorRule

[/deployment/host/ErrorCollector/Aggregator/Error/ErrorRule]
ClassName=StoreRule
Description=Error NME data
Attributes=SrcIP
Attributes=DstIP
Attributes=SrcPort
Attributes=DstPort
Attributes=ProtocolName
Attributes=NumPackets
Attributes=StartTime
Attributes=EndTime

[/deployment/host/ErrorCollector/Aggregator/Usage]
ClassName=AggregationScheme
Description=Aggregates usage event records grouped by Source and
# Destination IP
RuleNames=GroupNMEs
RuleNames=Aggregate

[/deployment/host/ErrorCollector/Aggregator/Usage/GroupNMEs]
ClassName=HashMatchRule
Description=Groups event records by Source IP and Destination IP
MatchKeyNames=SrcIP
MatchKeyNames=DstIP

[/deployment/host/ErrorCollector/Aggregator/Usage/Aggregate]
ClassName=AggregationRule
Description=Retains minimum StartTime and maximum EndTime and adds usage attributes
Attributes=SrcIP
Attributes=DstIP
Attributes=SrcPort
Attributes=DstPort
Attributes=ProtocolName
Attributes=NumPackets,add
Attributes=StartTime,min
Attributes=EndTime,max

[/deployment/host/ErrorCollector/Datastore]
ClassName=FileJDBCDatastore
Database=jdbc:mysql://localhost:3306/siu20
DatabaseClass=com.mysql.jdbc.Driver
Description=Stores event records in binary format to a file
Password=siu20
TableAgeLimit=7d0h0m0s
TableRollLimit=1d0h0m0s
User=siu20

[/deployment/host/ErrorCollector/Encapsulator]
ClassName=RecordEncapsulator
Description=Processes event records from a file-based or record-based
# data source on the host machine
JitterTime=310
EnableErrorParking=true
PostParsedRules=ConditionSrcPort

[/deployment/host/ErrorCollector/Encapsulator/ConditionSrcPort]
ClassName=NMEValidationRule
ErrorCondition=SrcPort,<,3250
ErrorCondition=SrcPort,>,4000

[/deployment/host/ErrorCollector/Encapsulator/FlushPolicy]
ClassName=TimeFlushPolicy
ClockInterval=30m
Description=Flushes 30 minutes' worth of event records to the datastore

[/deployment/host/ErrorCollector/Encapsulator/RecordFactory]
ClassName=DelimiterRecordFactory
Delimiters=\r\n
Delimiters=\n

[/deployment/host/ErrorCollector/Encapsulator/RecordFactory/StreamSource]
ClassName=FileSource
Description=A file based data source
TailingMode=none

[/deployment/host/ErrorCollector/Encapsulator/RecordFactory/StreamSource/FileRollPolicy]
ClassName=SequenceNumberFilePolicy
SrcDir=%VARROOT%/SampleData/DataFiles
BaseFileName=Data
InitialInputFile=Data1.log
Suffix=.log

[/deployment/host/ErrorCollector/Encapsulator/Parser]
ClassName=DelimiterParser
Description=Parses Netflow records which are delimited by a | character
Attributes=SrcIP
Attributes=DstIP
Attributes=SrcPort
Attributes=DstPort
Attributes=ProtocolName
Attributes=NumPackets
Attributes=StartTime
Attributes=EndTime
FieldDelimiter=|
Trim=true
TimeStampFormat=MM/dd/yyyy HH:mm:ss

Correcting Error CDRs with the CDR Editor


You can use the CDR Editor to view the error NMEs stored in the datastore file, and edit them to
make corrections. The corrected NMEs can be saved back into the file and reprocessed by another
collector to feed the corrected NMEs back into the mediation stream.

Features available with the CDR Editor include:

l View NMEs in batches.


l Filter the view based on the value of an NME attribute.
l Hide one or more NME attributes from view.
l Highlight error NME attributes in color.
l Show the status of an NME, uncorrected or corrected.
l Bulk edit NMEs based on the values of one or more attributes.
l Save changes to the NME file.
See the eIUM Administrator's Guide for complete details on the CDR Editor.

Reprocessing Corrected CDRs


This section describes how the corrected CDR files can be fed back into the processing stream. The
corrected CDRs must be in a FileJDBCDatastore. This datastore contains the CDRs in the form of
NMEs. Use an NMEFileEncapsulator in a collector to read the corrected NMEs back into the
mediation system.

Configuration Example
The configuration below shows a collector that reads NMEs written to a FileJDBCDatastore using an
NMEFileEncapsulator. This collector reads the files in the directory
%VARROOT%/SampleData/CorrectedFiles. %VARROOT% represents the var directory under the eIUM
installation directory.

[/deployment/host/ReprocessingCollector]
ClassName=com.hp.siu.adminagent.procmgr.CollectorProcess
Description=demo collector
ForcedStop=false

[/deployment/host/ReprocessingCollector/Encapsulator]
ClassName=NMEFileEncapsulator

[/deployment/host/ReprocessingCollector/Encapsulator/FileRollPolicy]
ClassName=DirectoryPolicy
BatchMode=false
DirectoryName=%VARROOT%/SampleData/CorrectedFiles

[/deployment/host/ReprocessingCollector/Aggregator]
ClassName=Aggregator
Description=Has two schemes to aggregate and store data.
SchemeNames=StoreData

[/deployment/host/ReprocessingCollector/Aggregator/StoreData]
ClassName=AggregationScheme
Description=Has a Store Rule to store NMEs without aggregation.
RuleNames=Store

[/deployment/host/ReprocessingCollector/Aggregator/StoreData/Store]
Attributes=SrcIP
Attributes=DstIP
Attributes=SrcPort
Attributes=DstPort
Attributes=ProtocolName
Attributes=NumPackets
Attributes=StartTime
Attributes=EndTime
ClassName=StoreRule

[/deployment/host/ReprocessingCollector/Datastore]
ClassName=FileJDBCDatastore
Database=jdbc:mysql://localhost:3306/siu20
DatabaseClass=com.mysql.jdbc.Driver
Description=Stores event records in binary format to a file
Password=siu20
TableAgeLimit=7d0h0m0s
TableRollLimit=0s
User=siu20

Processing Corrected CDRs


Once the error CDRs are manually corrected and read into a collector, they need to be processed
along with the valid CDRs. Since the corrected CDRs will most likely be reprocessed some time later
than when they were originally processed, you typically must use a DatasetMuxEncapsulator, which can process CDR files that are not in chronological order.

With CDR error handling enabled, you typically need three collectors instead of one when processing
data from a single data source. One collector processes valid CDRs and separates out the error
CDRs. Another collector reprocesses corrected CDRs. The third collector merges the data processed
by the other two collectors. Any downstream collector should query the third collector to ensure
that all the CDRs from the data source are consumed.

See "Solution Architecture" (on page 62) for a diagram showing all three collectors.

Parsers that Support Error Handling


The following parsers currently support error handling. See the eIUM Component Reference for a
complete list of all parsers and complete information on how to configure each parser.

l AMAMuxParser
l DelimiterParser
l DMS100OffsetParser
l EWSDParser
l FixedWidthAsciiParser
l NameValueParser
l NetflowParser
l NetflowProtocolParser
l OffsetParser
l OVPAParser
l PerlRegexParser
l RadiusParser
l RegexParser
l SMDParser
l TextOffsetParser
l TLVParsers
l TypedMuxParser
l WapTLVParser



Chapter 6

Detecting and Handling Duplicate CDRs


eIUM collectors read input data from multiple data sources such as switches and other network
elements. Various factors, including data being collected from multiple sources within the same
network, can cause duplicate CDRs or other usage records to enter the processing chain.

You can configure a duplicate detection mechanism to identify and prevent duplicate CDRs from
entering the processing chain. This filtering mechanism consists of various duplicate detection rules
which can be used in different scenarios. This chapter describes and compares the eIUM duplicate
detection rules and helps you decide which rule will work best for your situation. For complete
details on these rules and all eIUM components, see the eIUM Component Reference.

Introduction 75

Accuracy of Duplicate Detection Rules 75

Range of NMEs to Examine for Duplicates 75

Design Questions 75

DuplicateNMEDetectorRule 76

Recovery 76

Aging Mechanism 76

DuplicateCDRDetectorRule 76

Recovery 77

Aging Mechanism 77

TimeHashDuplicateCDRDetectorRule 77

Recovery 78

Aging Mechanism 78

TimeIntervalDuplicateCDRDetectorRule 78

Recovery 79

Aging Mechanism 79

When to Use Each Rule 79

Performance Comparison 80

Performance Statistics 81

Disk Usage Statistics 81

Accuracy and Reliability 82

Introduction
eIUM provides the following four duplicate detector rules. Each rule saves information about each
NME encountered such as hash keys or NME attribute values over a period of time and compares
the current NME with previous NMEs. When the information matches, the current NME is considered
a duplicate.

l DuplicateNMEDetectorRule - Compares a hash key of NME attributes.


l DuplicateCDRDetectorRule - Compares a hash key of NME attributes and can compare NME
attributes.
l TimeHashDuplicateCDRDetectorRule - Compares a hash key of NME attributes.
l TimeIntervalDuplicateCDRDetectorRule - Compares a hash key of NME attributes and can
compare NME attributes.
Each rule has advantages and disadvantages over the other rules. Knowing these characteristics will help you choose the best rule for your situation. The basic trade-off is between speed and accuracy. Higher accuracy (no false duplicates) typically takes more time and resources. Lower accuracy (possibly indicating false duplicates) takes less time and fewer resources. Select one of the following rules based on your performance requirements and your tolerance for false duplicates.

Accuracy of Duplicate Detection Rules


A duplicate detector rule is considered accurate when both the following conditions are true:

1. No duplicate NME passes through the rule undetected. That is, duplicates are always found.
2. No NME is falsely flagged as a duplicate. That is, false detection of duplicates never happens.
All four eIUM duplicate detection rules satisfy point 1 above. However, point 2 is true only for those
rules that compare both hash codes and NME attribute values. The less accurate rules have a
potential to falsely flag an NME as a duplicate because they compare only hash codes but not the
actual NME attribute values. This makes them only as accurate as the hashing algorithm they use
because of the possibility of hash collisions. See also "Accuracy and Reliability" (on page 82).

Range of NMEs to Examine for Duplicates


Each duplicate detection rule saves information about a group of recent NMEs and compares the
incoming NME with the saved information. You specify how much information to save. For example,
depending on the rule, you can specify a fixed number of NMEs or you can specify a time period. The
rule saves the specified number of NMEs or it saves the NMEs from the specified time period, say 24
hours, 1 week or 1 month. NMEs over the specified number or older than the specified time period
are aged out and deleted.

The aging period specifies the set from among which NMEs are considered for duplicate detection.
For example, configuring an age period of 24 hours indicates that the incoming NME will be
compared with all NMEs from the last 24 hours. If an NME arrives that is a duplicate of an NME that
arrived within this 24-hour window, it will be detected as a duplicate. If an NME arrives that is a
duplicate of an NME that arrived before this 24-hour window, it will not be detected as a duplicate.
You need to specify the period of time over which duplicates will be checked based on when you
predict duplicates can occur.

Design Questions
Below are some questions to ask yourself when deciding which rule to use:

l How do you determine if an NME is a duplicate? That is, which NME attributes need to be
compared to detect a duplicate? If you can limit the number of NME attributes that need to be
compared, you can reduce resource usage and improve performance.
l Over what time period do you expect to encounter duplicates? To detect duplicates, each
incoming NME must be compared in some way with all previous NMEs from some time period, for
example an hour, a day or a week. If you can limit the amount of time over which this comparison
is performed, you can improve performance. See also the aging mechanism described for each
rule below.
l Can you tolerate false duplicates? Rules that use only a hash of NME attribute values can
potentially declare false positives. If you cannot tolerate false duplicates, use a rule that
compares actual NME attribute values.
l What are your performance requirements? Knowing these will help you test various rules to
determine which rule meets your performance requirements.
l If a system or collector goes down, do you need to save the information about past NMEs for
duplicate detection? See the recovery information characteristics for each rule below.

DuplicateNMEDetectorRule
The DuplicateNMEDetectorRule detects duplicates by computing a hash code for the key NME
attributes of each NME and checking for hash code collisions. These NME attributes are used to
generate a CompositeAttributeKey for each NME. These CompositeAttributeKeys are stored in a
hash map. Every new NME's attributes are compared with the attributes in the hash map. This hash
map is allowed to grow to a maximum configured size of 2^30 keys. The larger the configured
cache, the larger the memory allocated to the JVM since the collector stores all the NMEs in
memory. Ideally the number of NMEs used to detect duplicates by this rule should be 1000 NMEs or
less.

This is a relatively quick and simple rule. The incoming NME is checked only against the last n NMEs processed by the rule. Only the last n NMEs are cached; the rest are aged out of memory and not considered for duplicate detection. This is most suitable in scenarios where the
number of NMEs is relatively small.

Recovery
During a flush, all the CompositeAttributeKeys are written to the recovery file. This is used as
recovery information when the collector is restarted. Since a lot of information is written into the
recovery file, the recovery files tend to be large.

Aging Mechanism
The rule has an aging mechanism whereby it removes the oldest entries from the hash map when
the number of entries becomes larger than the configured cache size.
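
As an illustration, the rule might be configured as a rule within an aggregation scheme roughly as follows. The node path is illustrative, CacheSize is the attribute referred to later in this chapter, and the attribute that names the key NME attributes is only indicated in a comment because this guide does not spell it out; see the DuplicateNMEDetectorRule description in the eIUM Component Reference for the full attribute list.

[/deployment/host/collector/Aggregator/Usage/DetectDuplicates]
ClassName=DuplicateNMEDetectorRule
CacheSize=1000 # compare each incoming NME only with the last 1000 NMEs
# The key NME attributes that form the CompositeAttributeKey are also configured
# on this rule; remember to list DetectDuplicates in the scheme's RuleNames.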

DuplicateCDRDetectorRule
The DuplicateCDRDetectorRule detects duplicates by computing a hash code for the key NME
attributes of each NME and checking for hash code collisions. The rule also provides a configurable
attribute called CompareKeyAttributes which indicates that the key NME attributes are to be
compared when a hash code collision occurs. This makes the duplicate detection by this rule
accurate. The hash codes are 64-bit values computed from a combination of CRC32 and Adler32
algorithms. Only these hash codes are stored in memory. The actual NME attributes are persisted in
the database during each flush. If the CompareKeyAttributes is set (which is required for accurate
duplicate detection), the attributes corresponding to the hash key must be queried from the
database and compared. Since the database must be queried for each hash code collision,
processing time of the rule can be high.

A percentage of the NMEs can also be stored in a cache in memory by configuring the
InMemoryPercentage attribute. This will ensure that a percentage of all NMEs will be held in memory
even after a flush so that the attribute comparison may not require a database query.

The DuplicateCDRDetectorRule can be used where accuracy has higher priority. Performance may
be slower because of the database querying which is required for comparing the attributes on a
hash code collision. An advantage of using this rule is that it is possible to configure whether to
detect duplicates simply by hash code collisions (which means there could be false duplicates
detected) or to do a comparison of the actual NME attribute values.

Recovery
Since the NMEs are stored in a database, no other recovery mechanism is necessary.

Aging Mechanism
The aging mechanism stores the hash keys in memory in a hash map until an aging limit, after which
they are removed from memory and no longer considered for duplicate detection. If aging is based
on time (as opposed to the number of keys), the maximum aging period is 24 hours.
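
The sketch below shows the two attributes discussed above on a DuplicateCDRDetectorRule. The node path and values are illustrative assumptions, and the database connection and key-attribute settings the rule needs are omitted; they are documented in the eIUM Component Reference.

[/deployment/host/collector/Aggregator/Usage/DetectDuplicates]
ClassName=DuplicateCDRDetectorRule
CompareKeyAttributes=true # compare the actual key NME attributes on a hash code collision
InMemoryPercentage=10 # assumed value: keep roughly 10% of NMEs cached in memory after a flush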

TimeHashDuplicateCDRDetectorRule
The TimeHashDuplicateCDRDetectorRule detects duplicates based on hash code collisions. A 32-bit
hash code is computed for the configured key NME attributes of each NME. The efficiency of this
rule lies in the way the hash codes are stored in memory. The NME's EndTime attribute is used to
compute the time interval to which the NME belongs and the hash codes are stored in memory,
structured by time intervals. Up to a maximum of 15 intervals can be stored in memory at a time.
The advantage of the interval-based caching is that when an NME enters the rule for duplicate
detection, its hash code is compared only with the hash codes stored for the interval to which it
belongs. The TimeHashDuplicateCDRDetectorRule has a very good processing time and is much
faster than the other duplicate detector rules in processing large amounts of input data.

Since only 15 intervals are stored in memory at a time, the remaining intervals are persisted in key
store files. One key store file is created per interval. If an NME which belongs to an interval not
currently in memory enters the rule, the interval (if present on disk) is paged into memory after one
of the intervals in the memory is paged out to disk. This paging makes the disk and memory usage
more efficient.

This rule provides very quick detection of duplicates. The computed hash codes of NMEs are divided
according to the intervals to which they belong and stored per time interval. Only the time interval
to which the NME belongs needs to be checked for duplicate detection. Its disk and memory usage is
also optimal since only hash codes (computed from the key attributes) are stored. But the
disadvantage is that because it detects duplicates based only on hash code collisions and there is no
comparison check with the actual NME attribute values, it can potentially detect false duplicates.
This rule is only as efficient as the hashing algorithm that it uses.

This disadvantage of the TimeHashDuplicateCDRDetectorRule is overcome in the TimeIntervalDuplicateCDRDetectorRule. The TimeIntervalDuplicateCDRDetectorRule compares the
actual NME attribute values. A deployment requiring accurate duplicate detection where false
duplicates cannot be tolerated should use the TimeIntervalDuplicateCDRDetectorRule, though the
time and memory requirements are slightly higher. For more details see the comparison table
below.

Another disadvantage of this rule is that it does not support roll-back on a purge after an interim
flush. An interim flush happens when there exists the maximum number of intervals (15) in memory
and none of these intervals is persisted. If an NME belonging to a new interval enters the rule, one of
the existing intervals in memory must be written out to make room for the new interval to be stored
in memory. The intervals are persisted without waiting for a collector flush. This is called an interim
flush. Since the intervals are persisted even before a collector flush, a purge of that file cannot roll
back the records which are already written out.

Recovery
All the intervals are written out to disk when the collector stops and recovery is done by loading
these interval files to memory on a collector restart.

Aging Mechanism
The aging mechanism in the rule involves deleting the key store files of the intervals which are older
than the aging period.

TimeIntervalDuplicateCDRDetectorRule
The TimeIntervalDuplicateCDRDetectorRule is an enhanced version of the
TimeHashDuplicateCDRDetectorRule. This rule generates a CompositeAttributeKey from the
configured key NME attributes of each NME, and compares this with the existing
CompositeAttributeKeys on a hash code collision. Since the actual NME attribute values are
compared in this rule, it makes the duplicate detection accurate.

This rule is very efficient because the NMEs are stored in memory on the basis of time intervals. The
NME's EndTime is used to compute the time interval to which the NME belongs and the
CompositeAttributeKeys are stored in memory based on the time intervals. Up to a maximum of 15
intervals can be stored in memory at a time. The advantage of interval-based caching is that when
an NME enters the rule for duplicate detection, its hash code needs to be compared only with the
hash codes stored for the interval to which it belongs. But since the
TimeIntervalDuplicateCDRDetectorRule compares the NME attribute values as well, it is not as fast
as the TimeHashDuplicateCDRDetectorRule. Also, the CompositeAttributeKeys for each NME are
stored in memory and on disk, so the memory consumption and disk usage of the
TimeIntervalDuplicateCDRDetectorRule is more than the TimeHashDuplicateCDRDetectorRule.
Ideally, the interval length (which specifies the length of each interval stored in memory) should be
low (such as 30 minutes or 1 hour). This will ensure that the maximum hours of data stored in
memory at a time is 15 times the interval length.

Like the TimeHashDuplicateCDRDetectorRule, this rule uses a 32-bit hash code generated by a
CRC32 algorithm (either from a Java library or from an eIUM library). When a new NME comes into
the rule, it is compared only to the NMEs in the same time interval, so the comparison logic is much
faster and performance is better.

The biggest advantage of this rule over the TimeHashDuplicateCDRDetectorRule is that its duplicate
detection mechanism is accurate. Another advantage is that it allows roll-back on a purge after an
interim flush. An interim flush is when the maximum number of intervals (15) are in memory but
none of these intervals has been persisted. If an NME belonging to a new interval enters the rule,
one of the existing intervals in memory is written out to make room for the new interval to be
stored in memory. The intervals are persisted without waiting for a collector flush. This is called an
interim flush. But in this rule, the interim flush writes out temporary files rather than persisting the
intervals in permanent key store files. The temporary files are converted to permanent key store
files only on a collector flush. A purge on a file can roll back the records that have been persisted by
the interim flush.

This rule detects duplicates based on structured time intervals. It compares the hash codes of the
NME to the stored NMEs. In case of a collision, it also compares the key NME attribute values. It
provides accurate duplicate detection, but since the NME attributes have to be compared, they also
have to be stored. The memory and disk usage and the time taken for the rule to process an NME
are much more than the TimeHashDuplicateCDRDetectorRule.

Recovery
All the intervals are written out to disk when the collector stops and recovery is done by loading
these interval files to memory on a collector restart.

Aging Mechanism
The aging mechanism in the rule involves deleting of key store files of intervals older than the aging
period.
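
As a sketch, a TimeIntervalDuplicateCDRDetectorRule with a short interval might be configured as follows. The interval-length attribute name shown here is hypothetical (this guide refers to it only as "the interval length"), and the key-attribute and key store settings are omitted; see the eIUM Component Reference for the real attribute names.

[/deployment/host/collector/Aggregator/Usage/DetectDuplicates]
ClassName=TimeIntervalDuplicateCDRDetectorRule
# IntervalLength is a hypothetical attribute name: a 30-minute interval keeps at
# most 15 x 30 minutes of data in memory at a time.
IntervalLength=30m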

When to Use Each Rule


l The DuplicateNMEDetectorRule should ideally be used when it can be assumed that duplicates
will occur only in the last n NMEs. Ideally, the value of n should not be very high; the maximum value allowed is 2^30. The disadvantage of using the DuplicateNMEDetectorRule for a large number of NMEs is that, since all the NMEs are stored in memory for duplicate detection, the
larger the configured cache size, the larger the memory required. Use this rule only when the
number of NMEs with which an NME needs to be compared for duplicate detection is not very
large.
l The DuplicateCDRDetectorRule can be used in scenarios where accuracy is of much higher
priority than speed. Processing speed of this rule is slower because a database query is executed
for each comparison on a hash code collision. If using a database is acceptable in your deployment, the DuplicateCDRDetectorRule is a good choice. All the recovery information for the state of the rule is
stored in the database itself.
l The TimeHashDuplicateCDRDetectorRule should be used in cases when a very fast and efficient
duplicate detection rule is required to detect duplicates among large amounts of input data. But
since the rule can potentially detect false duplicates, it cannot be used in cases where high
accuracy and reliability are required. The rule has efficient memory and disk usage, but you must
be able to tolerate occasional false duplicates.
l The TimeIntervalDuplicateCDRDetectorRule detects duplicates very effectively and correctly.
Since the NME attributes are also used in the duplicate detection, they need to be persisted. The
memory consumption is greater and processing time is longer than that of the
TimeHashDuplicateCDRDetectorRule. It is most accurate for detecting duplicates among large
amounts of input data.
The following table shows the key characteristics of each duplicate detection rule.

Key Characteristics of the Duplicate Detection Rules

DuplicateNMEDetectorRule
    Best Scenario: Best used when the number of NMEs for comparison is less than 1000.
    Speed: Fast processing time.
    Accuracy: Accurate for checking low numbers of NMEs.
    Resource Usage (Memory and Disk): Depends on the configured cache size, since all the NMEs are stored in memory for duplicate detection. The larger the configured cache size, the larger the memory required.

DuplicateCDRDetectorRule
    Best Scenario: Best used in scenarios where accuracy is more important than speed.
    Speed: Slower.
    Accuracy: Accurate if CompareKeyAttributes is set to true.
    Resource Usage (Memory and Disk): Speed is slower due to the necessity of querying the database to detect duplicates. This rule uses a database, so disk and memory usage are higher.

TimeHashDuplicateCDRDetectorRule
    Best Scenario: Since duplicate detection is required from large amounts of data, the chance of false duplicates exists. Can be used in scenarios where false duplicates are tolerable.
    Speed: Fastest in processing large amounts of data.
    Accuracy: Only as accurate and efficient as the configured hashing algorithm. Chance of false duplicates exists.
    Resource Usage (Memory and Disk): Efficient (optimal) usage.

TimeIntervalDuplicateCDRDetectorRule
    Best Scenario: Best used when duplicate detection is required from a large number of NMEs.
    Speed: Not as fast as TimeHashDuplicateCDRDetectorRule because the value of each key NME attribute must be compared and stored for accurate duplicate detection.
    Accuracy: Very accurate.
    Resource Usage (Memory and Disk): Memory requirements are more than TimeHashDuplicateCDRDetectorRule because the key NME attribute values are stored to detect duplicates correctly.

Performance Comparison
This section describes the results of performance tests using the duplicate detection rules. Use this
information to help you decide which rule will work best for your situation. The tests were
performed on IUM version 5.0 Feature Pack 1 with Java version 1.5. The collector was run on a
machine with the following hardware configuration:

l Operating system: HP-UX 11.11


l CPU: 4x750 MHz
l Memory: 4 GB
l Disk Size: 4x18 GB
The profiling of the rules was done by allowing the four rules to process a sample of three days' worth of data containing a small percentage of duplicates. The TimeHashDuplicateCDRDetectorRule
and the TimeIntervalDuplicateCDRDetectorRule were configured to detect duplicates across the
entire sample data. But since the DuplicateNMEDetectorRule is designed to detect duplicates from
among a very small number of NMEs, its CacheSize was configured to 1000. The rule was used to
detect duplicates only with respect to the last 1000 NMEs that entered the rule for processing. The
DuplicateCDRDetectorRule was configured to detect duplicates from the last 24 hours of NMEs
(since the maximum aging period is 24 hours in this rule). The same hashing algorithm was
configured for all the rules. The TimeHashDuplicateCDRDetectorRule and the
TimeIntervalDuplicateCDRDetectorRule processed a total of 17,385,701 NMEs.

Performance Statistics
Performance is measured as the rate at which each rule processed the sample data, averaged over the aggregation rates for each of the datasets.

Processing Rate for the Sample Data


Rule NMEs per Second

DuplicateNMEDetectorRule, with CacheSize = 1000 3568

DuplicateCDRDetectorRule 1288

TimeHashDuplicateCDRDetectorRule 1771

TimeIntervalDuplicateCDRDetectorRule 1423

Disk Usage Statistics


The following table shows the disk space used by each of the rules.

Disk Usage
Rule Bytes Used

DuplicateNMEDetectorRule 30,198,827 - The size of the recovery file when the CacheSize for this rule is set to 1,000,000.

DuplicateCDRDetectorRule 2,097,152,000 - The size of the database.

TimeHashDuplicateCDRDetectorRule 170,786,491 - The total size of the key store files.

TimeIntervalDuplicateCDRDetectorRule 69,275,057 - The total size of the key store files.

The factors that contribute to the usage of disk space in each rule are:

l DuplicateCDRDetectorRule - All the NME attributes are stored in the database.


l DuplicateNMEDetectorRule - All the NME attributes are stored for recovery in the recovery file in
the form of strings.
l TimeHashDuplicateCDRDetectorRule - The hash codes are stored in key interval store files for
recovery.
l TimeIntervalDuplicateCDRDetectorRule - The NME attributes are stored in key store files for
recovery.

Accuracy and Reliability


All four eIUM duplicate detection rules detect all duplicates. However, the
TimeHashDuplicateCDRDetectorRule can potentially indicate that certain records are duplicates
even if they are not. This is also true for the DuplicateCDRDetectorRule if the CompareKeyAttributes
attribute in the rule is set to false. In these two rules, only hash code collisions are used to detect
duplicates. The reliability of these two rules is only as high as the reliability of the hashing algorithm.

If the DuplicateCDRDetectorRule is used with the CompareKeyAttributes attribute set to true, then
it has much higher accuracy. The DuplicateNMEDetectorRule is also accurate, but it can be used only
to check for duplicates among a relatively small number of NMEs. Finally, the
TimeIntervalDuplicateCDRDetectorRule also has a much higher accuracy because it compares the
NME attribute values when a hash collision occurs. This is summarized in the following table.

Degree of Accuracy

Rule                                                      Accuracy
DuplicateNMEDetectorRule                                  More accurate
DuplicateCDRDetectorRule, with CompareKeyAttributes       More accurate
DuplicateCDRDetectorRule, without CompareKeyAttributes    Less accurate
TimeHashDuplicateCDRDetectorRule                          Less accurate
TimeIntervalDuplicateCDRDetectorRule                      More accurate



Chapter 7

Deploying the File Service


A large voice network typically has many voice switches. Each switch generates files of usage data in
the form of call detail records or CDRs. The HP Internet Usage Manager (eIUM) File Service reads
multiple CDR files from multiple devices (typically voice switches, but other file sources as well) and sends the CDR files to one or more collectors, avoiding the need for one collector per switch. The File Service can also read files from any other device or type of network that generates files containing usage data. This chapter describes the file service and how to configure it.

Multiple Switches Generating CDR Files 85

Benefits of the File Service 87

Conceptual Structure of the File Service 87

The File Service Components 87

File Collection Service Components 88

File Distribution Service Components 89

Collectors that Read CDR Files from a File Service 90

The Notification Table 90

CDR File Ownership and Cleanup 91

Recovery 91

Configuration Structure of the File Service 92

File Collection Service Components 92

File Distribution Service Components 94

Collector Components 96

Designing and Deploying a File Service 96

Determine Input Data Sources 96

Determine the Number of Output Collectors 97

Configure Collectors 97

Test, Measure and Adjust 97

Creating a New File Service with the File Service Wizard 97

Examine and Modify Your File Service 102

Verifying File Service Components 103

Administering the File Service 103

Using the Launchpad 103

Viewing File Service Statistics in the Launchpad 103

Viewing File Service Statistics in the Operations Console 104

Using File Service Commands 104

Multiple Switches Generating CDR Files


The file service is useful when you have multiple switches generating CDR files and you do not want
the same number of collectors. That is, you do not want one collector per switch. Using one collector
per switch is a viable deployment, as the diagram below shows; however, depending on the volume of CDRs your switches generate, fewer collectors may be a better solution for processing the CDRs.

The above figure shows four voice switches each generating CDR files stored locally on the switch.
Four leaf collectors are deployed to collect CDRs from the switches.

Another viable deployment is to use one collector with a DatasetMuxEncapsulator. The DatasetMuxEncapsulator contains multiple individual encapsulators, one for each voice switch. The limitations of the DatasetMuxEncapsulator are:

l Only one file can be read at a time.


l Only one file can be processed at a time.
l The collector polls each switch one at a time, and blocks if the switch does not have a file ready.
l Any changes to the collector require the entire collector to be stopped.
The next figure below shows four voice switches and one collector. The file service reads all the CDR
files from all the switches and passes them to a single leaf collector.


The following figure shows how a file service can read CDR files from multiple different switches and direct the CDR files to specific collectors based on the switch type. For example, if you have two switches from vendor A and two switches from vendor B, the file service can direct the CDR files from vendor A's switches to one collector and the files from vendor B's switches to another collector.

Benefits of the File Service


While using one collector per switch and using the DatasetMuxEncapsulator are viable
configurations, they do have limitations. The file service overcomes these limitations.

l Lets you make optimal use of hardware (CPU and memory) and software by reducing the number
of collectors required to process your data.
l The file service separates file collection from file processing. If the voice network goes down, the
collector continues to process files in the file service. If the file service goes down, when it comes
back up it continues collecting and processing files where it left off. If the collector goes down,
the file service continues collecting files from the voice switches and continues distributing the
files to the collector when it comes back up.
l You can reconfigure file collection and file processing separately when, for example, you upgrade
your voice switches.
l You can direct files from specific switches to specific collectors.
l With upcoming scheduling policies, you'll be able to configure load balancing by directing CDR
files to less busy collectors.

Conceptual Structure of the File Service


The file service reads CDR files from multiple voice switches and sends the CDR files to one or more
collectors. The file service consists of two separate but cooperating components:

l File Collection Service reads CDR files from multiple switches.


l File Distribution Service sends the CDR files to one or more collectors.


The following sections describe the component structure of the file collection service and the file
distribution service.

The File Service Components


The following diagram shows all the file service components including four input data sources and a
single output collector. The following sections describe each component in detail.

NOTE: The file collection service, the file distribution service and all leaf collectors reading files
from the file distribution service may be running on the same or on different eIUM host systems
in a deployment.

File Collection Service Components


The file collection service contains the following components:

l A FileCollectionServiceProcess is the top-level container component.


l A FileServiceDatastore stores recovery information for the file collection service; this information is used only when the file collection service restarts.
l A JobNotifier notifies the file distribution service when CDR files have arrived and are ready to be
distributed to a collector. The JobNotifier names a database table where it places information
about each CDR file. The file distribution service uses this table to obtain information about the
input CDR files to pass each file to the appropriate collector.
l One FileCollector per input data source reads CDR files. The FileCollector is similar in some ways
to an encapsulator. Each FileCollector reads CDR files from one voice switch.

File Collector Components


Each file collection service contains one FileCollector component for each voice switch input data
source. The FileCollector is similar to a collector's encapsulator in that it reads input data from some external data source. In this case the external data source is a voice switch, so the FileCollector needs to specify how to access the external data source. It does so by specifying a StreamSource. The StreamSource can be, for example, an FTPSource component described in the eIUM Component Reference. The FTPSource specifies, among other things, the FTP server name, a user name and a
password for logging into the FTP server.

NOTE: The StreamSource must be a file-based source, such as FTPSource or FileSource. You can
read CDR files from voice switches that use the FTAM protocol with the RemoteCommandSource
and HPUXFTAMCommandSourceDriver components. Other kinds of sources such as TCPSource
and UDPSource are not supported.

The StreamSource typically needs a FileRollPolicy configured under it to specify how the files are
named and where they are located. For example, a DirectoryPolicy specifies a directory path where
input files are located.

See "Configuration Structure of the File Service" (on page 92) for details on the configuration of
these components.

File Distribution Service Components


The file distribution service receives notifications of CDR files and passes the files to configured
collectors. The file distribution service consists of the following components:

l A FileDistributionServiceProcess is the top-level container component.


l A Job Manager periodically queries the common database notification table, shared by the file
collection and distribution services, for CDR files ready for distribution. The JobManager must be
configured with the same database table name specified in the JobNotifier. This table contains
information about the CDR files to be passed to collectors.
l A Collector Manager contains Collector Resources and maps data sources to collectors. It specifies which data sources' CDR files are to be distributed to each collector.
l Collector Resources notify ordinary eIUM leaf collectors when CDR files are available to be
processed. You configure one Collector Resource for each leaf collector that is to process the
CDR files.
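
A corresponding skeleton for the file distribution service is sketched below. The node names are again illustrative, and the attributes that map each data source to a Collector Resource are omitted because this guide does not name them; see the eIUM Component Reference and the file service templates for working examples.

[/deployment/host/FileDistributionService]
ClassName=FileDistributionServiceProcess # abbreviated class name

[/deployment/host/FileDistributionService/JobManager]
ClassName=JobManager
TableName=FS_NOTIFICATIONS # must match the table named by the JobNotifier

[/deployment/host/FileDistributionService/CollectorManager]
ClassName=CollectorManager
# Maps input data sources (for example Switch01) to the Collector Resources
# configured below; the mapping attributes are omitted in this sketch.

[/deployment/host/FileDistributionService/CollectorManager/CollA]
ClassName=CollectorResource # represents one leaf collector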

Collectors that Read CDR Files from a File Service


The collectors that read CDR files from a file service are ordinary eIUM collectors. The only unique
aspect of this type of collector is that it must use a JobReceiverFileRollPolicy component within its
encapsulator, which must be a RecordEncapsulator. (The RecordEncapsulator is the only
encapsulator that uses a file roll policy.) Otherwise it can use any other encapsulator
subcomponents, an aggregator (or rule engine) and a datastore like any other collector.

See "Configuration Structure of the File Service" (on page 92) for a description of where the
JobReceiverFileRollPolicy fits in a collector.

The Notification Table


The file collection service, file distribution service and the collectors use the same database
notification table to persist and share information. The file collection service's JobNotifier
component, the file distribution service's JobManager component and the collector's
JobReceiverFileRollPolicy component must share the same database configuration. All three
components must name the same table in their TableName configuration attribute. The notification
table is also used to track process-related file ownership.
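
On the collector side, the JobReceiverFileRollPolicy sits where the file roll policy normally goes in a RecordEncapsulator and must name the same table. The node path below simply follows the collector examples earlier in this guide and is an assumption; only the TableName requirement is taken directly from the text.

[/deployment/host/LeafCollector/Encapsulator]
ClassName=RecordEncapsulator

[/deployment/host/LeafCollector/Encapsulator/RecordFactory/StreamSource/FileRollPolicy]
ClassName=JobReceiverFileRollPolicy
TableName=FS_NOTIFICATIONS # the same notification table named by the JobNotifier and JobManager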

CDR File Ownership and Cleanup


Each CDR file read by the file service is carefully tracked as it passes through the file service. When
the file collection service fetches a file, it writes information about the file in the notification table.
At this point, the file collection service owns the file.

Next the file collection service passes the file to the file distribution service, which becomes the new
owner.

Finally, the file distribution service passes the file to one particular collector and the collector
becomes the new owner. As with all eIUM collectors, the collector is responsible for the cleanup of
the CDR file after it has processed the file. See the cleanup handlers in the eIUM Component
Reference for available cleanup handlers.

After the collector has finished processing the file, it removes the information about the file from
the notification table. The file distribution service does not clean up or remove the file.

Recovery
The file service implements recovery similar to other eIUM components.

If the collector stops in the middle of processing a file, when it restarts it will reprocess that file
since it had not flushed any data from that file. The file collection service continues fetching files
and sending them to the file distribution service. The file distribution service queues the files until
the collector restarts.

If the file collection service stops, it uses its recovery information to refetch any files it was in the
middle of fetching. Then it continues fetching files as configured. The file distribution service
continues sending files it has to the collectors.

If the file distribution service stops, the file collection service continues fetching files and just holds
them. The collectors cannot process any more files until the file distribution service restarts.

Configuration Structure of the File Service


This section shows the configuration structure for a file collection service, a file distribution service
and a collector reading from a file distribution service. It shows the configuration node names and
the components configured at each node. For complete details on each component and its
configuration attributes, see the eIUM Component Reference.

NOTE: This section describes most but not all the file service components. See the eIUM
Component Reference for complete details on all components. See also the eIUM Template
Reference and the file service templates for complete, working examples of these components.

File Collection Service Components


The following diagram describes the specific components in a file collection service, where they are
configured and some of the key configuration attributes. The figure shows a file collection service
with only one File Collector. You would more likely configure multiple File Collectors under the file
collection service, one for each input data source or voice switch.

The figure below shows a file collection service with two File Collectors.


File Collection Service with a Preprocessor


The following diagram shows a file collection service with a Preprocessor. The Preprocessor is a
subcomponent of the File Collector component and it operates on each file read by the File Collector.
One function of a preprocessor could be to mark each input file with the name of the originating
voice switch. For example, the preprocessor might rename each input CDR file to include a string
that identifies the voice switch that generated the file. Another example would be for the
preprocessor to add a sequence number to each CDR file name.

Another case is when each switch reuses the same file rather than generating new files. For
example, a switch might write CDRs to the file CDRFILE and simply reuse that file after the
mediation system reads it. When the file service reads the file, it could add a sequence number to
the file name to make it unique.

The example below shows a File Collector with a RenameFilePreprocessor that can prepend or
append a string to each file name, or provide a completely new file name for each input file. The
RenameFilePreprocessor contains a SequenceNumberFilePolicy that generates a new sequence
number to be used by the RenameFilePreprocessor.

The StreamSource is an FTPSource, indicating the file collection service fetches CDR files from the
switch via FTP. The FTPSource uses a DirectoryPolicy, indicating it fetches all files from the specified
directory on the voice switch, after possibly applying file name filters.

File Distribution Service Components


The following diagram describes the specific components in a file distribution service, where they
are configured and some of the key configuration attributes. Notice that the JobManager specifies
the same notification table name as specified in the JobNotifier of the file collection service. Notice
also that the CollectorManager names the input sources specified in the file collection service and
associates them with a Collector Resource. The Collector Resource represents an actual collector
that will process the CDR files.


If you had more than one collector, you would configure multiple Collector Resource nodes under
the Collector Manager in the file distribution service, as shown below.

You can also specify that files from a switch go to any of a group of collectors. The following shows a
Collector Manager that sends files from switches 03, 04 and 05 to either collector CollC or CollD,
whichever is available next.
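In configuration terms, such a grouping corresponds to SourceResourceMapping entries in the
Collector Manager like the following sketch (the source and collector names are illustrative; the
attribute syntax is shown under "Using File Service Commands" later in this chapter):

[/deployment/host/SampleFDS/ResourceManager]
ClassName=CollectorManager
SourceResourceMapping=switch03,CollC,CollD
SourceResourceMapping=switch04,CollC,CollD
SourceResourceMapping=switch05,CollC,CollD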


Collector Components
The following diagram describes the specific encapsulator components in a collector that reads CDR
files from a file distribution service. The rest of the collector (the aggregator and datastore) is no
different from any other collector. Notice that the JobReceiverFileRollPolicy specifies the same
notification table name as specified in the JobManager of the file distribution service. For complete
details on the JobReceiverFileRollPolicy, see the eIUM Component Reference.

Designing and Deploying a File Service


This section presents some considerations when designing and deploying a file service. See also
"Creating a New File Service with the File Service Wizard" (on page 97) for information on how to use
the Launchpad and eIUM templates to create your file service.

Determine Input Data Sources


The first step is to determine how many input devices you have and what type they are. If you have
voice switches, how many voice switches? Are the voice switches identically configured except for
their IP address?

Configure one File Collector for each switch in your file collection service.

Do you need to identify which switch each file came from? If so, use a preprocessor to mark each
file. For example, the preprocessor could rename each file to include a string indicating the origin
switch.

How frequently does the file collection service need to fetch CDR files from each switch? To answer
this question, for each of your switches determine the following:

l How many CDR files does the switch generate?


l How frequently are files generated?
l How large are the CDR files?
Configure the frequency of fetching files from the switches using the QueryInterval attribute in the
file collection service component.
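For illustration only, a fragment such as the following might set the query interval; the node path
and the value format shown here are assumptions, so check the file collection service component
in the eIUM Component Reference for the exact attribute syntax:

[/deployment/host/SampleFCS]
ClassName=FileCollectionServiceProcess
# Assumed duration format; fetch files from the switches every five minutes
QueryInterval=0d0h5m0s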


Determine the Number of Output Collectors


Determine how many collectors you will use to read the CDR files. This depends on the volume of
data coming from the file collection service.

Configure one Collector Resource for each collector.

Determine how to partition CDR files among collectors. For example, if you have ten voice switches
and two collectors, you might send CDR files from switches one through five to the first collector
and files from switches six through ten to the second collector.

Configure the mapping of CDR files to collectors using the SourceResourceMapping attribute of the
ResourceManager component.

Configure Collectors
Finally, configure the collectors that will ultimately process the CDR files from the file distribution
service. Use the JobReceiverFileRollPolicy in the collector's encapsulator.

Test, Measure and Adjust


Test your file service with realistic input data and adjust the number of components, the query
intervals, and the mapping of CDR files to collectors to maximize performance and allow for growth
in volume of CDR files.

Creating a New File Service with the File Service Wizard


Create a new file service with the Launchpad File Service Wizard as follows. See also "Designing and
Deploying a File Service" (on page 96).

1. In the Launchpad, select the Tools -> Deployment Builders -> Create File Service Components
menu item. This displays the File Service Wizard screen as shown below.


Enter the following information in this screen.


File Service Wizard Screen 1

Input Field            Description

File Service Host      Select the eIUM host where the file service is to run.

File Service Name      Provide a name for the file service.

FTP                    Select FTP if the file collection service is to fetch files using FTP.

Stream                 Select Stream if the file collection service is to fetch files using any
                       method other than FTP, such as local files.

How Many Sources?      Enter the number of input devices the file collection service is to fetch
                       files from. For example, if you have 25 voice switches, enter 25.

How Many Collectors?   Enter the number of eIUM collectors that will process the input files. (On a
                       later screen you will map input sources to collectors.)

2. Click the Next button. If you selected FTP as the file transfer mechanism, this displays the
following screen. If you selected Stream as the file transfer mechanism, skip to step 3.


NOTE: In this step, you will configure one FTP server. All the input FileCollectors will be
configured with the same FTP server information. Later you will need to edit each
FileCollector to refer to the proper FTP server.

Enter the following information and then skip step 3 and go to step 4.
File Service Wizard Screen 1.1, FTP Server Setup

Input Field               Description

FTP Server                Enter the name of the FTP server where files will be retrieved from.

User Name                 Enter the user name to authenticate to the FTP server.

Password                  Enter the password of the user of the FTP server.

Remote Source Directory   Enter the directory where files are located on the remote device.

Local Cache Directory     Enter the directory on the local host where the files are to be placed.

3. If you selected Stream as the method of reading input files, enter the directory where the files
will be located. If you are using some other method of retrieving files, enter any text that will
help you find this location in the configuration for easy editing later.


4. Click the Next button. This displays the File Service Collector screen as shown below.

Specify the information described below in the File Service Collector screen.


File Service Wizard Screen 2, File Service Collector

Input Field               Description

Collector Base Name       Enter the name to be used for all collectors that are to read and process
                          files from the file distribution service. The wizard creates the number of
                          collectors you specified on the first screen and appends a number to the
                          name you specify here to give each collector a unique name.

File Collection Name      Enter the name of the file collection service to be created.

File Distribution Name    Enter the name of the file distribution service to be created.

Notification Table Name   Enter the name of the database table to be used for notifications between
                          the file collection service, file distribution service and collectors. For
                          more information see "The Notification Table" (on page 90).

5. Click the Next button to display the Update Configuration screen as shown below.

Verify the information you entered. Use the Previous button to go back and correct anything.
6. Click the Next button to display the final screen.


7. Click the Finish button to exit the wizard. Continue with "Examine and Modify Your File Service"
(on page 102).

Examine and Modify Your File Service


After creating a file service with the file service wizard, you must modify the configuration as
described below. In addition, examine all the file service configurations to verify they reflect your
needs.

NOTE: You can edit your file service any time by double clicking on it in the Launchpad, or by right-
clicking on it and selecting the Edit menu item, or by selecting it and selecting the Actions -> Edit
menu item.

You can also use the Deployment Editor to modify the file service.

1. Open the file collection service. Open each input data source and modify them as needed. For
example, you will most likely have to modify the RemoteFileSource and the FileRollPolicy.
2. Open each collector created by the file service wizard. You will most likely need to modify the
encapsulator and parser. The default aggregator simply has a StoreRule, so add your business
logic to the aggregator.
3. Open the file distribution service. Use the source to collector mapping screen to assign the files
from each of the Available Sources to one or more Destination Collectors. Each available input
file source can be mapped to one or more collectors. The Selected Sources box shows which
Available Sources are mapped to the displayed Destination Collector.


The assignment of files to collectors is governed by the JobSelectorPolicyFactory and the
ResourceSelectorPolicyFactory of the CollectorManager component. See the CollectorManager
component in the eIUM Component Reference for more information.

Verifying File Service Components


Once you have created a file service, test it by starting the file collection service and examining its
statistics. In the Launchpad, right-click on the file collection service then select the statistics menu
item to display the statistics for the file collection service. Examine the statistics to make sure files
are being consumed and passed to the file distribution service.

Check the statistics in a similar way for the file distribution service to make sure it is sending files to
the appropriate collectors. Check the collectors to make sure they are receiving files from the file
distribution service and processing them correctly.

Administering the File Service


You can use the Launchpad or commands to operate the file collection service and file distribution
service in your deployment. This section describes these methods.

Using the Launchpad


Use the Launchpad to create, start, stop, clean up and delete the file services and to edit the
configurations of the file service components. You can also view file service statistics and component
log files, set the log level, set the run-time properties, and save your customized file collection
service and file distribution service as templates.

In the Launchpad, select the file collection service or file distribution service, then right-click to bring
up the menu. Or select the file collection service or file distribution service, then select the
appropriate Launchpad menu item.

Viewing File Service Statistics in the Launchpad


Just like collectors, the file collection service and file distribution service publish statistics
information while they are running. This statistics information shows status since the last startup.
Once the server is restarted, all statistics information that has been collected up to that point is
reset.

File Collection Service Statistics


The file collection service typically collects files from numerous data sources, as specified in its
configuration. Associated with each data source is a file collector that collects files from that data
source. The file collection service maintains and publishes information about the file collection
service and about each individual file collector. To view the statistics information published by the
file collection service, right click on the file collection service and select the View Statistics menu
item. This displays the following information:

l Files Collected - The total number of files collected by the file collection service since startup.
l The following statistics for each individual input data source:
n Data Source - The name of each input data source from the file collection service
configuration.


n Files Collected - The number of files collected from each input data source since file collection
service startup.
n Last File Name - The name of the last file collected from each input data source.
n Last File Time - The time the last file was collected from each data source.

NOTE: If a data source is deleted after the file collection service is started, all statistics regarding
that data source continue to display until the file collection service is restarted.

File Distribution Service Statistics


The File Distribution Service distributes files from each data source to collectors as specified in the
file distribution service's configuration. The file distribution service maintains and publishes
information about the file distribution service and about each data source. To view the statistics
information published by the file distribution service, right click on the file distribution service and
select the View Statistics menu item. This displays the following information.

l Files Distributed - The total number of files distributed by the file distribution service since
startup.
l The following statistics about each file distributed from each input source to each output
collector.
n Datasource - The name of each input data source from the configuration.

n Queued - The total files from each source that are ready for distribution to the next available
collector.
n Distributed - The total files distributed from each source to all configured resources.
n Collector Name - The number of files distributed from each input source to each collector.

NOTE: After the file distribution service is started, if you change the source resource mapping or
if you delete any of the sources or resources, statistics pertaining to these are not reset until the
server is restarted.

Viewing File Service Statistics in the Operations Console


As with the Launchpad, you can also use the eIUM Operations Console to view file service statistics.
Namely, statistics for any running file collection service or file distribution service can be viewed on
the General tab for the process. For more information, see the Operations Console User Guide.

Using File Service Commands


This section describes the file service commands. See also the eIUM Command Reference.

l Use the jcscontrol command to start, stop, delete and get the status of a file collection service
or to clean up an individual file collector.
For example, the following configuration shows part of a file collection service named SampleFCS
that defines two file collectors, named FileCollector1 and FileCollector2 that correspond to the
two input sources named host1 and host2:
[/deployment/host/SampleFCS]
ClassName=FileCollectionServiceProcess
JobCollectors=host1,FileCollector1
JobCollectors=host2,FileCollector2

The following example will attempt to start the file collector named FileCollector2 in the file
collection service named SampleFCS:
jcscontrol -n SampleFCS -s host2 -c start

l Use the jcscleanup command to clean up a file collection service, similar to how the siucleanup
command cleans up a collector.
The following example cleans up a file collection service named FCS1:
jcscleanup -n FCS1

l Use the jdscontrol command to dynamically change how files are distributed by a file distribution
service. With the jdscontrol command you can assign collectors to process files collected from a
source, stop collectors from processing files from a source, add new data sources and remove
existing data sources. These changes become effective without restarting the file distribution
service.
For example, the following configuration shows part of a file distribution service named
SampleFDS that maps two sources named DataSource1 and DataSource2 to two resources
(collectors) named SampleFDSCollector1 and SampleFDSCollector2. Files from DataSource1 are
sent to SampleFDSCollector1 and SampleFDSCollector2. Files from DataSource2 are sent only to
SampleFDSCollector1.
[/deployment/host/SampleFDS]
ClassName=FileDistributionServiceProcess
...
[/deployment/host/SampleFDS/ResourceManager]
ClassName=CollectorManager
SourceResourceMapping=DataSource1,SampleFDSCollector1,SampleFDSCollector2
SourceResourceMapping=DataSource2,SampleFDSCollector1

The command below will attempt to create the following SourceResourceMapping configuration
attribute that sends files from DataSource3 to SampleFDSCollector3:
SourceResourceMapping=DataSource3,SampleFDSCollector3
jdscontrol -n SampleFDS -s DataSource3 -r SampleFDSCollector3 -c addSource

l Use the jdscleanup command to clean up a file distribution service, similar to how the siucleanup
command cleans up a collector.
The following example cleans up a file distribution service named FDS1:
jdscleanup -n FDS1

See the eIUM Command Reference for complete details about these and all eIUM commands.



Chapter 8

Understanding the Data Delivery Agent


This topic describes the Data Delivery Agent components of a collector. The Data Delivery Agent
provides a way for a collector to proactively send data to an application. For complete details on all
eIUM components, see the eIUM Component Reference.

Collector Components and the Data Delivery Agent


A collector consists of three components:

l The encapsulator reads the input data and places the data into a Normalized Metered Event or
NME, the data record format used by all eIUM components.
l The aggregator processes the NME data.
l The datastore stores the NME data and formats it for use by other collectors or applications.

You can configure a Data Delivery Agent as part of your datastore to push each flush of NME data
to an external application, such as a billing system, rating system, data control point or service
control point.

The Data Delivery Agent


The Data Delivery Agent provides a way for a collector to proactively send data to an application. It is
similar to the ApplicationDatastore but more flexible. You can use it to replace functions of the
ApplicationDatastore and the NotifyOnFlush datastore feature. Use the delivery agent as a
subcomponent of one of the following datastores:

l Use a data delivery agent under an IDRJDBCDatastore to deliver IDR files to an application.
l Use a data delivery agent under a FileJDBCDatastore to deliver NMEs to an application.
The Data Delivery Agent includes the following components:

l DeliveryFTP sends IDR files from an IDRJDBCDatastore to an FTP server. See "FTP Data Delivery"
(on page 113).


l DeliveryNMEAgent extracts NMEs from the binary datastore files of a FileJDBCDatastore and
sends them to any component that implements the UsageSendChannel interface, for example
the DeliveryJDBC component. See "JDBC Data Delivery" (on page 114).
l DeliveryJDBC sends NMEs to a JDBC-compliant database. See "JDBC Data Delivery" (on page 114).
Whenever the aggregator flushes NMEs to a datastore that has a configured Data Delivery Agent,
the following happens:

l The datastore notifies the Delivery Agent of the flush.


l The Delivery Agent sends a copy of the flushed usage data and metadata to the Application
Interface. The metadata includes general information about the usage data contained in the
flush.
l The Application Interface sends the data to the external application using whatever mechanism
is appropriate. You must write your own Application Interface that sends data to your application.
You can use the DeliveryNMEAgent and DeliveryJDBC eIUM components to send NMEs to a JDBC
database, or the DeliveryFTP component to send IDR files to an FTP server.

Since every aggregation scheme flushes data to the datastore, every aggregation scheme can have
one or more DeliveryAgents, as shown below. However, if writing to a database, make sure each
delivery agent updates a different table to avoid concurrency issues.
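For example, assuming the DeliveryAgent attribute can be repeated once per aggregation scheme
(the node and scheme names below are illustrative), a datastore with two schemes might be
configured like this:

[/deployment/host01/Demo/Datastore]
ClassName=FileJDBCDatastore
# One delivery agent per scheme; each should write to a different table
DeliveryAgent=deliveryA,scheme1
DeliveryAgent=deliveryB,scheme2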


The DeliveryAgent Component


The DeliveryAgent is an optional component configured under the Datastore. Whenever the
aggregator flushes, the DeliveryAgent sends the flushed data and metadata to the Application
Interface. The DeliveryAgent can send IDR files to the Application Interface or it can send NMEs to
the Application Interface. In either case, it will also send metadata associated with the flush in the
form of a Normalized Metered Event (NME) object. This metadata contains information about the
usage data being sent, such as the flush time, the end time of the data, and optionally some audit
data.

Delivery Agent That Sends Files


To deliver IDR files, you must use an IDRJDBCDatastore, which stores NMEs in a set of IDR files. The
data delivery agent delivers the file containing NMEs. The DeliveryAgent component uses the
DeliveryAgent interface to send the IDR file to the Application Interface. The Application Interface
is implemented by a Transport component, described below. You write the Transport component
specifically for your application. Or you can use the DeliveryFTP component to send the files to
another host running an FTP server. See "FTP Data Delivery" (on page 113).


Delivery Agent That Sends NMEs


To deliver NMEs you must use a FileJDBCDatastore, which stores NMEs in binary files. The data
delivery agent delivers a binary file so the individual NME records can be extracted. In this case, an
additional component, the DeliveryNMEAgent, can be configured under the DeliveryAgent component
to perform this extraction of NMEs from the binary files. The DeliveryNMEAgent uses the
UsageSendChannel interface to send the NMEs to the Application Interface. You can use the
DeliveryJDBC component supplied with eIUM for use with your application. See "JDBC Data Delivery"
(on page 114).

The following table summarizes the Delivery Agent and related components.

Components of File-Based and NME-Based Delivery Agents

Datastore Types           Delivery Agent Components                    Application Interface Components

IDRJDBCDatastore          DeliveryAgent sends IDR files to the         Transport component sends files
(Stores NMEs in IDR       Application Interface.                       to the application.
files.)

FileJDBCDatastore         DeliveryAgent sends binary files to the      NMEChannel component sends NMEs
(Stores NMEs in binary    Transport component. Transport component     to the application.
files.)                   extracts NMEs and sends NMEs to the
                          Application Interface.

The Transport Component


The Transport component is configured under the DeliveryAgent. The Transport component receives
files from a collector's Delivery Agent and passes them to a separate application or file. You can
write a custom transport component to send files to your application using the appropriate file
transfer mechanism. Or you can use the DeliveryFTP component to send the files to another host
running an FTP server. See "FTP Data Delivery" (on page 113).

The NMEChannel Component


The NMEChannel component is configured under the Transport component. The
NMEChannel component receives NMEs from the Transport component and passes them to a
separate application or file. You can write a custom NMEChannel component to send NMEs to your
application using the appropriate transfer mechanism.

When Audit is enabled on a collector, an NME channel may be used to collect the audit information
for that collector by specifying the audit scheme name (AUDIT_DS or AUDITSRC).

NMEChannel may also be used to send eIUM metadata, such as audit, to a remote audit archive
server or application (for example, revenue assurance analysis). The general metadata, with or
without auditing, is always available and may always be sent with its associated usage data for
every flush. Source and input audit data (which includes the audit exception list) exists only when
audit has been enabled; data delivery on source and input audit sets will fail if audit is disabled. Data
delivery for source and input audit sets can be configured by specifying their scheme names
(AuditSrc and Audit_DS, respectively) in the DeliveryAgent attribute.
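For instance, delivery of the source audit set might be enabled by naming its scheme in the
DeliveryAgent attribute; the delivery agent node name below is illustrative:

[/deployment/host01/Demo/Datastore]
ClassName=FileJDBCDatastore
DeliveryAgent=auditDelivery,AuditSrc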

This component should only be configured when the datastore is a FileJDBCDatastore, or for Audit
schemes.

Configuring the Data Delivery Mechanism


You configure the data delivery mechanism at the datastore. The following shows the structure of
the configuration of these components for a file-based data delivery mechanism. The shaded blocks
are standard eIUM components, and the white blocks are custom Application Interface components.


The following diagram shows the structure of the configuration of these components for an NME-
based data delivery mechanism.

NOTE: The siucleanup command does not affect the destination of NMEs or files from the
DeliveryAgent. You are responsible for cleanup and aging of the data in the destination.

Recovery Behavior
With every file or NME the delivery agent sends, it waits for confirmation that the delivery was
received. If the collector goes down, it resends any data that was sent but not yet confirmed.

Delivery Agent Configuration Location


The Delivery Agent configuration must be at the following location in the configuration tree:

[/deployment/<host>/<collector>/Datastore/<deliveryAgentNode>]

where <deliveryAgentNode> is defined in the DeliveryAgent configuration attribute of the datastore.


See the IDRJDBCDatastore, FileJDBCDatastore and DeliveryAgent descriptions in the eIUM
Component Reference for complete details.

Transport Component Configuration Location


The transport mechanism configuration must be at the following location in the configuration tree:


[/deployment/<host>/<collector>/Datastore/<deliveryAgentNode>/<transportNode>]

where <deliveryAgentNode> is defined in the DeliveryAgent configuration attribute of the datastore
and <transportNode> is defined in the Transport configuration attribute of the delivery agent.

Transport Component Configuration Attributes


You create the transport component Java class to send files to your application from the IUM
collector. Use the ClassName attribute to specify the Java class of your transport component. You
can define additional configuration attributes as needed for your particular implementation.
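A hypothetical custom transport configuration might look like the following. Only ClassName is an
eIUM-defined attribute here; the class name and the remaining attributes are examples of
implementation-specific settings you might define yourself:

[/deployment/host01/Demo/Datastore/deliveryAgent/myTransport]
ClassName=com.example.delivery.MyFileTransport
# Implementation-specific attributes read by your transport class
Server=archive01.example.com
DestinationDir=/var/usage/incoming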

Transport Component for NME Delivery: DeliveryNMEAgent


Use the DeliveryNMEAgent to send NMEs to the consuming application. The specified NMEChannel
component must implement the UsageSendChannel interface to receive the NME data.

Example 1: Sending NMEs


The following example shows configuration of a datastore, a delivery agent, and an Application
Interface that send NMEs to an outside application. The datastore is a binary file
(FileJDBCDatastore). Whenever a flush occurs, the datastore notifies the delivery agent. The delivery
agent sends the just flushed NMEs to the transport component, which forwards the NMEs to the
NMEChannel which sends the NMEs to the outside application.

[/deployment/host01/FileDemoCollector/Datastore]
ClassName=FileJDBCDatastore
...
DeliveryAgent=myDA1,scheme1

# Below is the Delivery Agent. It delivers the binary files
# to the Transport component, at subnode nme.
[/deployment/host01/FileDemoCollector/Datastore/myDA1]
ClassName=DeliveryAgent
Transport=nme

# Below is the Transport component for NME delivery,
# DeliveryNMEAgent. It takes the IUM binary file and breaks it up
# into NME records. It then passes those records to the
# NMEChannel component, at subnode myWrapper.
[/deployment/host01/FileDemoCollector/Datastore/myDA1/nme]
ClassName=DeliveryNMEAgent
NMEChannel=myWrapper

# Below is the NMEChannel component. It sends the NME
# data to the Portal Application.
[/deployment/host01/FileDemoCollector/Datastore/myDA1/nme/myWrapper]
ClassName=mytools.delivery.PortalInterface
...

Example 2: Sending Files


The following example shows the configuration for an IDRJDBCDatastore that will deliver fixed-width
text files to a remote host using a user-defined file transport utility named myFileDelivery.


[/deployment/host01/Demo/Datastore]
ClassName=IDRJDBCDatastore
IDRType=FixedWidth
DeliveryAgent=deliveryAgent,scheme1
...

# Below is the DeliveryAgent that will deliver the files
# to the Transport component.
[/deployment/host01/Demo/Datastore/deliveryAgent]
ClassName=DeliveryAgent
Transport=fileDelivery
BlockAging=true
RetryCount=4
RetryInterval=0d0h5m0s
FailureAction=ERROR

# Below is the Transport component that will send the files
# to a remote host.
[/deployment/host01/Demo/Datastore/deliveryAgent/fileDelivery]
ClassName=myFileDelivery
Server=host01.hp.com
UserName=operator
PassWord=mypassword
DestinationDir=idrData
CacheDir=idrData/temp

FTP Data Delivery


The Data Delivery Agent can be configured with an FTP Delivery Agent component. This component
allows the data from a collector to be sent to another host using the FTP protocol.

The DeliveryFTP component handles file delivery from eIUM to a specified host running an
FTP server. The user name and password are specified in the configuration and used to establish a
connection to the server. The files are sent to a specified directory on the server.

Additional characteristics of the FTP transport can be specified. For example, if the file should be
cached before being delivered to the final destination, a temporary directory can be specified that is
used for the file transfer. After the transfer has completed successfully, DeliveryFTP uses the FTP
rename command to move the file to its final destination. Because the rename is atomic on most
operating systems, this prevents any other operation from acting on the file before the transfer is
complete.

Connection time-out control is also possible, allowing the connection to be automatically broken for
each delivery (flush), or to attempt to maintain the connection during the collector session.

The DeliveryFTP Component


The DeliveryFTP component is a Transport component configured under the DeliveryAgent. It
specifies an FTP server as the destination for the delivery.

DeliveryFTP Configuration Location


The DeliveryFTP configuration must be at the following location in the configuration tree:


[/deployment/<host>/<collector>/Datastore/<deliveryAgentNode>/<deliveryFtp>]

where <deliveryAgentNode> is defined in the DeliveryAgent configuration attribute of the datastore,
and <deliveryFtp> is the Transport component specified by the Transport attribute of the
<deliveryAgentNode>.

Example: FTP Delivery


The following example shows the configuration for an IDRJDBCDatastore that will deliver fixed-width
text files to a remote host using FTP.

[/deployment/host01/Demo/Datastore]
ClassName=IDRJDBCDatastore
IDRType=FixedWidth
DeliveryAgent=deliveryAgent,scheme1
...

# Below is the DeliveryAgent that will deliver the files
# to the Transport component.
[/deployment/host01/Demo/Datastore/deliveryAgent]
ClassName=DeliveryAgent
Transport=ftpDelivery
BlockAging=true
RetryCount=4
RetryInterval=0d0h5m0s
FailureAction=ERROR

# Below is the Transport component that will send the files
# to a remote host via FTP.
[/deployment/host01/Demo/Datastore/deliveryAgent/ftpDelivery]
ClassName=DeliveryFTP
FTPServer=host01.hp.com
UserName=operator
PassWord=mypassword
DestinationDir=idrData
CacheDir=idrData/temp
[/deployment/host01/Demo/Datastore/deliveryAgent/Header]
...

JDBC Data Delivery


The Data Delivery Agent can be configured with a JDBC Delivery Agent component. This component
allows the data from a collector to be sent to any JDBC-compliant database.

The DeliveryJDBC component is similar in function to the ExternalJDBCDatastore in that it offers a
flexible way to distribute data into a table or set of tables in a JDBC-compliant database. It maps
the individual NME attributes to specified columns in any database table. However, because it runs
as a Delivery Agent, and not as a Datastore, it allows collector operations to continue even when
the target database is not available. Unlike the ExternalJDBCDatastore, DeliveryJDBC has the ability
to recover from a lost database connection, and can then resume delivery once the database
becomes available again.


The DeliveryJDBC Component


The DeliveryJDBC component is an NMEChannel component configured under the
DeliveryNMEAgent. This specifies the ExternalSchema nodes that describe the mapping between the
NME attributes and the database table and columns.

DeliveryJDBC Configuration Location


The DeliveryJDBC configuration must be at the following location in the configuration tree:
[/deployment/<host>/<collector>/Datastore/<deliveryAgentNode>
/<transportNode>/<deliveryJDBC>]

where <deliveryAgentNode> is defined in the DeliveryAgent configuration attribute of the datastore,
<transportNode> is the Transport component defined in the Transport attribute of the
<deliveryAgentNode>, and <deliveryJDBC> defines the DeliveryJDBC component. The Transport node
should be set to the DeliveryNMEAgent component. See "Configuring the Data Delivery Mechanism"
(on page 110) for a diagram of this configuration structure.
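Instantiating that location, a minimal sketch might look like the following; the node names are
illustrative and most DeliveryJDBC attributes are omitted (see the eIUM Component Reference for
the full set, including the ExternalSchema mapping nodes):

[/deployment/host01/FileDemoCollector/Datastore/myDA1]
ClassName=DeliveryAgent
Transport=nme

[/deployment/host01/FileDemoCollector/Datastore/myDA1/nme]
ClassName=DeliveryNMEAgent
NMEChannel=jdbcOut

[/deployment/host01/FileDemoCollector/Datastore/myDA1/nme/jdbcOut]
ClassName=DeliveryJDBC
...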

eIUM SQL Data Type Mappings


The following table shows how eIUM attribute types correspond to SQL data types.

eIUM Attribute SQL Data Type Mappings

NME Attribute      Java Native       Standard JDBC            SQL Data Types and Extensions
Type               Data Types        Data Types               SQL Defaults         Overwrite Defaults

IntegerAttribute   int               java.sql.Types.INTEGER   INTEGER, NUMBER      INTEGER, NUMBER

LongAttribute      long              java.sql.Types.BIGINT    VARCHAR, INTEGER,    VARCHAR, INTEGER,
                                                              NUMBER               NUMBER, BIGINT, LONG

IPAddrAttribute    int               java.sql.Types.INTEGER   INTEGER, NUMBER      INTEGER, NUMBER,
                                                                                   VARCHAR

TimeAttribute      int               java.sql.Types.INTEGER   INTEGER, NUMBER,     INTEGER, NUMBER,
                                                              DATE, TIMESTAMP      DATE, VARCHAR

StringAttribute    java.lang.String  java.sql.Types.VARCHAR   VARCHAR,             VARCHAR,
                                                              LONGVARCHAR          LONGVARCHAR

FloatAttribute     float             java.sql.Types.FLOAT     FLOAT, REAL,         FLOAT, REAL,
                                                              NUMBER               NUMBER

DoubleAttribute    double            java.sql.Types.DOUBLE    DOUBLE, FLOAT,       DOUBLE, FLOAT,
                                                              NUMBER               NUMBER



Chapter 9

Using Structured NMEs


A large voice network typically has many voice switches. Each switch generates files of usage data in
the form of call detail records, or CDRs. CDRs are often formatted using ASN.1 (Abstract Syntax
Notation One) and encoded using BER (Basic Encoding Rules). These file formats define complex
hierarchical data structures. The HP Internet Usage Manager (eIUM) mediation system can read and
process CDRs in these or virtually any other complex data format, such as AMA BAF or CIBER.

eIUM normalizes CDRs or any other input data into NMEs (Normalized Metered Events). All eIUM
components understand and operate on NMEs. NMEs contain not only the data from the CDR, but
also the hierarchical structure of the data and information about which data fields are set. When
you define your NMEs, you can specify which data fields are optional, which helps facilitate data
validation. NMEs are a powerful data structure that lets you efficiently implement your business logic
and support your business applications.

This topic series describes how to use structured NMEs to read and process complex hierarchical
CDRs.

Overview of NMEs 119

Traditional NMEs and the NME Schema 119

Structured NMEs 120

Defining Structured NMEs in the NME Schema 120

NMEAdapters 120

Operating on NMEAdapters 121

Key Points 123

Defining Structured NME Types 123

Templates for Collectors Using Structured NMEs 123

Components of Structured NME Types 124

Syntax of NME Type Definitions in the NME Schema 124

Namespaces 125

Type Aliases 126

Structured NME Types 128

Example NME Type Definition 129

Loading NME Type Definitions 130

Example 130

Using Structured NMEs 131

Example 131


Moving to Structured NMEs 132

Using NMEAdapters in a New Deployment 132

Using NMEAdapters in an Existing Deployment 132

Overview of NMEs
NMEs, or Normalized Metered Events, are the internal data structure used by all eIUM components.
All data read and processed by eIUM are converted to NMEs. All components previous to the IUM 4.5
Feature Pack 2 release used traditional NMEs. As of IUM 4.5 Feature Pack 2, IUM uses structured
NMEs and several new components that operate on structured NMEs and enable interoperability
with existing NMEs and with all existing components. This section describes both traditional NMEs
and structured NMEs and shows how you can use them to implement your mediation solution.

Traditional NMEs and the NME Schema


Traditional NMEs are flat one-dimensional records with no hierarchical structure. Any traditional
component, such as an encapsulator, that reads structured data converts the data into a flat
structure, discarding the information about the structure of the input data. In addition, traditional NMEs
have no capacity to handle optional data fields and they have no way of indicating whether a
particular data field has been set or not.

Traditional NMEs are defined by the NME schema and the parser component. The NME schema is a
data dictionary that defines all the possible fields of the data record. For traditional NMEs, the NME
schema refers to the list of name, type pairs defined for an eIUM deployment. Each name is a
user-defined string and each type is a basic data type like integer, string, time, IP address and so
forth. In the NME schema, the types are IntegerAttribute, StringAttribute, TimeAttribute,
IPAddrAttribute and so forth. These name, type pairs are defined in configuration at the
configuration node /NMESchema and loaded automatically into the configuration server. For
example, the following shows a few NME attributes defined in the NME schema:

[/NMESchema]
Attributes=Type,com.hp.siu.utils.IntegerAttribute
Attributes=StartTime,com.hp.siu.utils.TimeAttribute
Attributes=EndTime,com.hp.siu.utils.TimeAttribute
Attributes=SrcIP,com.hp.siu.utils.IPAddrAttribute
Attributes=DstIP,com.hp.siu.utils.IPAddrAttribute
Attributes=SrcPort,com.hp.siu.utils.IntegerAttribute
...

NME records themselves are composed of one or more of these name, type pairs that are
assembled into an NME. The parser typically lists the NME attributes contained in the NME.

The figure below shows a traditional NME comprising eight data fields, or NME attributes. The NME
attributes are of various types: time, IP address, integer (32 bits) and long (64-bit integer).

All eIUM components previous to the IUM 4.5 Feature Pack 2 release support traditional NMEs. The
IUM 4.5 Feature Pack 2 release and later continues support for traditional NMEs, but it also adds
structured NMEs.


Structured NMEs
Structured NMEs are a substantial improvement over traditional NMEs.

l Structured NMEs store complete information about the hierarchical arrangement of complex
data records.
l Structured NMEs contain information about which data fields have been set and which have not.
l Structured NMEs can handle optional data fields.
Structured NMEs retain all the hierarchical information of structured data. For example, the
following shows a structured NME that consists of an NME and a sub-NME:

The top level NME consists of four NME attributes: CreateTime, NumRecord, FileName and Data.
The NME attribute named Data is actually a logical reference to another NME sometimes called a
sub-NME. The sub-NME consists of eight NME attributes: StartTime, EndTime, SrcIP and so forth.

A slightly more complex structured NME is shown in the diagram below. This structured NME has the
same basic structure, but in this case the sub-NME is actually an array of three NMEs:

Structured NMEs were introduced in the IUM 4.5 Feature Pack 2 release. Several components were
also introduced that work on structured NMEs and enable interoperability with all traditional eIUM
components.

Defining Structured NMEs in the NME Schema


You define structured NMEs in a new style of NME schema configuration, and then use the
NMESchemaLoader component to load the NME schema. This makes all components aware of and
able to operate on your structured NMEs. For more information, see "Defining Structured NME
Types" (on page 123) and "Loading NME Type Definitions" (on page 130).

NMEAdapters
eIUM implements structured NMEs in NMEAdapters. NMEAdapters are a new type of NME that
contains both a traditional one-dimensional NME part, the same as NMEs in all previous versions of
eIUM, and a structured NME part that contains arbitrarily complex hierarchical data such as that
commonly found in voice CDRs. The traditional NME part contains a hidden pointer to the structured
NME part. The diagram below shows an NMEAdapter and how it consists of a traditional NME part
and a structured NME part:


Note that the traditional NME part contains a Hidden Pointer that points to the structured NME
part. This hidden pointer connects the traditional NME part to the structured NME part and enables
interoperability between traditional NMEs and structured NMEs.

Operating on NMEAdapters
All existing eIUM components operate on the traditional NME part and are unaware of the hidden
pointer and the structured NME part. New components that operate on NMEAdapters can operate
on both the traditional NME part and the structured NME part.

To operate on structured NMEs, you must use components that are aware of and operate on
NMEAdapters. The following are some of the components that operate on NMEAdapters.

NME Schema Loader


With traditional NMEs, the NME is typically defined in the parser of the encapsulator. That is, the
parser configuration lists all of the NME attributes that the parser populates with values from the
incoming data record. All NME attributes are defined by the NME schema and automatically loaded
by your eIUM deployment.

For structured NMEs, you must define structured NME types in a separate configuration entry and
you must configure the NMESchemaLoader component to load them. The NMESchemaLoader is a
component you configure as part of your collectors. The NMESchemaLoader reads structured NME
type definitions from the configuration and loads them into the NME schema.

For more information, see "Defining Structured NME Types" (on page 123) and "Loading NME Type
Definitions" (on page 130).

Encapsulators
Encapsulators that work with NMEAdapters read structured data and generate a structured NME in
an NMEAdapter. All other encapsulators generate traditional one-dimensional NMEs. Below are
some of the encapsulators that generate NMEAdapters. For more information on these
components, see the eIUM Component Reference.


l BERFileEncapsulator reads ASN.1 data encoded in BER format, generates an NMEAdapter and
places the data into the NMEAdapter.
l NortelUMTSEncapsulator reads Nortel UMTS version 3.0 format data files, generates
NMEAdapters and places the data into the NMEAdapters.
l SNMEReaderFactory is used with the NMEFileEncapsulator to read structured NMEs from binary
files created by a FileJDBCDatastore and an SNMEWriterFactory. That is, the
NMEFileEncapsulator and SNMEReaderFactory components together read structured NMEs from
a FileJDBCDatastore and generate NMEAdapters.

Rules
Traditional rules work with traditional NMEs but can also work with the traditional NME part of
NMEAdapters. Some of the rules and related components that work with the structured NME part of
NMEAdapters are listed below. For more information on these components, see the eIUM Component
Reference.

l NMEConversionRule works with NMEAdapters and copies attribute values from the structured
NME part to the traditional NME part, or vice versa. This rule is like the AdornmentRule in that it
copies NME attribute values. In addition, this rule can convert a traditional NME to an
NMEAdapter and add a structured NME part. In this case, it can also copy data values from the
traditional NME part to the structured NME part.
The NMEConversionRule is essential because there are not many rules that work with
NMEAdapters, and the NMEConversionRule enables you to use the full set of existing rules with
NMEAdapters. By copying the data from the structured NME part to the traditional NME part, all
existing eIUM rules can operate on the data in the traditional NME part of the NMEAdapter. After
these rules implement your business logic, you use the NMEConversionRule once again to copy
the data values from the traditional NME part to the structured NME part.
l SplitArrayRule works with NMEAdapters and splits an array of sub-NMEs in the structured NME
part into separate individual NMEs. It sends each of the split NMEs to the rule following the
SplitArrayRule. This allows you to process each NME in an array of NMEs separately. Alternatively,
you can define a side rule chain. This rule sends each of the split NMEs to the side rule chain,
then sends the full NME (as modified by the side rule chain) on to the rule following the
SplitArrayRule.
l SNMEValidationCheck is a condition of a FilterRule or a ConditionalRule that checks whether all
required attributes in a structured NME are set. It validates the structured NME part of an
NMEAdapter. If all the required attributes of the structured NME are set, this component returns
a value of true. If any required attribute is not set, this component returns false.
You must explicitly designate NME attributes as optional in the structured NME type definition. By
default, NME attributes are required to be set. Required NME attributes are all those not
designated optional in the NME type definition in the NME schema.
Use the SNMEValidationCheck as a condition to a FilterRule or a ConditionalRule. To configure
this component with a FilterRule or ConditionalRule, you must configure it as a subcomponent of
the rule and name the configuration subnode Condition. See the FilterRule and the
ConditionalRule in the eIUM Component Reference for more information.
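As an illustration only (the collector and rule node paths are hypothetical, and the FilterRule
attributes other than ClassName are omitted), an SNMEValidationCheck used as the condition of a
FilterRule could be configured like this:

[/deployment/host01/VoiceCollector/Aggregator/scheme1/ValidateRule]
ClassName=FilterRule
...

[/deployment/host01/VoiceCollector/Aggregator/scheme1/ValidateRule/Condition]
ClassName=SNMEValidationCheck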


Datastores
Traditional datastores store traditional NMEs. To store NMEAdapters, use a FileJDBCDatastore
along with an SNMEWriterFactory to write NMEAdapters to the datastore. That is, the
FileJDBCDatastore and the SNMEWriterFactory together write structured NMEs to a datastore. The
FileJDBCDatastore stores data in binary files.

You must use an NMEFileEncapsulator with an SNMEReaderFactory to read structured NMEs from
the binary files created by the SNMEWriterFactory. For more information on these components, see
the eIUM Component Reference.

Key Points
Below are the key points you need to know about NMEAdapters. Keep these in mind as you begin
working with NMEAdapters.

l Structured NMEs are implemented as a part of NMEAdapters. An NMEAdapter consists of a


structured NME part and a traditional NME part.
l First-level collectors use an encapsulator that reads structured data and generates
NMEAdapters. The data is only in the structured NME part of the NMEAdapter. The traditional
NME part contains no data. See the BERFileEncapsulator and the NortelUMTSEncapsulator in the
eIUM Component Reference for examples.
l Use the NMEConversionRule to copy data values from the structured NME part to the traditional
NME part.
l Use any traditional rules to operate on the data in the traditional NME part of the NMEAdapter.
All traditional rules work only with the traditional NME part of the NMEAdapter.
l Use the NMEConversionRule to copy data values from the traditional NME part to the structured
NME part.
l Save the NMEAdapters in the datastore by using the SNMEWriterFactory in conjunction with a
FileJDBCDatastore.
l Second and higher level collectors can read in NMEAdapters from another collector by using the
NMEEncapsulator in conjunction with the SNMEReaderFactory.

Defining Structured NME Types


This section describes how to define the structured NME types that correspond to your input data.
eIUM includes collector templates for many voice switches and other devices that produce
structured CDRs so in many cases you do not need to define these yourself. However, this section
can be helpful if you need to read or modify the NME type definitions provided in the collector
templates.

Templates for Collectors Using Structured NMEs


The following templates generate NMEAdapters. See the eIUM Template Reference for descriptions
of eIUM templates.

l GSM - Ericsson R10 MSC (structured NME support) - This is a template for a collector that
parses MSC charge data records from BER-encoded ASN.1 files produced by Ericsson AXE-10 R10
exchanges.


l GSM - Ericsson R9 MSC (structured NME support) - This is a template for a collector that parses
MSC charge data records from BER-encoded ASN.1 files produced by Ericsson AXE-10 R9
exchanges.
l GSM - Ericsson TAP3 Roaming (structured NME support) - This is a template for a collector that
parses TAP3 records from BER-encoded ASN.1 files produced by Ericsson AXE-10 exchanges.

Components of Structured NME Types


NMEs or Normalized Metered Events are the internal record format used by all eIUM components.
You define the structure and content of NMEs based on the input data you are reading, which
typically comes from network devices such as voice switches and IP routers, and applications such
as video servers, email servers, web servers and so forth.

Traditional eIUM NMEs are flat data records with no hierarchical information. They are defined in the
NME schema and in the parser component, as described in "Traditional NMEs and the NME Schema"
(on page 119).

Structured NMEs are data records that include hierarchical information, such as arrays and
subrecords. Structured NMEs can also contain information about which data values have been set
and which data values are optional. You can use this information to validate the incoming data.

Structured NME types define a record structure. The structure can consist of any of the following
four elements:

l Primitive types, which are boolean, byte, short, int, long, char, float and double as defined in Java.
l Other NME types.
l Array types, which are sequences of primitive types or sequences of other NME types.
l String type, which is a predefined array of chars.

Syntax of NME Type Definitions in the NME Schema


You define structured NMEs in a new configuration format in the NME schema. With structured
NMEs, the NME schema contains both the NME field definitions (the field names and types as in
traditional NMEs) and the structure of the NME record. The NME record can consist of data fields
(NME attributes), pointers to other NME records, and arrays of other NME records.

You define structured NMEs in the NME schema configuration. The NME schema has three kinds of
entries. The first two, name spaces and type aliases, define names that help organize and simplify
your structured NME definitions. The third type, NME types, actually defines your structured NME
records.

l Name spaces define a unique naming context. Name spaces contain one or more type aliases
and one or more NME types.
l Type Aliases define additional names for types.
l NME types define the NME attributes and the record structure of your structured NMEs.
All these entries are stored in the configuration server, typically under the /SNMESchema
configuration node. Here is an example of a simple structured NME schema:

[/SNMESchema]

[/SNMESchema/HP_IUM]
TypeAliases=Port,int


[/SNMESchema/HP_IUM/MyNMEType]
Attributes=StartTime,Time
Attributes=EndTime,Time
Attributes=SrcIP,IPAddress
Attributes=DstIP,IPAddress
Attributes=SrcPort,Port,optional
Attributes=DstPort,Port,optional
Attributes=NumBytes,long
Attributes=NumPackets,int

The following diagram shows this NME schema as it appears in the Launchpad. The Launchpad
illustrates the structure more clearly.

The above NME schema shows the following:

l The root node for this NME type definition is at /SNMESchema.


l It defines one name space HP_IUM.
l It defines one type alias, Port, as simply another name for the type int.
l It defines one NME type MyNMEType. This NME type is in the HP_IUM name space. MyNMEType
is an NME with the following eight data fields of the indicated type:

Notice that this structured NME is actually a single-level structure that could be defined with
traditional NMEs. The next example shows more structure.

The types Time and IPAddress are predefined type aliases and are defined as 32-bit integers. See
"Type Aliases" (on page 126) for more information.

Note also that SrcPort and DstPort are marked as optional attributes. This can be used by the
SNMEValidationCheck to find out if any non-optional attributes were not set. See the
SNMEValidationCheck in the eIUM Component Reference for details.

Namespaces
The structured NME schema can have one or more namespaces. Each is configured as a sub-node
under the root NME schema node, typically /SNMESchema. The name of the sub-node is the name
of the namespace and must be unique. Valid names must start with a letter, and consist of letters,
numbers, the $ or _ characters. White space and . are not allowed.

Under each namespace you can define type aliases and NME types that belong to that namespace.

For example, the following shows an NME schema that has three namespaces:

[/SNMESchema/VoiceNameSpace]
...
[/SNMESchema/MobileNameSpace]
...
[/SNMESchema/IPNameSpace]
...

Type Aliases
Type aliases simply give another name to an existing type. You can only define type aliases on
primitive types or previously defined type aliases in the same namespace. You cannot define aliases
on NME types, on array types, or on types in another namespace.

Use aliases to make your NME type definitions clearer and easier to read. You define a type alias in a
namespace with the attribute TypeAliases under a particular <Name space> node. The TypeAliases
attribute is optional and multi-valued.

The deployment editor diagram in "Syntax of NME Type Definitions in the NME Schema" (on page
124) shows a type alias named Port that defines another name for the type int. The NME attributes
SrcPort and DstPort use this type alias. This type alias is defined in the HP_IUM name space and can
be used by any attribute in the namespace.

Syntax
The syntax for type aliases is as follows:
TypeAliases=<Alias name>,<Existing type>

<Alias name> is the new name being defined.

<Existing type> is the name of a primitive type or alias in the same name space.

Once an alias is defined, you can use it to declare attributes. If the attribute is in the same name
space, just use the alias directly. For example, the following declares usage to be another name
for int:
TypeAliases=usage,int

The following defines the NME attribute NumBytes to be of type usage, which is an integer:
Attributes=NumBytes,usage

If the alias you want to use is in another name space, use the qualified name of the type alias by
prepending its name space as shown below.
Attributes=NumBytes,<name space>:usage

For example, if the alias usage were defined in the name space mobile, you would use the
following to define an NME attribute:
Attributes=NumBytes,mobile:usage


Primitive NME Attribute Types


eIUM defines the following basic types you can refer to directly in your NME type definitions:

Primitive NME Attribute Types


Primitive Data Type Description

boolean True or false logical values.

byte 8-bit numeric values.

short 16-bit numeric values.

int 32-bit numeric values.

char 8-bit character values.

long 64-bit numeric values.

float 32-bit floating-point numeric values.

double 64-bit floating-point numeric values.

Predefined Type Aliases


eIUM defines the following type aliases you can refer to directly in your structured NME type
definitions:

Type Alias Description

Time Same as int, used for time values.

TextString Same as char[].

ASCIIString Same as byte[].

IPAddress Same as int, used for IP address values.

UUID Same as byte[].

URL Same as char[].
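
For example, a schema fragment that uses some of these predefined aliases might look like the
following (the name space and attribute names here are arbitrary illustrations, not part of the
factory configuration):

[/SNMESchema/MySpace/WebHit]
Attributes=ClientIP,IPAddress
Attributes=RequestURL,URL
Attributes=UserAgent,TextString
Attributes=RequestTime,Time
Attributes=SessionID,UUID,optional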

Other Attribute Types


eIUM also defines the following basic types you can refer to directly in your structured NME type
definitions:

Other NME Attribute Types


Data Type Description

String Sequences of character values.


NOTE: It is not recommended that you overwrite factory predefined type names as this can be
error prone, difficult to maintain and confusing to read. However, if a factory predefined alias is
overwritten and you want to define an attribute with the original factory definition, prepend a
colon (:) in front of the alias. For example, if Time is redefined as a long in your name space, to
declare an attribute with the original factory definition which is int, use
attributes=StartTime,:Time.
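
To illustrate the colon syntax from the note above, the following sketch (the name space, record,
and attribute names are arbitrary, and overriding factory aliases remains discouraged) redefines
Time and then reaches back to the factory definition for one attribute:

[/SNMESchema/MySpace]
# Redefines the factory alias Time as long (not recommended)
TypeAliases=Time,long

[/SNMESchema/MySpace/MyRecord]
# Uses the redefined alias, so LocalTime is a long
Attributes=LocalTime,Time
# The leading colon selects the factory definition, so StartTime is an int
Attributes=StartTime,:Time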

Structured NME Types


The NME schema configuration defines one or more structured NME types. Each structured NME
type definition must be specified in a name space and each is configured as a subnode of the name
space node. The name of a structured NME type must be unique within its name space. A valid type
name has the same rules for naming as name spaces.

A qualified name of an NME type is of the format <name space>:<name>. Using this format, you
can uniquely identify any NME type across all name spaces. In the deployment editor diagram in
"Syntax of NME Type Definitions in the NME Schema" (on page 124), the qualified name of
MyNMEType is HP_IUM:MyNMEType.

Each NME type configuration node has only one configuration attribute called Attributes that
defines the list of attributes in NMEs of that type. It is a required attribute and can be multi-valued.

Syntax to Define NME Attributes


You define each NME attribute in an NME type with the Attributes configuration attribute. The
syntax of this attribute is:
Attributes=<Attribute name>,<Attribute type>[,optional]

<Attribute name> is the user-defined name you give to the data field in the NME.

<Attribute type> is the type you assign to the data field corresponding to the type of the
incoming data.

The third parameter specifies that the attribute is optional. This can be used for validation of the
input data, for example by the SNMEValidationCheck. If the attribute is required, leave off the third
parameter.

To define an attribute you can use any of the following for <Attribute type>:

l Primitive types. These are defined by eIUM, for example boolean, int, byte, short, char, long, float,
double, and string, as well as the array form which is just the primitive type followed by the []
array notation. See "Primitive NME Attribute Types" (on page 127).
l Types defined by type aliases. These are defined in your NME schema by the TypeAliases
attribute. You can also use one of the eIUM predefined type aliases. See "Predefined Type
Aliases" (on page 127).
l Another NME type. These are defined in your NME schema.
l An array of another NME type. These are any NME type definition followed by the [] array
notation.
Using these NME types, you can form an NME type representing structured data records. If the
referenced NME type is in the same name space, you can refer to it by its simple name or by its
qualified name. If the NME type belongs to a different name space, you must use the qualified
name.
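
For example, in the following sketch (the name spaces and type names are illustrative only), the
CallRecord type declares an attribute whose type is defined in a different name space, so the
qualified name is required:

[/SNMESchema/Common/UsageRecord]
Attributes=NumBytes,long
Attributes=NumPackets,int

[/SNMESchema/Voice/CallRecord]
Attributes=CallID,string
# UsageRecord is defined in the Common name space, so it must be qualified
Attributes=Usage,Common:UsageRecord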

NOTE: A circular dependency can occur when NME types declare attributes as other NME types.
When this happens, the NMESchemaLoader is able to detect and log this situation, and the
NMETypes that contributed to the circular dependency are not added into the NME schema.
Check the collector log file for possible messages.
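
As an illustration of the kind of circular dependency the NMESchemaLoader rejects (the type
names are hypothetical):

[/SNMESchema/MySpace/TypeA]
# TypeA refers to TypeB ...
Attributes=Child,TypeB

[/SNMESchema/MySpace/TypeB]
# ... and TypeB refers back to TypeA, forming a cycle
Attributes=Parent,TypeA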

Example NME Type Definition


The following is an example of a structured NME type definition.

[/SNMESchema]

[/SNMESchema/HP_IUM]
TypeAliases=Port,int

[/SNMESchema/HP_IUM/MyNMEType]
Attributes=StartTime,Time
Attributes=EndTime,Time
Attributes=SrcIP,IPAddress
Attributes=DstIP,IPAddress
Attributes=SrcPort,Port,optional
Attributes=DstPort,Port,optional
Attributes=NumBytes,long
Attributes=NumPackets,int

[/SNMESchema/HP_IUM/MySummaryType]
Attributes=CreateTime,Time
Attributes=NumRecord,int
Attributes=FileName,string
Attributes=Data,MyNMEType[]

The following is how this structured NME type definition appears in the Launchpad.


This example defines two NME types, MyNMEType and MySummaryType. MyNMEType is defined as in
the deployment diagram in "Syntax of NME Type Definitions in the NME Schema" (on page 124).
MySummaryType contains four elements. The first three are simple elements of type Time, integer
and string. The fourth element, Data, is actually an array (indicated by the [] notation) of NMEs
(indicated by the type specified as MyNMEType). This schema defines a hierarchical data record as
shown in the following diagram:

If an incoming data record contained an array of three subrecords, the structured NME would look
like the following diagram:

Loading NME Type Definitions


The traditional NME schema is loaded automatically when you start eIUM. You do not have to do
anything explicitly. With structured NMEs, however, you must configure the NMESchemaLoader
component in your collectors to load the NME schema you plan to use. This makes the NME types
available to the components that use them. The NMESchemaLoader reads structured NME type
definitions from the configuration and loads them into the NME schema. Other eIUM components
then use the NME schema to operate on the NME. You specify the NME type definitions using the
Schema attribute. You configure the NMESchemaLoader in the collector. See the NMESchemaLoader
in the eIUM Component Reference for complete details.

Example
The following is an example of a collector that loads and uses a structured NME. Notice that rather
than explicitly defining the NME schema under the /mySNMESchema subnode, it uses a Link
attribute that refers to the /SNMESchema node. See "Linked Collectors" (on page 53) for details
about using links.

# Collector requires a structured NME schema loader


[/deployment/host01/myCollector]
ClassName=com.hp.siu.adminagent.procmgr.CollectorProcess
SNMESchemaLoader=snmeloader

# Define structured NME schema loader


[/deployment/host01/myCollector/snmeloader]
ClassName=com.hp.usage.nme.schemaloader.NMESchemaLoader
Schema=mySNMESchema

# Structured NME schema definition


[/deployment/host01/myCollector/snmeloader/mySNMESchema]
Link=/SNMESchema

Using Structured NMEs


This section describes how to reference NME attributes in structured NMEs. Since structured NMEs
can be complex data structures, a simple method of referencing NMEs and arrays contained within
other NMEs or arrays is required.

To refer to the individual fields (NME attributes) of the structured NME part of an NMEAdapter, use
the following notation:
<NME attribute>.<sub-NME attribute>

Use the following notation for NME attributes that are arrays:
<NME attr>.<sub-NME attr>[<index>]

Add more operators to refer to deeper structures, for example:


<NME attr>.<sub-NME attr>[<index>].<sub-NME attr>

Example
For example, the following shows a structured NME type definition:

[/SNMESchema/Namespace/NMEtype1]
Attributes=StartTime,Time
Attributes=EndTime,Time
Attributes=Data,UsageRecord[]

[/SNMESchema/Namespace/UsageRecord]
Attributes=SrcPort,int
Attributes=DstPort,int
Attributes=NumBytes,int

The first or top-level NME, NMEtype1, contains three attributes: StartTime, EndTime and Data.
StartTime and EndTime are of type Time, which is a time stamp. The Data attribute is actually an
array of NMEs of type UsageRecord. The NME type UsageRecord contains three attributes: SrcPort,
DstPort and NumBytes, all of type int, which is a 32-bit integer. If an incoming data record contained
only one UsageRecord, it would look like this:

You would refer to the SrcPort attribute as follows:


Data[0].SrcPort

If an incoming data record contained three UsageRecords, it would look like this:


You would refer to the NumBytes attribute in the third row of UsageRecord as follows:
Data[2].NumBytes

The structured NME part of the NMEAdapter is defined in the NME schema and loaded by the
NMESchemaLoader component. For details, see the NMESchemaLoader in the eIUM Component
Reference.

Moving to Structured NMEs


This section describes how to design a new deployment using structured NMEs and how to add
structured NMEs to an existing eIUM deployment.

Using NMEAdapters in a New Deployment


If you are designing a completely new eIUM deployment, the decision to use traditional NMEs or
NMEAdapters depends on your input data source. If your input data source produces structured data
and you need to keep the structure of the data, you must use NMEAdapters.

If you do not need to keep the structure of the input data, you can use traditional NMEs.

If your input data is not structured, that is, if it consists of simple one-dimensional records, you can
use traditional NMEs. However, even in this scenario you can create NMEAdapters from the
traditional NMEs with the NMEConversionRule.

Using NMEAdapters in an Existing Deployment


Using structured NMEs in an existing deployment can be done in any of several different ways. You
will need to carefully examine your current NMEs and design the change to add NMEAdapters. The
following are general suggestions to keep in mind.

l You can add a collector after your last collector that simply converts your data to NMEAdapters
and sends the data to your business application. This new collector would read traditional NMEs
from another collector, use the NMEConversionRule to convert the NMEs to NMEAdapters and
copy the data to the structured NME part.
l You can insert an NMEConversionRule into an existing rule chain to create an NMEAdapter and
copy data values to either part of the NMEAdapter. Subsequent rules operate on the traditional
NME part of the NMEAdapter. Replace the datastore with a FileJDBCDatastore and
NMEWriterFactory to save the NMEAdapters in the datastore. Subsequent collectors must use
the NMEFileEncapsulator and NMEReaderFactory to read NMEAdapters and the
FileJDBCDatastore and NMEWriterFactory to save NMEAdapters.
l You can add a complete new set of collectors that use structured NMEs independent of your
existing collectors. The first-level collector uses a new encapsulator to read your input data,
generate NMEAdapters and save them in a FileJDBCDatastore. All higher level collectors would
use the NMEFileEncapsulator and NMEReaderFactory to read NMEAdapters. An
NMEConversionRule at the beginning of the rule chain copies data values from the structured
NME part to the traditional NME part. Another NMEConversionRule at the end of the rule chain
copies the data values from the traditional NME part to the structured NME part. Finally, a
FileJDBCDatastore and NMEWriterFactory write out the NMEAdapters. A configuration sketch of
the conversion rule follows below.
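
The fragment below is only a rough sketch; the NMEConversionRule class name and the Direction
and CopyAttributes attribute names are illustrative placeholders, not the actual component
interface. See the NMEConversionRule entry in the eIUM Component Reference for the real
configuration attributes.

# Hypothetical rule chain entry that converts NMEs to NMEAdapters and
# copies selected data values into the structured NME part.
[/deployment/host01/myCollector/ruleChain/convertRule]
ClassName=NMEConversionRule
Direction=TraditionalToStructured
CopyAttributes=StartTime
CopyAttributes=EndTime
CopyAttributes=NumBytes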



Chapter 10

Using the Structured NME Schema Editor


This topic series describes how to use the structured NME schema editor to define structured NMEs
that correspond to complex, hierarchical CDRs.

The discussion assumes you have a basic understanding of structured NMEs and understand the
differences between traditional NMEs and structured NMEs. For an introduction to structured NMEs
see "Using Structured NMEs" (on page 118).

Starting the NME Schema Editor 134

Name Spaces, Type Aliases and Structured NME Types 136

Building a Structured NME Schema 137

Adding a New Namespace and NME Type 137

Adding NME Attributes 138

Saving Your NME Types 140

Adding Type Aliases 140

Adding Arrays and Optional Attributes 141

Starting the NME Schema Editor


Start the NME schema editor from the Launchpad menu Tools -> Deployment Builders -> Schema
Editor. This runs the schema editor and displays the existing structured NME definitions that are in
the configuration server. The schema editor is divided into two panes, the left-hand pane shows the
namespaces and NME schemas defined in each namespace. The right-hand pane shows the selected
NME record definition.

The following diagram shows five namespaces defined in the NME schema. The namespaces are
EricssonMessage, CreditControlApplication, EricssonGSMV10, TAP_0302 and EricssonGSMV9. These
namespaces are predefined by eIUM for specific data sources and are used by templates described
in the eIUM Template Reference.


Each namespace can define its own NME types. All the names within each namespace must be
unique. However, the same names can be used in different namespaces without ambiguity. When
you have the same names in two different namespaces, you may have to qualify the name with the
namespace.

The following diagram shows the NME schema editor with the EricssonMessage namespace opened
up and displaying the NME record definitions in the EricssonMessage namespace. Notice that it
defines an NME record type named ACAType. The right-hand pane shows the NME attributes and
sub-records defined in the ACAType NME type.

The EricssonMessage namespace defines several NME types including ACAType, ACRType and
AccountingMessage. The right-hand pane of the schema editor shows the ACAType NME type. It
defines the following five NME records within it: cost_Information, final_Unit_Indication, granted_
Service_Unit, subscription_Id and vendor_Specific_Application_Id. Notice that the optional column
is checked for these, indicating that they may or may not be present in any given input record.

It also defines the following NME attributes: accounting_Interim_Interval as a 64-bit integer (type
long), accounting_Multi_Session_Id as a string, accounting_RADIUS_Session_Id as a byte array,
accounting_Record_Number as a 64-bit integer (type long), accounting_Record_Type as a 32-bit
integer (type int) and check_Balance_Result as a 32-bit integer (type int).

Additional NME attributes are visible if you scroll down in the schema editor window.


NOTE: The NME schema editor displays entries in alphabetical order, not the order in which you
entered them. It displays all the sub-NME types first followed by all the NME attributes.

Name Spaces, Type Aliases and Structured NME Types


The elements shown in the NME schema editor that are used to define structured NMEs are:

Namespaces define a unique naming context. Namespaces contain one or more type aliases and
one or more NME types. In the schema editor, namespaces are marked with the red triangle icon:

Type Aliases define additional names for types.

NME types define NME records. NME types consist of the NME attributes and the record structure
of your structured NME record definitions. NME types are marked with the blue triangle icon:

NMEs that are part of another NME type are shown with the blue diamond icon for required types
and a gray diamond for optional types:

NME attributes that are primitive types such as int, boolean, string and so forth are shown with a
blue ball for required attributes or gray ball for optional attributes:

This topic series assumes you understand the above terms. See "Using Structured NMEs" (on page
118) for more information on these terms.


Building a Structured NME Schema


This section shows how to build a structured NME schema for a relatively simple record. It shows you
all the main features of the NME schema editor so you can create and edit large and complex record
types. The following diagram shows a structured NME that consists of two separate records. The
top-level record has three NME attributes: CreateTime of type Time, NumRecord of type int and
FileName of type string. It also contains a reference named Data to a second-level NME. The second-
level NME contains eight NME attributes. Two of these NME attributes are of type Port, which is a
user-defined type alias. This section will show how to create the NME schema for this NME.

Adding a New Namespace and NME Type


The first step in creating a new NME schema is to decide on the namespace in which your NME
schema will belong. Namespaces should be used to logically divide NME record definitions so that
NME type names and NME attribute names do not collide with other names. This example will use
the namespace MySpace.

The second step is to name each NME record or sub-NME record. This example will name the top-
level NME record MyNME and the second level NME MyDataRecord.

Once you have decided on the NME record names, you can begin entering the NME record types. Since the
first-level NME contains the second-level NME, create the second-level NME first so the first-level
NME can refer to it. In general, create the lowest level NME types first.

In the schema editor, select the menu Schema -> New Data Type. The following dialog appears.
Enter the name of the record and the namespace you want to create as shown below.

Use the same menu item (Schema -> New Data Type) to create the MyDataRecord entry. The
following shows the schema editor with the new name space and empty records.


Adding NME Attributes


To add NME attributes, select MyDataRecord and use the menu item Schema -> New Attribute or
the Add button to add NME attributes to MyDataRecord. This adds a new row where you can type in
the name of your NME attribute, StartTime, and select the predefined type int, as shown below. You
can change the type from int to Time later using the Launchpad deployment editor or the loadconfig
command. Do not select the Array or Optional boxes.

Continue adding the remaining NME attributes to the MyDataRecord NME. When you have finished,
the following shows this record in the schema editor. You will notice that the type alias Port is not
yet defined. Use the type int for now. The type alias Port will be defined manually as described
below.


The available predefined NME attribute types are listed in "Primitive NME Attribute Types" (on page
127). The available predefined type aliases are listed in "Predefined Type Aliases" (on page 127).

Now add attributes to MyNME by selecting MyNME in the left-hand pane and selecting the menu
item Schema -> New Attribute as before. Add the simple NME attributes CreateTime, NumRecord
and FileName as before. When you add the Data NME attribute, for the type, select the type
MySpace:MyDataRecord to make MyDataRecord a sub-NME record of MyNME. The resulting
MySpace name space and MyNME and MyDataRecord NME types are shown below in the schema
editor.


Saving Your NME Types


Once you have created your NME types in the schema editor, save them in the configuration with the
menu item File -> Save to Deployment. This writes your NME types to the configuration server under
the following configuration node.
[/SNMESchema/<name space>]

You can also use the File -> Save As menu item to save the entire structured NME schema to a file.

The following shows the output from using the saveconfig command or the Launchpad menu item
File -> Export Configuration and saving the node /SNMESchema/MySpace to a file. Note that some of
the types have been changed manually using the Launchpad Deployment Editor.

[/SNMESchema/MySpace]

[/SNMESchema/MySpace/MyDataRecord]
Attributes=DstIP,IPAddress
Attributes=DstPort,int
Attributes=EndTime,Time
Attributes=NumBytes,long
Attributes=NumPackets,long
Attributes=SrcIP,IPAddress
Attributes=SrcPort,int
Attributes=StartTime,Time

[/SNMESchema/MySpace/MyNME]
Attributes=CreateTime,Time
Attributes=Data,MySpace:MyDataRecord
Attributes=FileName,string
Attributes=NumRecord,int

Adding Type Aliases


You can define type aliases in your NME schema with the TypeAliases attribute. In the original
structured NME, the NME attributes SrcPort and DstPort are shown as type Port. Port is simply a
convenient alias for the type int. To define the type alias Port, use the TypeAliases attribute at the
name space node, as shown in the following example.

[/SNMESchema/MySpace]
TypeAliases=Port,int

[/SNMESchema/MySpace/MyDataRecord]
Attributes=DstIP,IPAddress
Attributes=DstPort,Port
Attributes=EndTime,Time
Attributes=NumBytes,long
Attributes=NumPackets,long
Attributes=SrcIP,IPAddress
Attributes=SrcPort,Port
Attributes=StartTime,Time

[/SNMESchema/MySpace/MyNME]
Attributes=CreateTime,Time
Attributes=Data,MySpace:MyDataRecord
Attributes=FileName,string
Attributes=NumRecord,int

NOTE: The schema editor in IUM version 4.5 Feature Pack 4 and 5 does not support TypeAliases. If
you edit any NME schema that has TypeAliases and save the NME schema, the TypeAliases will be
removed. Add your type aliases manually to your configurations. The schema editor will support
TypeAliases in a future release.

Adding Arrays and Optional Attributes


You can make any NME attribute, including sub-NMEs, into arrays or make them optional. In the NME
schema editor, simply select the Array check box or the Optional check box. The resulting elements
will be marked as arrays or optional, respectively. The following shows a structured NME with array
attributes and optional attributes. For more information, see "Defining Structured NME Types" (on
page 123).

[/SNMESchema/MySpace]
TypeAliases=Port,int

[/SNMESchema/MySpace/MyDataRecord]
Attributes=DstIP,IPAddress[]
Attributes=DstPort,Port,optional
Attributes=EndTime,Time
Attributes=NumBytes,long
Attributes=NumPackets,long
Attributes=SrcIP,IPAddress[]
Attributes=SrcPort,Port,optional
Attributes=StartTime,Time

[/SNMESchema/MySpace/MyNME]
Attributes=CreateTime,Time
Attributes=Data,MySpace:MyDataRecord
Attributes=FileName,string[],optional
Attributes=NumRecord,int



Chapter 11

Using the Common Codec Framework: Structuring and Transforming your Data
The following series of topics describes using the Common Codec Framework (CCF) in eIUM:

Overview 143

The IUM Studio 143

Repository Integration 144

CCF Components 145

CCF Commands 145

CCF: The Codec Layer 146

Language Overview 146

Using xfdtool to Convert External Schemas 166

Format Definitions in IUM Studio 166

Using the XFD-to-XSD Wizard 190

CCF: The Transformations Layer 195

Simple Transformations 196

Integrating Custom Transformations 196

Transformations Integration with the Codec Layer 196

Transformations and Business Rules: The TransformationRule 197

Language Overview 199

Integration with the Schema Layer (XSD) 224

Integration with Format Definitions (XFD) 224

Transformation Definitions in IUM Studio 225

Composite Transformations 240

CCF: The Schema Layer 249

Language Overview 249

Schema Definitions in IUM Studio 266

CCF Components 272

Usage Scenarios 273


Overview
The Common Codec Framework is a system for structuring and transforming your data, and is
comprised of the following main aspects:

l The Codec Layer, which is responsible for encoding/decoding data. The data format is defined in
terms of a special Format Definition Language, or XFD, which is an XML-based language. Format
definitions are stored in *.xfd files. The Codec Layer can parse raw binary, text, or XML input data
streams for decoding, or format the same for encoding purposes. See "CCF: The Codec Layer" (on
page 146) for more information on the Codec Layer, and "XFD Syntax Reference" (on page 157)
for the Format Definition language.
l The Transformations Layer, which is responsible for mappings and transformation of input data,
using TX files (Transformation Definition files: *.tx, a custom eIUM XML file format that contains
the Transformations descriptors). The Transformations Layer can be thought of as the "glue"
that bridges the format definition (XFD), and the schema definition (XSD), in either direction (for
both decoding and encoding). See "CCF: The Transformations Layer" (on page 195) for more
information on the Transformations Layer, and "Language Overview" (on page 199) for the
Transformation Definition language.
l The Schema Layer, which is responsible for producing the NME schema, using XSD files (XML
Schema Definition: *.xsd). XSD is an XML format for describing the structure of XML documents.
For CCF, the XSD files describe the NME Schema used by eIUM and its components, in a similar
manner to other eIUM tools (such as the SNME Schema Editor).

The IUM Studio


IUM Studio is a dedicated Eclipse-based graphical development environment for working with the
three main layers in CCF, with corresponding views for each:

l Format Definitions designer for XFD files


l Transformations designer for TX files
l Schema designer for XSD files
The IUM Studio application allows you to open and edit the files associated with each of these views,
as well as any source data files, or text or XMLfiles. The application also provides a source-code
view for all file types, in addition to the visual method for working with the Format Definitions,
Transformations, and Schemas. IUM Studio can also validate your files to make sure they are
correct before check-in, and also includes several aids for working with all file types (such as syntax
highlighting, code completion, and structure tree view).

Files you work on in IUM Studio are organized into a perspective, which is a union of different views
and editors arranged in the main application window (referred to as the Data Modeling
perspective; see the toolbar Window -> Open Perspective menu). This main perspective consists of
the following main sub-panels (or views, accessed via the Window -> Show View menu), and a central
editor and design area:

l Checked out projects: A file explorer that lists files and directories checked out from the
eIUM repository. See "Repository Integration" (on page 144) for more information.
l Connections view: Allows creating, deleting, and managing connections to the eIUM
repository server, and checking files in and out of the repository and updating them from it.
l Problems view: Contains information about various validation errors and warnings.


l Outline view: A tree-based view of the active editor content (tags, nodes, and so on).
l The Source/Design editor: The main area for editing the XFD, TX, or XSD files in either XML
source code or visual design view. It is a multi-tabbed view where you can work with and have
multiple files open at once. XFD and TX files also provide a collapsible/expandable Palette menu
that allows you to perform specific actions related to creating or editing Format Definitions and
Codecs (XFD), or Transformations (TX).

IUM Studio also includes a wizard interface that supports automated Schema (XSD) and
Transformations (TX) generation based on a given XFD file. For more information on running the
wizard, see "Using the XFD-to-XSD Wizard" (on page 190).

Also see "Format Definitions in IUMStudio" (on page 166) for information on getting started with
IUMStudio and using the application for Codec Layer and Format Definition tasks. See
"Transformation Definitions in IUMStudio" (on page 225) regarding Transformations in IUM Studio.

Repository Integration
All the files you work with in IUM Studio can be saved and worked with individually or as part of a
project, and the application further integrates with the eIUM repository server, where the Format
Definitions (XFD), Transformation Definitions (TX), and Schema Definitions (XSD) are uploaded and
referenced so other processes can share and use them. Files can be checked out from the
repository, worked with locally in the application, and then checked back in. See "Format Definitions
in IUM Studio" (on page 166) for information on the repository directory structure when getting
started with IUM Studio.


CCF Components
Once the IUM Studio has been used to process these files, the CCF files can be used for configuration
of eIUM components. For example, a collector can be created using the CCFFileEncapsulator, and its
configuration can be updated to point to the Format Definition (XFD) and Transformations (TX) files
in the repository (see the FormatDefinitionFile and MapDefinitionFile configuration attributes of this
component). The CCFFileWriterFactory can be used with the FileJDBCDatastore to store structured
NMEs into binary or text files (also see the FormatDefinitionFile and MapDefinitionFile configuration
attributes of this component).
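
As a rough sketch only, an encapsulator node configured this way might look like the following. The
node path, class name value, and file locations are placeholders; only the FormatDefinitionFile and
MapDefinitionFile attribute names come from the description above (see the eIUM Component
Reference for the actual CCFFileEncapsulator configuration):

# Hypothetical CCF encapsulator configuration
[/deployment/host01/ccfCollector/encapsulator]
ClassName=CCFFileEncapsulator
FormatDefinitionFile=/formats/myformat.xfd
MapDefinitionFile=/transformations/mymapping.tx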

For a description of these components and their configurable attributes and usage examples, see
the eIUM Component Reference. Also see "CCF Components" (on page 272) for additional CCF
component information.

CCF Commands
eIUM also includes several command utilities that integrate with CCF (see the eIUM Command
Reference for complete details):

l The xfdtool command utility is a conversion tool that allows you to both create XFD files from
XSD files (xsd2xfd), and to create Transformations and NME descriptors from XFD files
(xfd2nme).
l The config2xsd command can read an SNME configuration file, and then generate XSD schemas
for a corresponding Structured NME schema.
l The repositorymanager command utility allows command-line based interaction for checking
files in/out of the eIUM repository server. It supports connect and disconnect functions,
validation, undo checkout operations, the ability to print change differences of local checkout
files versus those in the repository, and the ability to save frequently-used properties for
working with the repository. repositorymanager also integrates with CCF by being able to
validate all CCF descriptor files, namely: XFD, TX, and XSD with Java bindings.
l The ccftool utility and included docgen command allow you to produce CCF descriptors and
transformers documentation in HTML format. Beans XSD and NME XSD descriptors, as well as
built-in Transformers are supported.
Using the NME Type: {OneOfCIBERRecord} as an example, some sample output would be the
following:


Meanwhile, Transformers output looks like the following:

CCF: The Codec Layer


The Codec Layer in CCF is responsible for decoding and encoding data represented in different
formats (binary, text, XML). Primarily, the codec can read data from files, being part of
CCFFileEncapsulator and convert them to an IUM data representation (NMEs - decoding), or vice
versa, being part of CCFFileWriter, use data stored in NMEs to produce output for external data
consumers (encoding). The Codec framework itself is not aware of internal IUM data
representations, however, it only decodes/encodes data according to the given format definition,
which corresponds to XFD (XMLFormat Definition Language). Meanwhile, the assigning of data to
NMEs (while decoding) or reading data from the NME (while encoding) is the role of the Transformer
aspect of CCF (see "CCF: The Transformations Layer" (on page 195) for more information on
Transformations).

During decoding, input data is interpreted according to the format definition. Each primitive type
field from the input data is read with respect to the given codec properties for the particular field
(encoding, size, big-/little-endian, signed/unsigned), and stored in a Java variable of the
corresponding type. The data is then passed to the Transformer. During encoding, the
value of each field is received from the Transformer and encoded according to the format
definition and the particular field codec properties.

The next section describes the Format Definitions and corresponding (XFD) language that is used to
parse and determine the input data pattern.

Language Overview
Data formats are defined according to the XFD specification as a hierarchy of XFD types. One <type>
describes a piece of data for a particular format: text, raw binary, or XML. Each <type> definition
consists of a <codec> definition and a <field> container. For example:
<type id="Foo" EOR="true">
<codec>
<raw/>
</codec>
<sequence>
<field id="f1" type="long" codec="UInt32"/>
<field id="f2" type "long" codec="UInt32"/>
</sequence>
</type>

In this example, the <codec> element defines the data format and different encoding/decoding
properties that are relevant to this format. It can contain one of the following nodes: text, raw, or
xml. <codec> can appear under the <type> element, in which case it describes the codec at the
<type> level by specifying the data type (text, raw binary, xml). For text data, one of the
processing modes must also be specified (size, delimiter, pattern, format). For text processing modes,
see "Format Definitions for Different Data Types" (on page 149). The EOR (End of Record) attribute
defines the record boundary. Each type with EOR="true" corresponds to one NME. During
decoding, there can be several types with EOR="true", and a separate NME is produced for each
one. During encoding, there must be exactly one EOR attribute per format, at the root level.

On a <field> level, <codec> provides all the information needed to decode/encode a given field.
Accordingly, a field codec can be embedded into the <field> definition like the following:
<field id="value" type="string">
<codec>
<raw type="string" charset="ASCII" size="10" />
</codec>
</field>

The codec can also be defined separately and referenced from the field definition by "id" (this second
form is preferred when one codec definition is used for different fields):
<field id="value" type="string" codec="StringLen10" />
<codec id="StringLen10">
<raw charset="ASCII" size="10" />
</codec>

The <field> container can be one of the following: <sequence>, <set>, or <choice>. In <sequence>,
fields follow exactly in the order defined by the format. In <set>, fields can follow in any order. <set>
and <choice> are only supported for those formats that naturally support field discrimination, such
as XML.

The <field> container holds the field definitions; each field definition specifies the field type (primitive or
constructed, that is, another XFD type), the field id, and, optionally, the field codec. The field
<codec> describes the decoding/encoding method for the particular field. For constructed fields, the
field codec is not relevant because all information needed for the decoding/encoding of such a field
is contained in the corresponding <type> definition.

The following basic (or primitive) data types are supported in XFD:

l byte
l byte[]
l char
l int
l long
l short
l string
All types correspond to Java language types with the same name.

Meanwhile, the <switch> element allows choosing the field type dynamically depending on the value
of a specified field, and allows constructing tag-value formats. For example:


<type id="OneCIBERRecord">
<codec>
<raw/>
</codec>
<sequence>
<field id="recordType" type="int" codec="UInt2" />
<switch field="recordType">
<case value="12337">
<field id="recordType01" type="CIBERRecord01" />
</case>
<case value="12338">
<field id="recordType02" type="CIBERRecord02" />
</case>
<default>
<field id="custom" type="CustomRecord" />
</default>
</switch>
</sequence>

Alternatively, instead of individual field definitions, an <array> of fields can be used to describe a
sequence of fields of the same type with a given length (the length attribute is optional). If the array length
is not specified, the decoder attempts to read elements until the next boundary specified by the
format, or until the end of input. If no length is specified, the encoder simply takes the actual length of the
array from the input record. For example:
<array id="records" length="5">
<field type="OneCIBERRecord " />
</array>

In the above example, length="5", means it consists of 5 elements of the OneCIBERRecord type.

Most format definitions are also reversible, that is, they contain enough information for the decoder
to recognize fields in the input stream, and for the encoder to produce output from given field
values. Some format types, however, are not reversible and can be used either for decoding or
encoding only. For example, the two different modes of text formats: (1) based on regular
expression patterns (decoding); and (2) that which is based on formatting patterns (see "Text Data"
(on page 149) for more information on regex- and formatting-based patterns).

See "XFD Syntax Reference" (on page 157) for a complete reference description of the syntax and
all Format Definition language elements.

Transformations as Field Codecs


The alternative to using built-in field codecs is using transformations. This option might be useful
because transformations are more powerful and can be combined together. For more information
about transformations, see "CCF: The Transformations Layer" (on page 195). The following is an
example of using a transformation as a pluggable codec to decode a long value. The value is read
as a raw byte sequence, and then the transformation converts the byte array into a long integer:
<field id="applicationData" type="long">
<codec>
<raw type="byte[]" size="16" usetransformation=" t:BCDtoLong">
</raw>

Page 148 of 332 HP eIUM (8.0)


Foundation Guide
Chapter 11: Using the Common Codec Framework: Structuring and Transforming your Data

</codec>
</field>

This example assumes that there is a transformer of type "BCDtoLong" that takes a BCD-encoded
byte array as input, and produces a long as output.

Format Definitions for Different Data Types


As mentioned previously, the Codec framework supports text, raw binary, XML and mixed data
processing as inputs and outputs. The following topics describe these formats.

Text Data

Text formats can be defined in terms of the following forms:

l Fixed length fields (both decoding/encoding)


l Delimited format (both decoding/encoding)
l Field boundaries defined by regular expression patterns (can only be used for decoding)
l Formats described by a formatting pattern (can only be used for encoding)
Text encoding is specified in the <codec> node of the root <type> (or the <text> element for mixed
formats; see "Mixed Data" (on page 156) for more information on mixed formats).

Fixed-Length Fields

Each field has a fixed length that is specified in the format definition. Based on length values, the
decoder determines field boundaries. The encoder uses the length from XFD to ensure that values
coming from the Transformer conform to the format specification. If padding exists, then the
padding character can be specified by the "padding" attribute. The following is an example fixed-
length text format definition (notice mode="size" at the <type> level):
<type id="Timezone">
<codec>
<text mode="size"/>
</codec>
<sequence>
<field id="TZName" type="string">
<codec>
<text size="3"/>
</codec>
</field>
<field id="offset" type="string">
<codec>
<text size="6" padding= />
</codec>
</field>
</sequence>
</type>

If the input data example was:


UTC+0800


then "UTC" would be parsed from the input data due to the first <field> element ("TZName" field of
type "string", size=3 characters, no padding), and the second <field> element ("offset" field of type
"string", size=6 characters, padding - white space).

Delimiter-Separated Fields

Fields in a <type> element can be delimited by delimiter characters on a per-field basis. An empty
delimiter means that the decoder reads till the next boundary. An example format definition is the
following (notice mode="delimiter" at the <type> level):
<type id="PersonalData">
<codec>
<text mode=delimiter/>
</codec>
<sequence>
<field id=" FirstName" type="string">
<codec>
<text delimiter=;/>
</codec>
<field>
<field id=" LastName" type="string"/>
<codec>
<text delimiter=/>
</codec>
<field>
</sequence>
</type>

For example, input data of:


Kyle;Sanders

would be parsed according to this format definition (string fields "FirstName" and "LastName"
delimited by a semi-colon).

Fields Defined by Regular Expression Patterns

This text format is defined in terms of Regular Expression (regex) patterns. The regular expression
pattern must be specified for each field, with the <text> element using the corresponding "pattern"
attribute. The pattern is applied in multi-line mode, so ^ matches after any line terminator, and $ -
before any line terminator, not just at the beginning and the end of the input. Each pattern must
contain one matching group (in parentheses).

Field patterns are applied to the input sequence one-by-one. For each field, the sequence that
matches the field pattern is excluded from the input sequence. The sequence that matches the
group is extracted and passed further to the formatter as a value of the corresponding field.

NOTE: This type of format definition can be used only for decoding, not for encoding.

The following example shows regular expressions being used while decoding of an Apache web
server log:
<type id="ApacheWebLogRecord">
<codec>
<text mode="pattern"/>
</codec>
<sequence>
<field id="clientIp" type="string"> <!-- {24.192.15.132 - } -->
<codec>
<text pattern="(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}) " />
</codec>
</field>
<field id="authUser" type="string"> <!-- {- } -->
<codec>
<text pattern="- (\\w*) " />
</codec>
</field>
<field id="sysDate" type="string">
<codec>
<text pattern="\\[(.*)\\] " />
</codec>
</field>
<field id="usageInfo" type="UsageInfo">
<codec>
<text pattern="\&quot;(.*)\&quot; " />
</codec>
</field>
<field id="requestStatus" type="int">
<codec>
<text pattern="(\\d*) " /> <!-- {200 } -->
</codec>
</field>
<field id="responseSize" type="int">
<codec>
<text pattern="(\\d*)" /> <!-- {1700} -->
</codec>
</field>
</sequence>
</type>

For example, assuming the following input data:


24.192.15.148 - kyle [14/Nov/1998:16:05:00 +0800] "GET
/gif/netsoft.gif HTTP/1.1" 304 100

the clientIp field would have the "(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}) "


pattern applied.

First, the sequence that matches the entire pattern is extracted from the input sequence.
Next, the sequence that matches the group is extracted and passed
on to the formatter as the value of the clientIp field. Only one
group, that is, one pair of parentheses, can be present in the pattern.
24.192.15.148 - kyle [14/Nov/1998:16:05:00 +0800] "GET
/gif/netsoft.gif HTTP/1.1" 304 100


The rest of the fields are processed in the same manner. The "authUser" field has the following
pattern: "- (\\w*) "
- kyle [14/Nov/1998:16:05:00 +0800] "GET /gif/netsoft.gif HTTP/1.1"
304 100

The "sysDate" field's pattern is : "\\[(.*)\\] "


[14/Nov/1998:16:05:00 +0800] "GET /gif/netsoft.gif HTTP/1.1" 304 100

The "usageInfo" field's pattern is: "\&quot;(.*)\&quot; "


"GET /gif/netsoft.gif HTTP/1.1" 304 100

The "requestStatus" field pattern is: "(\\d*) "


304 100

Lastly, the "responseSize" field pattern: "(\\d*)"


100

Field Formatting Patterns

The text format is defined as a formatting pattern which contains placeholders for fields as field
names in curly brackets. The codec framework also supports all formatting options supported by
the java.util.Formatter class.

NOTE: This type of format definition can only be used for encoding.

For example:
<type id="DateAndTime">
<codec>
<text mode=format format="[{month}/{day}/{year}:{time}
{timeZone}]"/>
</codec>
<sequence>
<field id="month" type="int"/>
<field id="day" type="int"/>
<field id="year" type="int"/>
<field id="time" type="string"/>
<field id="timeZone" type="string"/>
</sequence>
</type>

Given this formatting type definition, the output would look like the following:
[14/Nov/1998:16:06:45 +0800]

Raw Binary Data

The codec supports parsing of size-based binary formats, where fields can be extracted from the
input sequence based on size. The allowed size units are: Byte, Kilobyte, and Megabyte. For example:
<type id="FIELD">
<codec>
<raw/>
</codec>
<sequence>
<field id="version" type="int" codec="UInt1" />
<field id="id" type="int" codec="UInt1" />
<field id="value" type="string" codec="StringLen10" />
</sequence>
</type>

<codec id="UInt1">
<raw type="int" size="1" unit="Byte" order="big-endian"
unsigned="true" />
</codec>

<codec id="StringUTF8Len10">
<raw type="string" charset="UTF8" size="10" />
</codec>

The decoded input sequence (in hexadecimal form) is the following:

E2 0A 01 02 03 04 05 06 07 08 09 00

The first octet (E2) is decoded as the version field, the next octet (0A) as the id field, and the
remaining ten octets as the value field. Size can also be
defined as an arithmetic expression that contains values of other fields:
<sequence>
<field id="len" type="int" codec="UInt2" />
<field id="value" type="string">
<codec>
<raw charset="ASCII" size="len - 2"/>
</codec>
</field>
</sequence>

Padding

If padding exists, the padding method can be specified using the "padding" attribute (the only
supported padding method is zero-fill):
<raw type="string" charset="ASCII" size="length - 2" padding=zero-
fill/>

XML Data

The <xml> sub-element of <codec> at the type level can be used to specify that the data is XML-
encoded. In the format definition, every non-leaf XML node is represented by a separate <type>.
Leaf nodes meanwhile are represented as primitive fields. The "path" attribute specifies either the
node name or node name and attribute name, separated by an @ character. The following element
types can be used to define the XML format:

l <set> describes an unordered set of fields (corresponds to xsd:all)


l <sequence> describes an ordered set of fields (corresponds to xsd:sequence)


l <choice> describes a selection from a set of fields (corresponds to xsd:choice)


The following is a sample XML document representing input data, followed by the corresponding XFD
definition document to parse such XML input:
<?xml version="1.0" encoding="UTF-8"?>
<phoneBook>
<owner type="pastor">Schlag</owner>
<contacts>
<personInfo>
<firstName>Alex</firstName>
<lastName>UNKNOWN</lastName>
<phoneNumber>1234567</phoneNumber>
</personInfo>
<personInfo>
<firstName>Ustas</firstName>
<lastName>UNKNOWN</lastName>
<phoneNumber>7654321</phoneNumber>
</personInfo>
<personInfo>
<firstName>Stirlitz</firstName>
<lastName>UNKNOWN</lastName>
<phoneNumber>123456</phoneNumber>
</personInfo>
</contacts>
<description>my contacts</description>
</phoneBook>

XFD format definition document:


<root type="Root" />

<type id="Root">
<codec>
<xml />
</codec>
<sequence>
<field id="phoneBook" type="PhoneBook">
<codec>
<xml path="phonebook" />
</codec>
</field>
</sequence>
</type>

<type id="PhoneBook" EOR="true">


<codec>
<xml />
</codec>
<sequence>
<field id="version" type="int">

Page 154 of 332 HP eIUM (8.0)


Foundation Guide
Chapter 11: Using the Common Codec Framework: Structuring and Transforming your Data

<codec>
<xml path="@version" />
</codec>
</field>
<field id="ownerType" type="string">
<codec>
<xml path="owner@type" />
</codec>
</field>
<field id="description" type="string">
<codec>
<xml path="@description" />
</codec>
</field>
<field id="owner" type="string">
<codec>
<xml path="owner" />
</codec>
</field>
<field id="contacts" type="Contacts">
<codec>
<xml path="contacts" />
</codec>
</field>
</sequence>
</type>

<type id="Contacts">
<codec>
<xml />
</codec>
<sequence>
<array id="persons">
<field type="PersonInfo">
<codec>
<xml path="personInfo" />
</codec>
</field>
</array>
</sequence>
</type>

<type id="PersonInfo">
<codec>
<xml />
</codec>
<sequence>
<field id="firstName" type="string">
<codec>
<xml path="firstName" />

Page 155 of 332 HP eIUM (8.0)


Foundation Guide
Chapter 11: Using the Common Codec Framework: Structuring and Transforming your Data

</codec>
</field>
<field id="lastName" type="string">
<codec>
<xml path="lastName" />
</codec>
</field>
<field id="phoneNumber" type="long">
<codec>
<xml path="phoneNumber" />
</codec>
</field>
</sequence>
</type>

</root>

Mixed Data

Data can be mixed in the following manner:

l Text inside raw binary data


l Text inside XML, with escape sequences as defined by the XML specification
l XML inside raw binary data
l XML inside plain text
The following snippet is an example of a mixed-format <type> definition. The XML data of <type>
"PhoneBook" contains an element "owner", which is semicolon-delimited text described by <type>
Owner:
<type id="PhoneBook">
<codec>
<xml />
</codec>
<set>
<field id="owner" type="Owner">
<codec>
<xml path="owner" />
</codec>
</field>
<field id="contacts" type="Contacts">
<codec>
<xml path="contacts" />
</codec>
</field>
</set>
</type>

<type id="Owner">
<codec>
<text mode="delimiter" />


</codec>
<sequence>
<field id="name1" type="string">
<codec>
<text delimiter=";" />
</codec>
</field>
<field id="name2" type="string">
<codec>
<text delimiter="" />
</codec>
</field>
</sequence>
</type>

XFD Syntax Reference


The Format Definition language (XFD) descriptors specify the decoding/encoding formats for
different types of data: raw-binary, text, and XML. The actual descriptors are XML documents with
elements from the "http://www.hp.com/usage/datastruct/xfd" XML namespace.

<format>

The <format> element is the root of the Format Definition XML descriptor, and encloses elements
with decoding/encoding information. The structure of decoded/encoded data is specified by means
of <type> and <field> elements.

Syntax
<format xmlns="http://www.hp.com/usage/datastruct/xfd"
targetNamespace=xs:string >
(root, type*, codec*)
</format>

Attributes

Attribute Name Description

targetNamespace Specifies the unique namespace of the defined format. Used to refer to type
structures from this format.

Elements

Element Name Description

<root> Specifies the root type.

<type> Specifies decoded/encoded data structure as a type.

<codec> Specifies decoding/encoding format for a single data value.
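
For orientation, the following minimal skeleton (the type and field names are arbitrary) shows how
these elements fit together. It describes a text record consisting of a single delimited string field:

<format xmlns="http://www.hp.com/usage/datastruct/xfd">
<root type="Record"/>
<type id="Record" EOR="true">
<codec>
<text mode="delimiter"/>
</codec>
<sequence>
<field id="value" type="string">
<codec>
<text delimiter=""/>
</codec>
</field>
</sequence>
</type>
</format>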


<root>

The <root> element specifies the root node of the format tree. All other types are the subtypes
(direct or indirect) of the root type. Decoding/encoding starts from this type.

Syntax
<root type= xs:string />

Attributes

Attribute Name Description

type Contains the name of the root type.

<type>

The <type> elements are the main elements of a Format Definition descriptor. One <type> element
represents one data structure.

Syntax
<type id=xs:string EOR?="true">
(codec, (sequence | choice | set))
</type>

Attributes

Attribute Name Description

id Specifies the id of the type.

EOR End of Record marker. Optional, possible value: true. Indicates the NME boundary.
During decoding, for each type with EOR=true, the separate NME will be produced. At
least one type should have EOR=true, otherwise no NME will be produced. In case of
encoding, the NME always maps to the root type, so any EOR attributes in non-root
types are ignored.

Elements

Element Name Description

<codec> Holds information that describes encoding/decoding method of the type.

<sequence> Sequence of fields.

<choice> Choice of fields.

<set> Set of fields.


<codec>

The <codec> element specifies decoding/encoding information for type and field elements which
enclose it.

Syntax
<codec id= xs:string>
(raw | text | xml)
</codec>

Attributes

Attribute Name Description

id Specifies the unique ID of the codec.

Elements

Element Name Description

<raw> Indicates raw binary data.

<text> Indicates text data.

<xml> Indicates XML data.

<sequence>

Represents a <sequence> of elements. Unlike in <set>, the order of elements matters.

Syntax
<sequence id= xs:string>
(field*, switch*, array*)
</sequence>

Elements

Element Name Description

<field> Field (see <field>).

<switch> Switch of fields (see "<switch>").

<array> Array of fields (see "<array>").

<choice>

The <choice> construct represents a choice between several fields. This can be used in formats
where fields are distinguished by special tags, like XML. <choice> has several fields inside, but only
one of them will actually be present in the input or output.

Syntax


<choice>
(field*)
</choice>

Elements

Element Name Description

<field> Specifies the field as one possible alternative.
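A minimal sketch of a <choice> for XML input where only one of two tagged elements is present in each record (the type names, element names, and paths are illustrative, following the PhoneBook example earlier in this chapter):

<type id="Contact">
    <codec> <xml /> </codec>
    <choice>
        <field id="email" type="Email">
            <codec> <xml path="email" /> </codec>
        </field>
        <field id="phone" type="Phone">
            <codec> <xml path="phone" /> </codec>
        </field>
    </choice>
</type>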

<set>

Represents a <set> of fields. In a set, unlike in <sequence>, elements can be in any order. Therefore
this construct makes sense only in formats where fields are distinguished by special tags, like XML.

Syntax
<set>
(field*, switch*, array*)
</set>

Elements

Element Name Description

<field> Field (see "<field>").

<switch> Switch of fields (see "<switch>").

<array> Array of fields (see "<array>").

<array>

Represents an array of elements of the same type.

Syntax
<array id= xs:string
length= xs:string >
(field)
</array>

Attributes

Attribute
Name Description

id Id of the array.

length Specifies the length of array, but is optional. If not specified, the decoder attempts to
decode elements till the end of the input, and the encoder encodes all elements that
are present in the input record.

Elements


Element Name Description

<field> Field (see "<field>").
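For example (adapted from the "Format Definition Example" later in this chapter; the length of 5 is arbitrary, and omitting the length attribute makes the decoder read elements until the end of the input):

<array id="cdrs" length="5">
    <field type="CDR" />
</array>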

<field>

The <field> element represents a single value from structured data. The field can be of a structured or a primitive type.

Syntax
<field id= xs:string
type= xs:string
codecId?= xs:string >
(codec?)
</field>

Attributes

Attribute
Name Description

id Specifies the name of the field.

type Specifies the type of the field.

codecId Specifies the codecId of the field. Codec can also be defined in-line, as a <codec>
element.

NOTE: Codec is mandatory for fields of primitive type.

Elements

Element
Name Description

codec Field codec. Has the same function as codecId above, but here the codec is entirely
defined inside the <field> element.
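For example, the same primitive field can reference a named codec or carry its codec in-line (the "UInt2" id refers to the named codec shown in the "Format Definition Example" later in this chapter, which references named codecs through the codec attribute):

<!-- referencing a named codec -->
<field id="recordType" type="int" codec="UInt2" />

<!-- defining the codec in-line -->
<field id="recordType" type="int">
    <codec> <raw type="int" size="2" unit="Byte" unsigned="true" /> </codec>
</field>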

<switch>

The switch element specifies the format when decoding/encoding depends on a certain value of
another field.

Syntax
<switch field= xs:string >
(case value= xs:string (field))*
</switch>

Attributes


Attribute
Name Description

field Specifies the id of field that is used as a key field for the switch, that is, based on its
value, one or another field is decoded/encoded.

Elements

Element
Name Description

<case> Specifies the enclosed field to be decoded/encoded in correspondence with the value of the key field.
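For example (taken from the "Format Definition Example" later in this chapter), the layout of the rest of a record depends on the value of the recordType field:

<field id="recordType" type="int" codec="UInt2" />
<switch field="recordType">
    <case value="2">
        <field id="fixedLengthCDR" type="FixedLengthCDR" />
    </case>
    <case value="3">
        <field id="domesticCDR" type="DomesticCDR" />
    </case>
</switch>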

<raw>

The raw element specifies the properties of the raw binary codec.

Syntax
<raw type= xs:string
size= xs:int
unit= xs:string (byte | KByte | MByte)
order= xs:string (big-endian | little-endian)
unsigned= xs:Boolean (true | false)
charset= xs:string
padding= xs:string >
(use-transformer?)
</raw>

Attributes

Attribute Name Description

type One of the primitive types supported by the language.

size Size of the field.

unit Size units. Default is bytes.

order Specifies decoding/encoding bytes order. Default is little-endian.

unsigned Signed/unsigned. Default is signed.

charset Charset (applicable when type is string).

padding Padding method. Currently the zero-fill padding method is implemented.

Elements

Element Name Description

<use-transformer> Specifies transformer to be used as a custom codec.
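For example, a raw codec for a fixed-size UTF-8 string field (taken from the "Format Definition Example" later in this chapter):

<codec id="StringUTF8Len11">
    <raw type="string" charset="UTF8" size="11" />
</codec>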


<text>

The text element specifies properties of the text codec. At the type level, this element specifies a
decoding/encoding mode. At the field level, it specifies codec properties.

Syntax

On type level:
<text mode= xs:string (pattern | delimiter | size | format) />

On field level:
<text pattern= xs:string | delimiter=xs:string | size= xs:string |
format= xs:string />

Attribute
Name Description

mode Decoding/encoding mode. Possible values:

l pattern: Text format is defined based on regex patterns. Can be used for decoding only.
l delimiter: Format defined based on delimiters between fields. Encoding/decoding.
l size: Format defined based on field sizes. Encoding/decoding.
l format: Used only for encoding. Fields are encoded based on a formatting string.

pattern Regex pattern

delimiter Fields delimiter

size Field size

format Formatting string

Elements

Element Name Description

<use-transformer> Specifies the transformer to be used as a custom codec for decoding/encoding text value.
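A minimal sketch of the size mode for fixed-width text records (the type name, field names, and sizes are illustrative):

<type id="FixedWidthRecord">
    <codec> <text mode="size" /> </codec>
    <sequence>
        <field id="callerId" type="string">
            <codec> <text size="11" /> </codec>
        </field>
        <field id="duration" type="string">
            <codec> <text size="6" /> </codec>
        </field>
    </sequence>
</type>

The delimiter mode works the same way: declare <text mode="delimiter" /> at the type level and give each field its own <text delimiter="..." /> codec, as in the Owner type shown earlier in this chapter.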

Format Definition Example

Assuming input data with the following structure in some binary format:

Sample Format Description


[Header] Type

File Type int


Data format type int

Records Source string

File Sequence Number long

File Creation Type byte

[Records] Array of records with the following structure

Record Type Int, defines type of the record

[FixedLengthCDR] recordType=2

Original Unified Caller ID String

Destination Unified Caller ID String

Audio Codec Int

Termination Reason Int

Connection time Long

[DomesticCDR] recordType=3

Original Caller ID String

Connection time Long

Mobile Call String

The corresponding XFD format definition would be the following:


<?xml version="1.0" encoding="utf-8"?>
<format xmlns="http://www.hp.com/usage/datastruct/xfd"
targetNamespace="http://www.hp.com/usage/datastruct/xfd/cdrsample">
<root type="CDRFile" />

<type id="CDRFile">
<codec> <raw /> </codec>
<sequence>
<field id="header" type="Header" />
<array id="cdrs">
<field type="CDR" />
</array>
</sequence>
</type>

<type id="Header">
<codec> <raw /> </codec>
<sequence>
<field id="fileType" type="int" codec="UInt2" />
<field id="dataFormatType" type="int" codec="UInt2" />
<field id="recordsSource" type="string" codec="StringUTF6Len" />


<field id="fileSeqNum" type="long" codec="ULong4" />


<field id="fileCreationType" type="int" codec="UInt2" />
</sequence>
</type>

<type id="CDR">
<codec> <raw size="2048" unit="Byte" /> </codec>
<sequence>
<field id="recordType" type="int" codec="UInt2" />
<switch field="recordType">
<case value="2">
<field id="fixedLengthCDR" type="FixedLengthCDR" />
</case>
<case value="3">
<field id="domesticCDR" type="DomesticCDR" />
</case>
</switch>
</sequence>
</type>

<type id="FixedLengthCDR">
<codec> <raw /> </codec>
<sequence>
<field id="originalCallerId" type="string" codec="StringUTF8Len11"
/>
<field id="destinationCallerId" type="string"
codec="StringUTF8Len11" />
<field id="audioCodec" type="string" codec="StringUTF8Len6" />
<field id="terminationReason" type="string" codec="StringUTF8Len6"
/>
<field id="connectionTime" type="long" codec="ULong4" />
</sequence>
</type>

<type id="DomesticCDR">
<codec> <raw /> </codec>
<sequence>
<field id="originalCallerId" type="string" codec="StringUTF8Len6" />
<field id="connectionTime" type="long" codec="StringUTF8Len3" />
<field id="mobile" type="string" codec="UInt2" />
</sequence>
</type>

<!-- Codec part -->


<codec id="UInt2">
<raw type="int" size="2" unit="Byte" order="big-endian"
unsigned="true" />
</codec>
<codec id="ULong4">
<raw type="long" size="4" unit="Byte" order="big-endian"

Page 165 of 332 HP eIUM (8.0)


Foundation Guide
Chapter 11: Using the Common Codec Framework: Structuring and Transforming your Data

unsigned="true" />
</codec>
<codec id="StringUTF8Len3">
<raw type="string" charset="UTF8" size="3" />
</codec>
<codec id="StringUTF8Len6">
<raw type="string" charset="UTF8" size="6" />
</codec>
<codec id="StringUTF8Len11">
<raw type="string" charset="UTF8" size="11" />
</codec>
</format>

See "Transformations Syntax Reference (TX)" (on page 199) for the corresponding transformation
definition.

Using xfdtool to Convert External Schemas


Some data formats supported by CCF, namely XML, are standard data formats that have their own standard notation, such as the XML Schema Definition Language (XSD). For this format, eIUM includes the xfdtool command utility, which you can use to produce format definitions, transformations, and schemas from native format definitions.

NOTE: If you need to convert an SNME configuration to an XSD schema, you can use the
config2xsd command. The command reads an SNME configuration file and generates XSD
schemas for a corresponding Structured NME schema. See the eIUM Command Reference for details.

xfdtool has two main functions:

l The xsd2xfd command, which allows you to generate XFD format files from schema definitions
inside of <type> elements enclosed by a given WSDL document. It can also generate XFD format
files from a given XML Schema definition (XSD) document.
l The xfd2nme command, which allows creating Transformations and NME descriptors from a
given XFD file.
For more information on xfdtool's options and examples, see the eIUM Command Reference.

Format Definitions in IUMStudio


The IUMStudio application allows Format Definitions (XFD files, whether parsers or formatters) to be created, edited, and visualized, either from the source code or graphically. Format Definitions can be expressed as <type> definitions and <codec> definitions; codec definitions are an integral part of the Format Definition. Codecs can be defined in-line within <type> definitions, or can exist as separate <codec> elements in the Format Definition. In the latter case, the codec is referenced from the Format Definition by its "id" and can be used in several places in the format (for more information, see the "XFD Syntax Reference" (on page 157)).

Starting IUMStudio
To start the IUM Studio, go to %BINROOT%/apps/IUMStudio to launch the IUMStudio executable
(IUMStudio.exe on Windows, or IUMStudio on UNIX). The application is displayed as below.


After launching the main interface, you can connect to the repository server by first left-clicking its
"RepositoryServer" label in the Connections pane.

This enables the connect button that you can click to connect to the repository server.


Click the connect button to establish the repository server connection (a progress indicator appears
to indicate this).


Once you are connected, you can then click the Checkout button in the Connections pane to begin
checking out files.

You are then prompted for the checkout destination directory. This is the local working directory on
your system where you will work with the XFD, TX, and XSD files in IUMStudio. By default this is
%VARROOT%/apps/IUMStudio/RepositoryServer.

After specifying the checkout directory, click Ok. The directory is then displayed in the Checked out
projects pane.


The eIUM deployment presumes and uses a certain directory structure under the root
/RepositoryServer directory for CCF. This is needed so you can set up your IUM Studio work
environment for the XFD, TX, and XSD files used by CCF. The directory structure is the following:
/datamodel
    /beans
    /format
    /nme
    /transform

The /beans and /nme directories are used by the Schema layer, namely for the XSD files. The /format directory is used by the Codec and Format Definition layer (XFD and source data files), while /transform is utilized by the Transformation layer (TX files). You can populate this directory structure directly via the file system of the host running IUMStudio (under %VARROOT%/apps/IUMStudio/RepositoryServer), or you can use IUM Studio's built-in tools for
creating these directories. To do this, select the "datamodel" directory from the Checked out
projects pane, and either right-click and select New > Folder, or File > New > Folder. The New Folder
dialog is displayed, where you can create the necessary directory structure from within the
application.

NOTE: The directories under /datamodel are required and should be named accordingly.
Initially, as shown above, the repository is empty. If not named according to the required format,
files placed in a non-standard directory can fail validation and you will not be able to check in
modified files to the repository.


Now that all the required directories are populated, you can begin using the IUMStudio for your
CCFtasks. At this point you can check in your changes or begin working with your files and check in
your changes later. The following figure displays the starting view after all directories are created
(but no files are opened yet).


About XFD File View


When viewing a Format Definition (XFD) file, the Studio includes a multi-page editor with three tab views: Source, Types, and Codecs (which correspond to the sub-tabs you can click to open each view when you have an XFD file open in the Studio). The Source tab of an XFD file provides XML source editing capabilities with syntax highlighting, code completion, formatting, validation, and code assist. The Types tab includes graphical controls for creating (via the Palette tab) and editing <type> elements. Lastly, the Codecs tab provides graphical controls for creating and editing named <codec> elements. Changes you apply on any of the tabs are synchronized and reflected across the other tabs.

Source Tab
The Source tab allows you to edit the XFD file in text-only view where you can edit the XML source
directly.


When editing, you can also roll your mouse over an attribute, for example, to get a pop-up hint showing the attribute's possible values and usage.


While typing to add new elements or attributes, the code assist will appear and prompt you with
available options, as shown in the following figure.


Notice that in the above figure, the source is validated as you type, with syntax errors indicated in
the vertical scroll area and warning indicators on the right. See the "XFD Syntax Reference" (on page
157) for more information on the format definition language and its configurable elements and
attributes that appear on the Source tab.

Types Tab
The Types tab consists of three parts: header, body, and Palette tool bar. The Palette contains the following controls (which are context-sensitive based on the insertion point for adding, editing, or deleting elements):


l Collapse/expand all: Collapse or expand all type boxes by clicking the -/+ button.

l Add import: Click the Add import button to add a new <import> element.
l Delete: Select an element in a <type> box, the <type> box itself, or an <import>, and click the delete button.


l New Type: Create a new <type> box and generate a <type id="NewType" /> element in the source.


l Add sequence: Add a <sequence> element into the selected <type> and select a visual element from the Palette representing the <sequence>. This Palette control is enabled only if an empty type box is selected. In other words, the empty type box must be selected first; then, from the Palette bar, select Add sequence.



l Add set: Add a <set> element into the selected <type> and select a visual element from the Palette representing this <set>. This Palette control is enabled only if an empty <type> box is selected; click Add set in the same manner that a <sequence> element is added above.
l Add array: Add an <array> element into the selected <sequence> and select a visual element representing this <array>, by clicking Add array.
l Add field: Add a <field> element into the selected <sequence> or <set>. This control is enabled only if either a <sequence> or a <set> element is selected; then click Add field. The figures below show a <field> element being added into a <set>.


l Add choice: Add a <choice> element into the selected <sequence> and select a visual element representing this <choice>. This control is enabled when a <sequence> is selected; then click Add choice.


The following figures provide an orientation to the primary interface aspects and controls you can
work with on the Types tab.


The corresponding source view for this <type> element is the following:
<type id="CIBERRecord01">
<codec>
<raw size="198" unit="Byte" />
</codec>
<sequence>
<field id="BATCH_CREATION_DATE" type="string" codec="StringUTF8Len6"
/>
<field id="BATCH_SEQNO" type="string" codec="StringUTF8Len3" />
<field id="SENDING_SID" type="string" codec="StringUTF8Len5" />
<field id="RECEIVING_SID" type="string" codec="StringUTF8Len5" />
<field id="REC_REL_NO" type="string" codec="StringUTF8Len2" />
<field id="ORIG_RETURN_IND" type="string" codec="StringUTF8Len1" />
<field id="CURRENCY_TYPE" type="string" codec="StringUTF8Len2" />
<field id="SETTLEMENT_PERIOD" type="string" codec="StringUTF8Len6"

Page 183 of 332 HP eIUM (8.0)


Foundation Guide
Chapter 11: Using the Common Codec Framework: Structuring and Transforming your Data

/>
<field id="CLEARINGHOUSE_ID" type="string" codec="StringUTF8Len1" />
<field id="BATCH_REJECT_REASON_CODE" type="string"
codec="StringUTF8Len2" />
<field id="BATCH_CONTENT" type="string" codec="StringUTF8Len1" />
<field id="LOCAL_CARRIER_RESERVED_FILLER" type="string"
codec="StringUTF8Len20" />
<field id="SYSTEM_RESERVED_FILLER" type="string"
codec="StringUTF8Len144" />
</sequence>
</type>

The corresponding source views for the first and last <type> elements from this figure are the
following:
<type id="CDRFile" EOR="true">
<codec> <raw /> </codec>
<sequence>
<field id="header" type="Header" codec="UInt1" />
<array id="cdrs" length="5">
<field type="CDR" />


</array>
</sequence>
</type>
...
<type id="CDR">
<codec> <raw/> </codec>
<sequence>
<field id="recordType" type="int" codec="UInt2" />
<switch field="recordType">
<case value="2">
<field id="fixedLengthCDR" type="FixedLengthCDR" />
</case>
<case value="3">
<field id="domesticCDR" type="DomesticCDR" />
</case>
</switch>
</sequence>
</type>

The header at the top of the file in Type view displays an editable text field with the format
identifier (namespace). You can click this field to change the value of the <format> element's
"targetNamespace" attribute.

The Types tab content represents all the <type> elements in the XFD file as a grid of collapsible and expandable boxes (one box = one <type> element). From here you can click the box header, which corresponds to the <type> "id" attribute that you can edit. Click to select the header (id), then click again to make the text field available for editing.


The <type> box also contains each <type>'s <field> elements, which you can edit in a similar manner, as well as controls for editing inline codecs and marking the type as root. Certain fields have drop-down selection lists where you can choose different attribute values. To edit a field, first left-click it to make it available for editing, as shown below.


For example, you can edit the <field> "id" attribute by entering it in the text box, select a new value
for the "type" or "codec" attributes (shown above), or add a "condition" attribute and value by
clicking the (?) icon. You can also click the gear icon to select the codec input data format by
selecting the corresponding "raw", "text", or "xml" options from the drop-down list. The following is
an example of the "type" attribute selections that can be chosen:

NOTE: You can also right-click any of the elements from the tree in the Outline pane to edit
elements and their attributes or values. As you work with a particular <type> element in the Type
tab, it will also be highlighted for you in the Source tab.

The <switch> element can also be collapsed and expanded in Type view. The following figure shows a
"CDR" <type> with the <switch> "recordType" currently collapsed.


Click the switch icon to expand the nested element view, as shown below.


See the "XFD Syntax Reference" (on page 157) for more information on the format definition
language and its configurable elements and attributes that appear on the Types tab.

Codecs Tab
The Codecs tab is very similar to the Types tab, with the main view consisting of a box grid that represents the named codecs (the top-level <codec> elements with the "id" attribute). Much like the Types tab for <type> elements, the box consists of an editable header (the <format> element can be edited just as in the Types tab), with the codec ID and an editable list of the codec's properties. The <codec> header contains a drop-down list of possible codec types (raw, text, XML). Changing the value of the codec results in changing the list of codec properties and generation of corresponding elements in the source (<raw>, <text>, <xml>). Furthermore, the Codecs tab contains a Palette bar for context-specific actions, much like the Types tab.

On the Codecs tab you can perform the following functions with the corresponding Palette controls,
in much the same manner as that described for the Types tab:


l Collapse/expand all: Collapse or expand all <codec> boxes by clicking the -/+ button.
l Add import: Click the Add import button to add a new <import> element.
l Delete: Select an element in a <codec> box, the <codec> box itself, or an <import>, and click the delete button.
l New codec: Create a new <codec> element and add a new codec box into the main Codec tab body (by clicking the New codec button in the Palette bar).
The following figure summarizes these options and provides an orientation to the primary interface
controls you can work with on the Codecs tab.

NOTE: You can also modify the actual <codec> element from the Types tab by clicking the gear
icon in the upper left portion of the Type box.

See the "XFD Syntax Reference" (on page 157) for more information on the format definition
language and its configurable elements and attributes that appear on the Codecs tab.

Using the XFD-to-XSD Wizard


As mentioned previously, IUM Studio includes a wizard interface that you can use to automate generating Schema (XSD) and Transformation (TX) files based on a given XFD file. For example, the wizard allows you to:

l Choose one or more input XFD files.


l Enter a path to the output directory for generated XSD files.
l Enter a path to the output directory for generated transformation files.


l After you run the wizard, it generates three files for each input XFD file: the XSD Schema, a transformation (TX) file with automated mappings between the existing XFD and XSD files (XFD -> XSD), and a reverse transformation file (XSD -> XFD).
To launch this wizard, first right-click a given XFD file in the Checked out projects pane, and select
New -> Convert XFD to XSD.

Next, you can accept the file you originally selected or select additional files from the dialog.


After selecting files, you can specify the output directory for the generated XSD file (by default, the
assumed /nme directory is selected).


Lastly, specify the output directory for the generated transformation files (by default, the
/transform directory).


Click Finish to run the wizard. A confirmation dialog is displayed to indicate the number of files
created.


CCF: The Transformations Layer


The Transformations Layer is an essential part of the overall CCF infrastructure and is tightly integrated with the Codec Layer. The Transformations Layer can be used in conjunction with the Codec Layer, as well as standalone (to build a transformation business rule with the TransformationRule component, for instance). The Transformations Layer is responsible for mappings and transformation of input data, using TX files (Transformation Definition files: *.tx, a custom eIUM XML file format that contains the Transformations descriptors) in the IUMStudio application. The Transformations Layer can be thought of as the "glue" that bridges the format definition (XFD) and the schema definition (XSD) in either direction (for example, the CCF system can convert XFD -> XSD or XSD -> XFD).

The Transformations Layer functionality allows transforming (or converting) simple or structural
data into different data types. Transformations are driven by the declarative transformation
directives expressed using the XML-based Transformations Definition language in IUM Studio's TX
files. The following types of transformations are supported by the Transformations Layer:

l Simple -> Simple


l Structural -> Structural
l Simple -> Structural
l Structural -> Simple
l Array of Simple or Structural -> Array of Simple or Structural
l Decoder Output -> Structural
l Structural -> Encoder Input
l Structural -> Flat NME
l Flat NME -> Structural
where "Structural" refers to a Composite transformation, and "Simple" is one of the following:
"byte", "short", "int", "long", "float", "double", "boolean", "char", and "string". The CCF
Transformation Layer supports two kinds of transformations: Simple Transformations or Composite
Transformations. Simple Transformations represent the canned, "out-of-the-box" transformations
from one of the "Simple" data types to another. The CCF Transformations Layer design and
implementation approach is based on the concept of a Simple Transformation and its derivatives. A
Simple Transformation is a hard-coded Java class implementing a transformation of a specific
primitive or object type into another primitive or object type. Composite Transformations,
meanwhile, serve as an extension point in CCF, and represent customized, more complex
transformations that can be created. They are derivatives of Simple or other Composite
Transformations, producing a new transformation out of one or more existing transformations. For example, the CCF user can thus utilize a Simple Transformation, build a complex (Composite) one,
write Java code to build a simple Java transformation, or write a script to perform a transformation.

See "Simple Transformations" (on page 196) information on Simple Transformations, and
"Composite Transformations" (on page 240) for details on Composite Transformations.

Simple Transformations
To include a given Simple (Data) Transformer in another, more complex transformation, the <use-
transformer> element is used to specify mandatory attributes for the input type (from), for the
output type (to), and for the transformer local id, qualified name or class name ("class"). The
transformer qualified name is specified by "namespace prefix" : "transformer id". Below is an
example of a Simple Transformer used inside the transformation XML descriptor of a TX file:
<import ns="http://www.hp.com/usage/datastruct/transform" prefix="tx"
/>
...
<use-transformer from="int" to="long" class="tx:IntToLongCopy" />

See "Language Overview" (on page 199) for more information on the Transformation Definition
language, and "<use-transformer> Class Attribute Values" (on page 207) for a listing of possible
"class" values that correspond to the transformers.

Transformation Parameters
Below is an example of parameterized Simple transformer usage inside a corresponding transformation descriptor in a TX file, expressed in terms of the Transformation Definition language's <use-transformer>, <params>, and <param> element definitions:
<import prefix="tx" ns="http://www.hp.com/usage/datastruct/transform"
/>
...
<use-transformer from="string" to="long" class="tx:DateStringToLong">
<params>
<param name="format" value="dd/mm/yyyy hh:mm:ss"/>
</params>
</use-transformer>

Integrating Custom Transformations


The transformations written in Java and conforming to the specified Transformations API can be
deployed to eIUM as regular JAR files. In addition to the standard (Simple) transformations
packaged with eIUM, new Composite Transformations can be added as class or JAR files under
%BINROOT%/lib/share and registered in the eIUM CLASSPATH.

Transformations Integration with the Codec Layer


Transformations used along with the CCF Codec Layer have specialized interfaces to feed or
consume data to and from the Transformations Layer. This programming interface is natively
recognized and used by the IUM Studio Codec Design view component for generation of the binding
code tying the codec and the transformations together (see "Format Definitions in IUMStudio" (on
page 166) for more information on IUM Studio and the Codec Layer).

The main difference for Transformations used with the Codec Layer is that transformation
processes for decoding are driven by the Codec, not by the Transformations Layer. The Codec Layer decides when to create output data structures and controls their life cycle. Due to this
difference, not all of the Transformations features can be used along with the Codec Layer for
structural transformations:

l Script Transformations are not supported for transforming structural data, but it is still possible to use Script Transformations for non-structural data. An example of a non-structural script transformation can be found in the transformation "DateStringToLongScript" in "Composite Transformations" (on page 240) (see Script Transformations).
l Nested paths for fields, like f1.f2.f3, are not supported in Bean (Structural) Transformations as input or output fields for decoding or encoding, respectively.
When generating codec and transformations code, the following scheme is applied:

Codec code -> Codec-Transformations binding <- Transformations code

Transformations and Business Rules: The TransformationRule


The TransformationRule component (see the eIUM Component Reference) provides standalone access to the CCF Transformations Layer for eIUM applications through the business rule chain infrastructure (RuleChain). The TransformationRule accepts flat NME objects and processes them, namely, allowing transformations between NMEs (the structured NMEs/NMEAdapters referred to in the eIUM Component Reference) and NormalizedMeteredEvent ("Flat" or traditional NMEs) types in various combinations. The TransformationRule fully relies on the CCF Transformations Layer to perform such transformations.

The incoming NME to this rule must be an NMEAdapter or a standard NME. You can use this rule to
transform or just copy data values between structured and standard NME parts. The following types
of transformations are supported by the rule: NME -> NME, NME -> NormalizedMeteredEvent,
NormalizedMeteredEvent -> NME and NormalizedMeteredEvent -> NormalizedMeteredEvent.

The transformation can be performed in place, where the provided input NME or Flat NME object is modified and returned, or it may require construction of a new output NME or Flat NME object. The TransformationRule supports both in-place transformation and construction of new output objects, and can construct new NME and Flat NME objects when needed.

Attribute Transformation
Transformations with NMEs and Flat NMEs are actually transformations of their attributes. NME and
Flat NME attributes are accessed by the CCF Transformations Layer with some key differences, however. NME attributes are accessed as bean attributes, which is supported when NMEs are used.
Attributes of Flat NMEs are accessed using dedicated transformers from the plug-in
com.hp.usage.datastruct.nme.transform.

Transformations Language Descriptor


The TransformationRule uses the CCF Transformations Layer, which performs transformations driven by the directives in the XML descriptor (the Transformations TX XML file modified in IUMStudio).
The descriptor for NME and Flat NME transformations is configured in accordance with the CCF
Transformations language specification (see "Language Overview" (on page 199)), and takes the
following form:
<!-- import NormalizedMeteredEvent transformers -->
<import prefix="attrtx"
    ns="http://www.hp.com/usage/datastruct/transform/attr" />

<!-- import namespace and NME types from XSD -->
<import prefix="cdrnme"
    ns="http://www.hp.com/usage/nme/nmeschema/CDRSample" />

<!-- import namespace with NormalizedMeteredEvent and its attributes data types -->
<import prefix="siu" ns="http://www.hp.com/siu/utils" />

<transformer id="CdrInfoToFlatNme" from="cdrnme:CDRInfo"
    to="siu:NormalizedMeteredEvent">
    <transform input="" output="">
        <use-transformer from="long" to="siu:TimeAttribute"
            class="attrtx:LongToTimeAttribute" />
    </transform>
    <transform input="hostNameAttr" output="HostName">
        <use-transformer from="string" to="siu:StringAttribute"
            class="attrtx:StringToStringAttribute" />
    </transform>
</transformer>

This example demonstrates base features of the Transformations for NMEs and Flat NMEs.

Component Configuration
The TransformationRule requires the following information at runtime:

l The input and output object types, defined by the "InputType" and "OutputType" configuration
attributes. The NME object requires namespace and type information, whereas for Flat NMEs it is
sufficient just to mention the class name: NormalizedMeteredEvent.
l The path to the transformations XML descriptor, defined by the "TransformationPath"
configuration attribute. This can be a path to the repository server or URI in the file system. For
example, for the repository server, "datamodel/transform", or the file system path with the
prefix "file:///". The path can be provided to a directory as well as a file.
l The identifier of the transformer that will perform the transformation, defined by the
"Transformer" configuration attribute.
All this information is provided by means of the following configuration, for example:
[/deployment/host/server/RuleChains/transform
ClassName=com.hp.usage.datastruct.nme.rule.TransformationRule
InputType=CDRSample:CDRInfo
OutputType=NormalizedMeteredEvent
# transformations descriptor at RepositoryServer
Transformations=/datamodel/transform/NMETransformations.tx
# if transformations descriptor in file-system
# Transformations=file:///C:/IUM/desc/NMETransformations.tx
Transformer=CdrInfoToFlatNme

For additional information, see the TransformationRule component description in the eIUM Component Reference.


Language Overview
Much like the XFD Format Definition language, the XML-based Transformations Definition language
has a syntax that requires Transformations descriptors to be well-formed XML documents (TX files).
Besides the base XML rules the language conforms to, the Transformations Definition language has
its own syntax, which defines the possible XML elements and attributes, as well as their
combinations. The validation for such syntax is performed using the repositorymanager utility, as
well as IUMStudio.

In addition to these standard XML validations, the Transformations Definition language has its own
validator written in Java, which performs the following checks:

l Script validation. This is required when the script is written inside of a CDATA element, and should
be validated against the specified script language syntax.
l Data type consistency. The transformations XML descriptor refers to data structures, fields, and transformers, and requires strong typing by design. Type-consistency checking is performed using type information from the XML descriptor and from the Java annotations that are required for the transformers.
For a description of the Transformation Definition language elements and syntax, see
"Transformations Syntax Reference (TX)" (on page 199).

Transformations Syntax Reference (TX)


The Transformations Language descriptors in TX files specify rules of transformation between
structured and primitive types of data. The Transformations Language descriptors are XML
documents with elements derived from the "http://www.hp.com/usage/datastruct/transform" XML
namespace.

<transformations>

The <transformations> element is the root of the Transformations XML descriptor, and encloses
elements with transformation information for structured and primitive data types.

Syntax
<transformations>
(format*,transformer*)
</transformations>

Elements

Element Name Description

<format> Specifies namespace and format type of data.

<transformer> Specifies transformation between data structured and primitive types.

<format>

The <format> element specifies the namespace and format type of structured and primitive types
of data to be transformed.

Syntax


<format
type= xs:string
prefix= xs:string
ns= xs:string />

Attributes

Attribute Name Description

type Specifies the format type of data. The possible formats are: xsd, xfd, nme and siu-nme.

ns Specifies the namespace for the types.

prefix Specifies the prefix for the namespace to be used in the transformations.

<transformer>

The <transformer> elements are the main elements of Transformations descriptors, and define
transformations between structured or primitive types of the specified format.

Syntax
<transformer
id?= xs:string
from= xs:string to= xs:string >
(transform*?, script?, chain?, array?)
</transformer>

Attributes

Attribute
Name Description

id Specifies the name of the transformer and allows referencing this transformer
from other transformations.

from Specifies the input type of transformation.

to Specifies the output type of transformation.

Elements

Element
Name Description

<transform> Specifies the transformation for a pair of fields of structured data types. The
transformation is performed by a referenced or nested transformer.

<script> Specifies transformation between structured and primitive types using a script.

<chain> Specifies transformation as a sequence of nested or referenced transformations.

<array> Specifies transformation for array data types.


<transform>

The <transform> element specifies the transformation for a pair of fields from structured data
types. The transformation is performed by a referenced or nested transformer.

Syntax
<transform input= xs:string output= xs:string >
(use-transformer?, transformer?)
</transform>

Attributes

Attribute Name Description

input Specifies the name of the data field to take a value from.

output Specifies the name of the data field to place the transformed value.

Elements

Element Name Description

<use-transformer> References a transformer to perform the value transformation.

<transformer> Defines the nested transformer to perform the value transformation.
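For example (from the "Transformation Example" later in this chapter), copying the fileType field while widening it from int to long:

<transform input="fileType" output="fileType">
    <use-transformer from="int" to="long" class="basetx:IntToLongCopy" />
</transform>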

<script>

The <script> element specifies transformations between structured and primitive types using a
script. The script can be specified within a CDATA element or referenced from an external source.

Syntax
<script language= xs:string src?= xs:string >
(args(arg*)?, CDATA?)
</script>

Attributes

Attribute Name Description

language Specifies the name of the script language being used.

src Specifies external source for the transformation script to be taken from.

CDATA Specifies the transformation script.

Elements

Element Name Description

<args> Specifies the arguments which this script can accept.


<chain>

The <chain> element specifies single-value transformation as a sequence of other transformations.

Syntax
<chain>
(use-transformer*?, transformer*?)
</chain>

Elements

Element Name Description

<use-transformer> References a transformer to perform the value transformation.

<transformer> Defines a nested transformer to perform the value transformation.
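A minimal sketch of a chain that first parses a hexadecimal string into an int and then renders that int as an IPv4 address string (both transformer classes appear in the "<use-transformer> Class Attribute Values" table below; the field names are illustrative, and the basetx prefix is assumed to be bound to the http://www.hp.com/usage/datastruct/transform namespace as in the other examples in this chapter):

<transform input="addrHex" output="addrText">
    <chain>
        <use-transformer from="string" to="int" class="basetx:HexStringToInt" />
        <use-transformer from="int" to="string" class="basetx:IntToIPv4String" />
    </chain>
</transform>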

<array>

The <array> element specifies transformations of data arrays.

Syntax
<array>
(use-transformer)
</array>

Elements

Element Name Description

<use-transformer> Specifies the transformer to be used for transforming the value of <array> elements.
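A minimal sketch that applies the same element transformer to every element of an array value (assuming the basetx prefix is imported as in the other examples in this chapter):

<array>
    <use-transformer from="int" to="long" class="basetx:IntToLongCopy" />
</array>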

<use-transformer>

The <use-transformer> element references an existing <transformer> to perform a value transformation.

Syntax
<use-transformer
class= xs:string
from= xs:string to= xs:string >
(params(param*)?)
</use-transformer>

Attributes


Attribute
Name Description

class The qualified name of the transformer, that is, namespace_prefix : transformer_id,
transformer full class name, or local transformer id.

See "<use-transformer> Class Attribute Values" (on page 207) for a listing of possible
transformer classes.

from Specifies the input type of the transformation.

to Specifies the output type of the transformation.

Elements

Element Name Description

param Specifies parameters which the specified transformer can accept.

<param>

The <param> element specifies information about the transformation parameter.

Syntax
<param name= xs:string value= xs:string />

Attributes

Attribute Name Description

name Specifies the name of the parameter.

value Specifies the value of the parameter.

<arg>

The <arg> element specifies information about the argument which transformations can accept.

Syntax
<arg name= xs:string type= xs:string default?= xs:string description?= xs:string />

Attributes

Attribute Name Description

name Specifies the name of the argument.

type Specifies the type of the argument.

default Specifies the default value of the argument.

description Specifies the description of an argument.


Transformation Example

Assuming input data with the following structure in a binary format:

Sample Format Description


[Header] Type

File Type int

Data format type int

Records Source string

File Sequence Number long

File Creation Type byte

[Records] Array of records with the following structure

Record Type Int, defines type of the record

[FixedLengthCDR] recordType=2

Original Unified Caller ID String

Destination Unified Caller ID String

Audio Codec Int

Termination Reason Int

Connection time Long

[DomesticCDR] recordType=3

Original Caller ID String

Connection time Long

Mobile Call String

See "XFD Syntax Reference" (on page 157) for an example of the corresponding XFD format
definition. The corresponding transformation could be the following:
<?xml version="1.0" encoding="utf-8"?>
<transformations xmlns="http://www.hp.com/usage/datastruct/transform"
targetNamespace="http://www.hp.com/usage/datastruct/transform/cdrsample">

<!-- XFD imports -->


<import prefix="xfd"
ns="http://www.hp.com/usage/datastruct/xfd/cdrsample"/>
<!-- XSD NME imports-->
<import prefix="nme"
ns="http://www.hp.com/usage/nme/nmeschema/CDRSample"/>


<!-- Transformations imports-->


<import prefix="basetx"
ns="http://www.hp.com/usage/datastruct/transform"/>

<transformer id="HeaderTransformer" from="xfd:Header"


to="nme:FileHeader">
<transform input="fileType" output="fileType">
<use-transformer from="int" to="long"
class="basetx:IntToLongCopy" />
</transform>
<transform input="fileSeqNum" output="fileSeqNum">
<use-transformer from="int" to="long"
class="basetx:IntToLongCopy" />
</transform>
</transformer>

<transformer id="CDRTransformer" from="xfd:CDR" to="nme:CDRData" >


<transform input="recordType" output="recordType">
<use-transformer from="int" to="int"
class="basetx:IntToIntCopy" />
</transform>
<transform input="fixedLengthCDR" output="fixedLengthCDR">
<use-transformer from="cdrFormat:FixedLengthCDR"
to="cdrNme:FixedLengthCDRData"
class="FixedLengthCDRTransformer" />
</transform>
<transform input="domesticCDR" output="domesticCDR">
<use-transformer from="cdrFormat:DomesticCDR"
to="cdrNme:DomesticCDRData"
class="DomesticCDRTransformer" />
</transform>
</transformer>

<transformer from="xfd:FixedLengthCDR" to="nme:FixedLengthCDRData"


id="FixedLengthCDRTransformer" >
<transform input="originalCallerId" output="originalCallerId">
<use-transformer from="string" to="string"
class="basetx:StringToStringCopy" />
</transform>
<transform input="destinationCallerId"
output="destinationCallerId">
<use-transformer from="string" to="string"
class="basetx:StringToStringCopy" />
</transform>
<transform input="connectionTime" output="connectionTime">
<use-transformer from="long" to="long"
class="basetx:LongToLongCopy" />
</transform>
</transformer>


<transformer id="DomesticCDRTransformer" from="xfd:CDR"


to="nme:DomesticCDRData" >
<transform input="originalCallerId" output="originalCallerId">
<use-transformer from="string" to="string"
class="basetx:StringToStringCopy" />
</transform>
<transform input="connectionTime" output="connectionTime">
<use-transformer from="long" to="long"
class="basetx:LongToLongCopy" />
</transform>
<transform input="mobile" output="mobile">
<use-transformer from="string" to="boolean"
class="basetx:StringToBooleanParse" />
</transform>
</transformer>
</transformations>


<use-transformer> Class Attribute Values

Transformers in the http://www.hp.com/usage/datastruct/transform namespace. Each entry below lists the transformer class, its input and output types in the form Class (Input Type -> Output Type), and a description.

Base64StringToByteArray (string -> byte[]): Decode ASCII string with binary data in textual Base64 format into byte array.
BinaryStringToByte (string -> byte): Decode string with "0" and "1" characters as binary data into byte value. For example, "0101" is decoded into 5.
BinaryStringToInt (string -> int): Decode string with "0" and "1" characters as binary data into int value.
BinaryStringToLong (string -> long): Decode string with "0" and "1" characters as binary data into long value.
BinaryStringToShort (string -> short): Decode string with "0" and "1" characters as binary data into short value.
BooleanArrayToBoolean (boolean[] -> boolean): Return one value, specified by the 'offset' parameter, from array of boolean values. Parameters: offset - int value specifies the offset inside the input array; the default '0' means that the first element of the array is returned.
BooleanObjectToBoolean (java.lang.Boolean -> boolean): Convert Boolean Object into primitive boolean value.
BooleanToBooleanArrayWrap (boolean -> boolean[]): Return one element sized boolean array which contains only the input value.
BooleanToBooleanCopy (boolean -> boolean): Return input boolean value. Can be used when the value is just copied without changes.
BooleanToBooleanObject (boolean -> java.lang.Boolean): Convert boolean primitive value into Boolean Object.
BooleanToBooleanReverse (boolean -> boolean): Return the negation (logical 'NOT') of the input boolean value.
BooleanToByte (boolean -> byte): Returns 0 or 1, when input boolean is false or true.
BooleanToString (boolean -> string): Returns "false" or "true", when input value is false or true.
ByteArrayToBase64String (byte[] -> string): Encode byte array into ASCII string using Base64 encoding scheme.
ByteArrayToByte (byte[] -> byte): Return one value, specified by the 'offset' parameter, from array of byte values. Parameters: offset - int value specifies the offset inside the input array; the default '0' means that the first element of the array is returned.
ByteArrayToByteArrayCopy (byte[] -> byte[]): Creates new output byte array and copies values from input array.
ByteArrayToByteArrayIdentity (byte[] -> byte[]): Returns input byte array. Can be used when creation of a new array is not required.
ByteArrayToHexString (byte[] -> string): Encode the input data producing a Hex encoded String.
ByteArrayToIPv4String (byte[] -> string): Convert network order binary form to IPv4 presentation level address.
ByteArrayToIPv6String (byte[] -> string): Convert network order binary form to IPv6 presentation level address.
ByteArrayToInt (byte[] -> int): Read 4 bytes from the byte array and convert them to int value. Parameters: offset - int value specifies the offset inside the input array; the default '0' means that reading starts at the first element of the array.
ByteArrayToLong (byte[] -> long): Read 8 bytes from the byte array and convert them to long value. Parameters: offset - int value specifies the offset inside the input array; the default '0' means that reading starts at the first element of the array.
ByteArrayToShort (byte[] -> short): Read 2 bytes from the byte array and convert them to short value. Parameters: offset - int value specifies the offset inside the input array; the default '0' means that reading starts at the first element of the array.
ByteArrayToString (byte[] -> string): Decode byte array into string value. Parameters: charset - string value specifies a character set to be used for string decoding.
ByteObjectToByte (java.lang.Byte -> byte): Convert Byte Object into primitive byte value.
ByteToBinaryString (byte -> string): Encode byte value into binary format as a string with "0" and "1" characters.
ByteToBoolean (byte -> boolean): Return boolean "true" or "false" for byte values "1" or "0".
ByteToByteArrayWrap (byte -> byte[]): Return one element sized byte array which contains only the input value.
ByteToByteCopy (byte -> byte): Return input byte value. Can be used when the value is just copied without changes.
ByteToByteObject (byte -> java.lang.Byte): Convert byte primitive value into Byte Object.
ByteToCharCast (byte -> char): Return byte value as a char value using type casting.
ByteToDecimalString (byte -> string): Return string with decimal representation of byte value.
ByteToHexString (byte -> string): Return string with hexadecimal representation of byte value.
ByteToOctalString (byte -> string): Return string with octal representation of byte value.
ByteToShortCopy (byte -> short): Return input byte value cast to short type.
CharArrayToChar (char[] -> char): Return one value, specified by the 'offset' parameter, from array of char values. Parameters: offset - int value specifies the offset inside the input array; the default '0' means that the first element of the array is returned.
CharArrayToCharArrayCopy (char[] -> char[]): Create new output char array and copy values from input array.
CharArrayToCharArrayIdentity (char[] -> char[]): Returns input char array.
CharArrayToStringWrap (char[] -> string): Return string representation of the char array.
CharObjectToChar (java.lang.Character -> char): Convert Char Object into primitive char value.
CharToCharArrayWrap (char -> char[]): Return one element sized char array which contains only the input value.
CharToCharCopy (char -> char): Return input char value.
CharToCharObject (char -> java.lang.Character): Convert primitive char value into Char Object.
CharToIntCast (char -> int): Return char value as int value using type casting.
CharToStringWrap (char -> string): Return new string which contains only the one input char value.
DateStringToLong (string -> long): Return the number of milliseconds since January 1, 1970, 00:00:00 GMT represented by the input Date object. Parameters: format - string value specifies the date format pattern, for example, yyyy-MM-dd'T'HH:mm:ssZ.
DecimalStringToByte (string -> byte): Parse the input string as a signed decimal byte value and return it.
DecimalStringToInt (string -> int): Parse the input string as a signed decimal int value and return it.
DecimalStringToLong (string -> long): Parse the input string as a signed decimal long value and return it.
DecimalStringToShort (string -> short): Parse the input string as a signed decimal short value and return it.
DoubleArrayToDouble (double[] -> double): Return one value, specified by the 'offset' parameter, from array of double values. Parameters: offset - int value specifies the offset inside the input array; the default '0' means that the first element of the array is returned.
DoubleObjectToDouble (java.lang.Double -> double): Convert Double Object into primitive double value.
DoubleToDecimalString (double -> string): Return string with decimal representation of double value.
DoubleToDoubleArrayWrap (double -> double[]): Return one element sized double array which contains only the input value.
DoubleToDoubleCopy (double -> double): Return input double value.
DoubleToDoubleObject (double -> java.lang.Double): Convert primitive double value into Double Object.
DoubleToFloatTruncate (double -> float): Return double value as a float value using type casting.
DoubleToHexString (double -> string): Return string with hexadecimal representation of double value.
FloatArrayToFloat (float[] -> float): Return one value, specified by the 'offset' parameter, from array of float values. Parameters: offset - int value specifies the offset inside the input array; the default '0' means that the first element of the array is returned.
FloatObjectToFloat (java.lang.Float -> float): Convert Float Object into primitive float value.
FloatToDecimalString (float -> string): Return string with decimal representation of float value.
FloatToDoubleCopy (float -> double): Return input float value as a double value.
FloatToFloatArrayWrap (float -> float[]): Return one element sized float array which contains only the input value.
FloatToFloatCopy (float -> float): Return input float value.
FloatToFloatObject (float -> java.lang.Float): Convert primitive float value into Float Object.
FloatToHexString (float -> string): Return string with hexadecimal representation of float value.
HexStringToByte (string -> byte): Parses the string argument as a signed byte value in hexadecimal format.
HexStringToByteArray (string -> byte[]): Decode the Hex encoded String data.
HexStringToInt (string -> int): Parses the string argument as a signed int value in hexadecimal format.
HexStringToLong (string -> long): Parses the string argument as a signed long value in hexadecimal format.
HexStringToShort (string -> short): Parses the string argument as a signed short value in hexadecimal format.
IPv4StringToByteArray (string -> byte[]): Converts IPv4 address in its textual presentation form into its numeric binary form.
IPv4StringToInt (string -> int): Converts IPv4 address in its textual presentation form into its numeric binary form and returns the binary as int value.
IPv6StringToByteArray (string -> byte[]): Convert IPv6 presentation level address to network order binary form.
IntArrayToInt (int[] -> int): Return one value, specified by the 'offset' parameter, from array of int values. Parameters: offset - int value specifies the offset inside the input array; the default '0' means that the first element of the array is returned.
IntObjectToInt (java.lang.Integer -> int): Convert Integer Object into primitive int value.
IntToBinaryString (int -> string): Return a string representation of the integer argument as an unsigned integer.
IntToByteArray (int -> byte[]): Convert int value into byte array. Writes four bytes containing the given int value, in the current byte order.
IntToCharCast (int -> char): Return int value as a char value using type casting.
IntToDecimalString (int -> string): Return string representation of the int argument.
IntToHexString (int -> string): Return string representation of the integer argument in hexadecimal format.
IntToIPv4String (int -> string): Encode int value into string as an IPv4 address.
IntToIntArrayWrap (int -> int[]): Return one element sized int array which contains only the input value.
IntToIntCopy (int -> int): Return input int value.


IntCopy

IntToIntObject int java.lang Convert primitive int value into Integer Object.
.Integer

Page 212 of 332 HP eIUM (8.0)


Foundation Guide
Chapter 11: Using the Common Codec Framework: Structuring and Transforming your Data

IntTo int long Return input int value as a long value.


LongCopy

IntTo int string Return a string representation of the integer


OctalString argument in octal format.

IntToShort int short Return int value as a short value using type casting.
Truncate

LongArray long[] long Return one value, specified by the 'offset' parameter,
ToLong from array of long values.

Parameters:

offset - int value specifies offset inside the input array.


Default '0' value means that first element of array is
returned.

LongObject java.lang long Convert Long Object into primitive long value.
ToLong .Long

LongTo long string Returns a string representation of the long argument


BinaryString in binary format.

LongTo long byte[] Convert long value into byte array. Write eight bytes
ByteArray containing the given long value, in the current byte
order.

LongTo long string Convert long value with milliseconds since the January
DateString 1, 1970, 00:00:00 GMT into string in accordance with
specified format.

Parameters:

format - string value specifies date format pattern.


The default format is "yyyy-MM-dd'T'HH:mm:ssZ".

LongTo long string Return a string representation of the long argument.


DecimalString

LongTo long string Return a string representation of the long argument


HexString as an unsigned integer in hexadecimal format.

LongTo long int Return long value as int value using type casting.
IntTruncate

LongTo long long[] Return one element sized long array which contains
LongArrayWrap only input value.

LongTo long long Return input long value.


LongCopy

LongTo long java.lang Convert primitive long value into Long Object.
LongObject .Long

LongTo long string Return a string representation of the long value as an

Page 213 of 332 HP eIUM (8.0)


Foundation Guide
Chapter 11: Using the Common Codec Framework: Structuring and Transforming your Data

OctalString unsigned integer in octal format.

ObjectArray java.lang java.lang Return one value, specified by the 'offset' parameter,
ToObject .Object .Object from array of Object values.

Parameters:

offset - int value specifies offset inside the input array.


Default '0' value means that first element of array is
returned.

ObjectTo java.lang java.lang Return one element sized array of Objects which
ObjectArrayWrap .Object .Object contains only input value.

ObjectTo java.lang java.lang Return input Object value.


ObjectIdentity .Object .Object

ObjectTo java.lang string Return string representation of the input Object using
String .Object toString() method.

OctalString string byte Parses the string argument as a signed byte value in
ToByte the octal format.

OctalString string int Parses the string argument as a signed int value in the
ToInt octal format.

OctalString string long Parses the string argument as a signed long value in
ToLong the octal format.

OctalString string short Parses the string argument as a signed short value in
ToShort the octal format.

ShortArray short[] short Return one value, specified by the 'offset' parameter,
ToShort from array of short values.

Parameters:

offset - int value specifies offset inside the input array.


Default '0' value means that first element of array is
returned.

ShortObject java.lang short Convert Short Object into primitive short value.
ToShort .Short

ShortTo short string Return a string representation of the short argument


BinaryString as an unsigned short in binary format.

ShortTo short byte[] Convert short value into byte array. Writes two bytes
ByteArray containing the given short value, in the current byte
order.

ShortTo short byte Convert short value into byte value by type casting.
ByteTruncate

ShortTo short string Returns the string representation of the short value.
DecimalString

Page 214 of 332 HP eIUM (8.0)


Foundation Guide
Chapter 11: Using the Common Codec Framework: Structuring and Transforming your Data

ShortTo short string Returns the string representation of the short value in
HexString hexadecimal format.

ShortTo short int Return short value as int value using type casting.
IntCopy

ShortTo short string Returns the string representation of the short value in
OctalString octal format.

ShortToShort short short[] Return one element sized short array which contains
ArrayWrap only input value.

ShortTo short short Return input short value.


ShortCopy

ShortTo short java.lang Convert primitive short value into Short Object.
ShortObject .Short

StringArray string[] string From array of boolean values to one boolean value.
ToString
Parameters:

offset - int value specifies offset inside the input array.


Default '0' value means that first element of array is
returned.

StringTo string boolean


BooleanParse

StringTo string byte[] Encode input string into a sequence of bytes using the
ByteArray given charset, storing the result into a new byte array.

Parameters:

charset - string value specifies a character set to be


used for string encoding.

StringTo string char[] Convert input string to a new character array.


CharArray

StringTo string char Return the first char value from the input string. If
CharTruncate string is null or empty then 'null' char with code
'\u0000' is returned.

StringTo string double Return a new double value initialized to the value
DoubleParse represented by the input string.

StringTo string float Return a new float value initialized to the value
FloatParse represented by the specified String.

StringToString string string[] Return one element sized array of strings which
ArrayWrap contains only input value.

StringTo string string Return new string with copied value of input string.
StringCopy

Page 215 of 332 HP eIUM (8.0)


Foundation Guide
Chapter 11: Using the Common Codec Framework: Structuring and Transforming your Data

StringTo string string Return input string value.


StringIdentity

StringToString string string Return new string with copied value of input string with
LowerCase all characters converted to lower case using the rules
of the default locale.

StringTo string string Return input string value.


StringObject

StringTo string string Return new string with copied value of input string with
StringTrim leading and trailing whitespace omitted.

StringToString string string Return new string with copied value of input string with
UpperCase all characters converted to upper case using the rules
of the default locale.

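The transformers in the table above are referenced from a TX file through the transform namespace prefix. The following fragment is a minimal, hedged sketch (not taken from a product configuration) showing how a parameterized transformer from the table, such as LongToDateString, could be invoked with a <params> block to format epoch milliseconds as a date string:
<!-- Transformers import -->
<import prefix="tx" ns="http://www.hp.com/usage/datastruct/transform"
/>
...
<!-- Illustrative invocation: format milliseconds as a date string -->
<use-transformer from="long" to="string" class="tx:LongToDateString">
<params>
<param name="format" value="yyyy-MM-dd'T'HH:mm:ssZ"/>
</params>
</use-transformer>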
http://www.hp.com/usage/datastruct/transform/attr
Class | Input Type | Output Type | Description

BinaryAttributeToByteArray | com.hp.siu.utils.BinaryAttribute | byte[] | Return byte array value of BinaryAttribute.
BinaryAttributeToByteArrayCopy | com.hp.siu.utils.BinaryAttribute | byte[] | Return new byte array as a copy of the byte array value of BinaryAttribute.
ByteArrayToBinaryAttribute | byte[] | com.hp.siu.utils.BinaryAttribute | Return new BinaryAttribute backed by the input byte array.
ByteArrayToBinaryAttributeCopy | byte[] | com.hp.siu.utils.BinaryAttribute | Return new BinaryAttribute backed by a copy of the input byte array.
ByteArrayToIPAddrAttribute | byte[] | com.hp.siu.utils.IPAddrAttribute | Read integer value from byte array and create IPAddrAttribute.
ByteArrayToIPv6AddrAttribute | byte[] | com.hp.siu.utils.IPv6AddrAttribute | Return new IPv6AddrAttribute initialized from byte array as the encoded ASCII string form of an address.
ByteArrayToRangeAttribute | byte[] | com.hp.siu.collector.aggregator.RangeAttribute | Return new RangeAttribute initialized from byte array as serialized values of ranges.
ByteArrayToUUIDAttribute | byte[] | com.hp.siu.utils.UUIDAttribute | Return new UUIDAttribute initialized from byte array as a 16 byte length UUID.
CharArrayToMutableStringAttributeCopy | char[] | com.hp.siu.utils.MutableStringAttribute | Return new MutableStringAttribute backed by a copy of the input char array.
CharArrayToStringAttributeCopy | char[] | com.hp.siu.utils.StringAttribute | Return new StringAttribute backed by a string copied from the input char array.
DoubleArrayToListAttribute | double[] | com.hp.siu.utils.ListAttribute | Return new ListAttribute with a list of DoubleAttributes initialized from the input double array.
DoubleAttributeToDouble | com.hp.siu.utils.DoubleAttribute | double | Convert DoubleAttribute to primitive double value.
DoubleToDoubleAttribute | double | com.hp.siu.utils.DoubleAttribute | Convert primitive double value to DoubleAttribute.
FloatArrayToListAttribute | float[] | com.hp.siu.utils.ListAttribute | Return new ListAttribute with a list of FloatAttributes initialized from the input float array.
FloatAttributeToFloat | com.hp.siu.utils.FloatAttribute | float | Convert FloatAttribute to primitive float value.
FloatToFloatAttribute | float | com.hp.siu.utils.FloatAttribute | Convert primitive float value to FloatAttribute.
IPAddrAttributeToByteArray | com.hp.siu.utils.IPAddrAttribute | byte[] | Return new byte array with size=4 initialized from IPAddrAttribute's int value.
IPAddrAttributeToInt | com.hp.siu.utils.IPAddrAttribute | int | Convert IPAddrAttribute to int value.
IPAddrAttributeToString | com.hp.siu.utils.IPAddrAttribute | string | Convert IPAddrAttribute to string value using IPAddrAttribute.toString(), which produces the string representation of an IPv4 address.
IPv6AddrAttributeToByteArray | com.hp.siu.utils.IPv6AddrAttribute | byte[] | Return the byte array which the IPv6AddrAttribute is backed by.
IPv6AddrAttributeToString | com.hp.siu.utils.IPv6AddrAttribute | string | Return printable string representation of IPv6AddrAttribute using its toString() method.
IntArrayToListAttribute | int[] | com.hp.siu.utils.ListAttribute | Return new ListAttribute with a list of IntegerAttributes initialized from the input int array.
IntToIPAddrAttribute | int | com.hp.siu.utils.IPAddrAttribute | Return new IPAddrAttribute initialized from the input int value.
IntToIntegerAttribute | int | com.hp.siu.utils.IntegerAttribute | Convert primitive int value into IntegerAttribute.
IntToTimeAttribute | int | com.hp.siu.utils.TimeAttribute | Return new TimeAttribute initialized from the input int value as number of seconds since January 1, 1970, 00:00:00 GMT.
IntegerAttributeToInt | com.hp.siu.utils.IntegerAttribute | int | Convert IntegerAttribute to primitive int value.
ListAttributeToDoubleArray | com.hp.siu.utils.ListAttribute | double[] | Return new double array initialized from the list of numeric Attributes.
ListAttributeToFloatArray | com.hp.siu.utils.ListAttribute | float[] | Return new float array initialized from the list of numeric Attributes.
ListAttributeToIntArray | com.hp.siu.utils.ListAttribute | int[] | Return new int array initialized from the list of numeric Attributes.
ListAttributeToLongArray | com.hp.siu.utils.ListAttribute | long[] | Return new long array initialized from the list of numeric Attributes.
ListAttributeToShortArray | com.hp.siu.utils.ListAttribute | short[] | Return new short array initialized from the list of numeric Attributes.
ListAttributeToStringArray | com.hp.siu.utils.ListAttribute | string[] | Return new string array initialized from the string representations of the Attributes which the ListAttribute encloses.
LongArrayToListAttribute | long[] | com.hp.siu.utils.ListAttribute | Return new ListAttribute with a list of LongAttributes initialized from the input long array.
LongAttributeToLong | com.hp.siu.utils.LongAttribute | long | Convert LongAttribute to primitive long value.
LongToLongAttribute | long | com.hp.siu.utils.LongAttribute | Convert primitive long value to LongAttribute.
LongToTimeAttribute | long | com.hp.siu.utils.TimeAttribute | Return new TimeAttribute initialized from the input long value as number of milliseconds since January 1, 1970, 00:00:00 GMT.
MutableStringAttributeToCharArrayCopy | com.hp.siu.utils.MutableStringAttribute | char[] | Return new char array as a copy of the char array which the MutableStringAttribute is backed by.
RangeAttributeToByteArray | com.hp.siu.collector.aggregator.RangeAttribute | byte[] | Return the byte array returned from RangeAttribute.getByteArray(), which includes the serialized ranges.
ShortArrayToListAttribute | short[] | com.hp.siu.utils.ListAttribute | Return new ListAttribute with a list of IntegerAttributes initialized from the input short array.
StringArrayToListAttribute | string[] | com.hp.siu.utils.ListAttribute | Return new ListAttribute with a list of StringAttributes initialized from the input string array.
StringAttributeToCharArrayCopy | com.hp.siu.utils.StringAttribute | char[] | Return new char array converted from the string value of StringAttribute.
StringAttributeToString | com.hp.siu.utils.StringAttribute | string | Convert StringAttribute to string.
StringToIPAddrAttribute | string | com.hp.siu.utils.IPAddrAttribute | Return new IPAddrAttribute initialized from the input string value containing an IPv4 address.
StringToIPv6AddrAttribute | string | com.hp.siu.utils.IPv6AddrAttribute | Return new IPv6AddrAttribute initialized from the input string containing valid IPv6 address notation.
StringToStringAttribute | string | com.hp.siu.utils.StringAttribute | Convert string to StringAttribute.
StringToUUIDAttribute | string | com.hp.siu.utils.UUIDAttribute | Return new UUIDAttribute initialized from the string value as an ASCII representation of hexadecimal bytes.
StringToUnicodeStringAttribute | string | com.hp.siu.utils.UnicodeStringAttribute | Return new UnicodeStringAttribute initialized from the input string as a UNICODE string value.
TimeAttributeToInt | com.hp.siu.utils.TimeAttribute | int | Return int value as number of seconds since January 1, 1970, 00:00:00 GMT.
TimeAttributeToLong | com.hp.siu.utils.TimeAttribute | long | Return long value as number of seconds since January 1, 1970, 00:00:00 GMT.
UUIDAttributeToByteArray | com.hp.siu.utils.UUIDAttribute | byte[] | Return the byte array which the UUIDAttribute is backed by.
UUIDAttributeToString | com.hp.siu.utils.UUIDAttribute | string | Return a string representation of UUIDAttribute as a printable string containing the byte array in the standard UUID format.
UnicodeStringAttributeToString | com.hp.siu.utils.UnicodeStringAttribute | string | Convert UnicodeStringAttribute to string.

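The attribute transformers in this namespace are typically imported with their own prefix and chained with the simple transformers listed earlier. The following is a brief, hedged sketch (the transformer id is illustrative only) that parses a decimal string and stores the result as a TimeAttribute:
<!-- Transformers imports -->
<import prefix="tx" ns="http://www.hp.com/usage/datastruct/transform"
/>
<import prefix="attrtx"
ns="http://www.hp.com/usage/datastruct/transform/attr" />
<!-- Data types import -->
<import prefix="attr" ns="http://www.hp.com/siu/utils" />
<transformer id="DecimalStringToTimeAttribute" from="string"
to="attr:TimeAttribute">
<chain>
<use-transformer from="string" to="long"
class="tx:DecimalStringToLong"/>
<use-transformer from="long" to="attr:TimeAttribute"
class="attrtx:LongToTimeAttribute"/>
</chain>
</transformer>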

http://www.hp.com/usage/datastruct/transform/nme
Class | Input Type | Output Type | Description

BooleanArrayToBooleanArrayWrapper | boolean[] | com.hp.usage.array.BooleanArray | Return new BooleanArray object which encloses the input boolean array.
BooleanArrayToBooleanArrayWrapperCopy | boolean[] | com.hp.usage.array.BooleanArray | Return new BooleanArray object which encloses a copy of the input boolean array.
BooleanArrayToBooleanMutableArrayWrapperCopy | boolean[] | com.hp.usage.array.BooleanMutableArray | Return new BooleanMutableArray object which encloses a copy of the input boolean array.
BooleanArrayWrapperToBooleanArray | com.hp.usage.array.BooleanArray | boolean[] | Return the boolean array enclosed by the BooleanArray object, or a new copy of this array if the length of the BooleanArray data is shorter than its capacity.
BooleanArrayWrapperToBooleanArrayCopy | com.hp.usage.array.BooleanArray | boolean[] | Return new boolean array which is a copy of the boolean array enclosed by the BooleanArray object.
ByteArrayToByteArrayWrapper | byte[] | com.hp.usage.array.ByteArray | Return new ByteArray object which encloses the input byte array.
ByteArrayToByteArrayWrapperCopy | byte[] | com.hp.usage.array.ByteArray | Return new ByteArray object which encloses a copy of the input byte array.
ByteArrayToByteMutableArrayWrapperCopy | byte[] | com.hp.usage.array.ByteMutableArray | Return new ByteMutableArray object which encloses a copy of the input byte array.
ByteArrayWrapperToByteArray | com.hp.usage.array.ByteArray | byte[] | Return the byte array enclosed by the ByteArray object, or a new copy of this array if the length of the ByteArray data is shorter than its capacity.
ByteArrayWrapperToByteArrayCopy | com.hp.usage.array.ByteArray | byte[] | Return new byte array which is a copy of the byte array enclosed by the ByteArray object.
CharArrayToCharArrayWrapper | char[] | com.hp.usage.array.CharArray | Return new CharArray object which encloses the input char array.
CharArrayToCharArrayWrapperCopy | char[] | com.hp.usage.array.CharArray | Return new CharArray object which encloses a copy of the input char array.
CharArrayToCharMutableArrayWrapperCopy | char[] | com.hp.usage.array.CharMutableArray | Return new CharMutableArray object which encloses a copy of the input char array.
CharArrayWrapperToCharArray | com.hp.usage.array.CharArray | char[] | Return the char array enclosed by the CharArray object, or a new copy of this array if the length of the CharArray data is shorter than its capacity.
CharArrayWrapperToCharArrayCopy | com.hp.usage.array.CharArray | char[] | Return new char array which is a copy of the char array enclosed by the CharArray object.
DoubleArrayToDoubleArrayWrapper | double[] | com.hp.usage.array.DoubleArray | Return new DoubleArray object which encloses the input double array.
DoubleArrayToDoubleArrayWrapperCopy | double[] | com.hp.usage.array.DoubleArray | Return new DoubleArray object which encloses a copy of the input double array.
DoubleArrayToDoubleMutableArrayWrapperCopy | double[] | com.hp.usage.array.DoubleMutableArray | Return new DoubleMutableArray object which encloses a copy of the input double array.
DoubleArrayWrapperToDoubleArray | com.hp.usage.array.DoubleArray | double[] | Return the double array enclosed by the DoubleArray object, or a new copy of this array if the length of the DoubleArray data is shorter than its capacity.
DoubleArrayWrapperToDoubleArrayCopy | com.hp.usage.array.DoubleArray | double[] | Return new double array which is a copy of the double array enclosed by the DoubleArray object.
FloatArrayToFloatArrayWrapper | float[] | com.hp.usage.array.FloatArray | Return new FloatArray object which encloses the input float array.
FloatArrayToFloatArrayWrapperCopy | float[] | com.hp.usage.array.FloatArray | Return new FloatArray object which encloses a copy of the input float array.
FloatArrayToFloatMutableArrayWrapperCopy | float[] | com.hp.usage.array.FloatMutableArray | Return new FloatMutableArray object which encloses a copy of the input float array.
FloatArrayWrapperToFloatArray | com.hp.usage.array.FloatArray | float[] | Return the float array enclosed by the FloatArray object, or a new copy of this array if the length of the FloatArray data is shorter than its capacity.
FloatArrayWrapperToFloatArrayCopy | com.hp.usage.array.FloatArray | float[] | Return new float array which is a copy of the float array enclosed by the FloatArray object.
IntArrayToIntegerArrayWrapper | int[] | com.hp.usage.array.IntegerArray | Return new IntegerArray object which encloses the input int array.
IntArrayToIntegerArrayWrapperCopy | int[] | com.hp.usage.array.IntegerArray | Return new IntegerArray object which encloses a copy of the input int array.
IntArrayToIntegerMutableArrayWrapperCopy | int[] | com.hp.usage.array.IntegerMutableArray | Return new IntegerMutableArray object which encloses a copy of the input int array.
IntegerArrayWrapperToIntArray | com.hp.usage.array.IntegerArray | int[] | Return the int array enclosed by the IntegerArray object, or a new copy of this array if the length of the IntegerArray data is shorter than its capacity.
IntegerArrayWrapperToIntArrayCopy | com.hp.usage.array.IntegerArray | int[] | Return new int array which is a copy of the int array enclosed by the IntegerArray object.
LongArrayToLongArrayWrapper | long[] | com.hp.usage.array.LongArray | Return new LongArray object which encloses the input long array.
LongArrayToLongArrayWrapperCopy | long[] | com.hp.usage.array.LongArray | Return new LongArray object which encloses a copy of the input long array.
LongArrayToLongMutableArrayWrapperCopy | long[] | com.hp.usage.array.LongMutableArray | Return new LongMutableArray object which encloses a copy of the input long array.
LongArrayWrapperToLongArray | com.hp.usage.array.LongArray | long[] | Return the long array enclosed by the LongArray object, or a new copy of this array if the length of the LongArray data is shorter than its capacity.
LongArrayWrapperToLongArrayCopy | com.hp.usage.array.LongArray | long[] | Return new long array which is a copy of the long array enclosed by the LongArray object.
NMEArrayToNMEArrayDeepCopy | com.hp.usage.nme.NME[] | com.hp.usage.nme.NME[] | Return new NME array with a copy of the values from the input NME array. An expensive operation as it creates a deep copy of all the references in the NME.
NMEArrayToNMEArrayShallowCopy | com.hp.usage.nme.NME[] | com.hp.usage.nme.NME[] | Return new NME array as a copy of the input NME array. Simply copies the array values without creating a copy of them.
NMEArrayToNMEArrayWrapper | com.hp.usage.nme.NME[] | com.hp.usage.nme.NMEArray | Return new NMEArray object with the input NME array as underlying storage.
NMEArrayWrapperToNMEArray | com.hp.usage.nme.NMEArray | com.hp.usage.nme.NME[] | Return the NME array enclosed by the NMEArray object, or a new copy of this array if the length of the NMEArray data is shorter than its capacity.
NMEToNMECopy | com.hp.usage.nme.NME | com.hp.usage.nme.NME | Return a new copy of the input NME. This is an expensive operation as it creates a deep copy of all the references in the input NME.
NMEToNMEIdentity | com.hp.usage.nme.NME | com.hp.usage.nme.NME | Return the input NME object.
ShortArrayToShortArrayWrapper | short[] | com.hp.usage.array.ShortArray | Return new ShortArray object which encloses the input short array.
ShortArrayToShortArrayWrapperCopy | short[] | com.hp.usage.array.ShortArray | Return new ShortArray object which encloses a copy of the input short array.
ShortArrayToShortMutableArrayWrapperCopy | short[] | com.hp.usage.array.ShortMutableArray | Return new ShortMutableArray object which encloses a copy of the input short array.
ShortArrayWrapperToShortArray | com.hp.usage.array.ShortArray | short[] | Return the short array enclosed by the ShortArray object, or a new copy of this array if the length of the ShortArray data is shorter than its capacity.
ShortArrayWrapperToShortArrayCopy | com.hp.usage.array.ShortArray | short[] | Return new short array which is a copy of the short array enclosed by the ShortArray object.
StringArrayToStringArrayWrapper | string[] | com.hp.usage.array.StringArray | Return new StringArray object which encloses the input string array.
StringArrayToStringArrayWrapperCopy | string[] | com.hp.usage.array.StringArray | Return new StringArray object which encloses a copy of the input string array.
StringArrayToStringMutableArrayWrapperCopy | string[] | com.hp.usage.array.StringMutableArray | Return new StringMutableArray object which encloses a copy of the input string array.
StringArrayWrapperToStringArray | com.hp.usage.array.StringArray | string[] | Return the string array enclosed by the StringArray object, or a new copy of this array if the length of the StringArray data is shorter than its capacity.
StringArrayWrapperToStringArrayCopy | com.hp.usage.array.StringArray | string[] | Return new string array which is a copy of the string array enclosed by the StringArray object.

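These wrapper transformers are typically imported under an "nmetx" prefix together with the NME schema types, as shown in the NME Arrays section later in this chapter. As a brief, hedged sketch (the "nme:long-array" type reference follows the same naming pattern as the "nme:string-array" and "nme:int-array" types used later in this chapter and is assumed here), a primitive long array could be wrapped into a LongArray like this:
<import prefix="nmetx"
ns="http://www.hp.com/usage/datastruct/transform/nme" />
<import prefix="nme" ns="http://www.hp.com/usage/nme/nmeschema" />
...
<use-transformer from="long[]" to="nme:long-array"
class="nmetx:LongArrayToLongArrayWrapper"/>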
Integration with the Schema Layer (XSD)


In CCF, XSD is used to define structural data types. The XSD-defined structural data types are
supported in the Transformations Layer and can be used as input and output objects. The connection
with XSD-defined data types is made using the <import> element, which imports the specified NME
namespace with its NME types. For example, the following is defined in the TX file:
<!-- Transformers import -->
<import prefix="tx" ns="http://www.hp.com/usage/datastruct/transform"
/>
<!-- NME namespace with types defined in XSD import -->
<import prefix="cdrnme"
ns="http://www.hp.com/usage/nme/nmeschema/CDRSample" />

<transformer id="CdrAToCdrBTransformer" from="cdrnme:CdrA"


to="cdrnme:CdrB">
<transform input="f1" output="f2">
<use-transformer from="int" to="long" class="tx:IntToLongCopy" />
</transform>
...
</transformer>

See "Transformations Syntax Reference (TX)" (on page 199) for more information on the <format>
and <import> elements.

Integration with Format Definitions (XFD)


As mentioned previously, XFD is the language used in CCF for defining data formats. XFD requires
transformations for mapping between the decoded/encoded values and the data structures containing
them. XFD types are imported with the Transformations Definition language <import> element, whose
"ns" attribute matches the "targetNamespace" of the required format. The presence of XFD types in
the "from" or "to" attribute of a <transformer> defines the transformation used for decoding or
encoding, respectively. The following is a sample of such a declaration in a Transformations
Definition (TX) file:
<!-- Transformers import -->
<import prefix="tx" ns="http://www.hp.com/usage/datastruct/transform"
/>
<!-- Format types namespace import -->
<import prefix="xfd" ns="http://www.hp.com/usage/datastruct/xfd/CDR"
/>
<!-- Data types namespace import -->
<import prefix="nme" ns="http://www.hp.com/usage/nme/nmeschema/CDR" />

<transformer id="CdrDataToCdrInfo" from="xfd:CDRData"


to="nme:CDRInfo">
<transform input="timestamp" output="starttime">
<use-transformer from="long" to="long" class="tx:LongToLongCopy" />
</transform>
...
</transformer>

See "Transformations Syntax Reference (TX)" (on page 199) for more information on the
<transformer> and <import> elements.

Transformation Definitions in IUMStudio


When you open a Transformation Definition file (with either a .tx or .xml file extension), IUMStudio
provides a multi-page editor with two tab views, Source and Design, which correspond to the sub-tabs
you can click to switch between views. The Source tab of a TX file provides XML source editing
capabilities with syntax highlighting, code completion, formatting, validation, and code assist. The
Design tab includes graphical controls for presentation, creation, and editing of transformation
elements. Changes you apply on either tab are synchronized and reflected in the other.

For more information about starting IUMStudio, see the section "Starting IUM Studio" in "Format
Definitions in IUMStudio" (on page 166).

Source Tab
The Source tab allows you to edit the TX file in a text-only view where you can edit the XML source
directly.

In this view you also get code pop-up hints, code assist, and validation in the same manner described
for the Source tab in "Format Definitions in IUMStudio" (on page 166).


Design Tab
The Design tab consists of two views: the main view, shown when you open a Transformations
Definition file, and the Transformers view, which is accessed by clicking any of the graphical
transformer entries. The main view contains two tables: Imports and Transformers. The
following diagram provides an orientation to the Design tab main view interface.

Main View

The Imports table contains all imports of types and transformers, declared in the <import>
elements. The table consists of two columns:

l Prefix - The <import> element's "prefix" attribute, a unique editable text field. To edit
the field, first left-click the field, then click again to open the text box if you need to change the
value.
l Namespace - The <import> element namespace identifier. This field is a drop-down list box
containing all known namespaces. To select a new entry, first left-click the field, and then click
again to make the drop-down list available for a selection change.
The Transformers table contains the top-level, named transformers declared by <transformer>
elements. Nested <transform> and <use-transformer> elements are anonymous, do not have an
associated "id", and therefore do not appear in this table. The table consists of three columns:

l Transformer ID - A unique editable text field that indicates the <transformer> element "id"
attribute. This can be edited by left-clicking, then clicking again to make the field available for
editing.


l From type - A drop-down selection list containing all imported <use-transformer> "from"
attribute types, declared in the Imports table, with the corresponding input types of
transformations.
l To type - A drop-down selection list containing all imported <use-transformer> "to" attribute
types, declared in the Imports table, with the corresponding output types of transformations.
This drop-down list contains the same types as the From type list. In these two fields you choose
the input (from) and output (to) types.
The main view version of the Palette contains the following controls:

l Add Import - Adds a new row to the Imports table.

l Create Transformer - Launches the Create Transformer Wizard, which allows you to create a new
transformer by entering the corresponding Transformer id, From type, and To type selections in
the dialog.

After populating the new transformer options and clicking Finish, the transformer appears as
another row in the Transformers table.
l Create Reverse Transformer - Launches the Create Reverse Transformer Wizard for the two
types declared in the selected transformer. When creating a Reverse Transformer, you only need
to populate the Transformer id field. This control is only active when a transformer is selected and
no reverse transformers are found for the selected transformer.


l Delete - Click the Delete button to remove the selected import row or transformer row.

Transformers View

Double-clicking on a transformer row switches the page from the main view to the Transformers
view.


The Transformers view consists of two trees representing the From type (left tree) and the To type
(right tree). The trees contain the type name and the type's attribute names and attribute types (if any).
The types and attributes are linked with arrows representing the <script>, <chain>, <use-
transformer>, and <array> elements (as well as inline arrays and transformers). In this view, the
Palette contains the following controls:

l Delete - Delete the selected arrow for the corresponding mapping.

l Script - Allows drawing a <script> arrow from either "From" type to "To" type, or from "From"
type attribute to "To" type attribute (double-clicking on the <script> arrow switches the current
view to the corresponding script-specific view).
l Chain - Allows drawing a <chain> arrow from either "From" type to "To" type, or from "From" type
attribute to "To" type attribute (double-clicking on the <chain> arrow switches the current view
to the corresponding Chain-specific view).
l Use transformer - Allows drawing a <use-transformer> arrow from either "From" type to "To"
type, or from "From" type attribute to "To" type attribute (double-clicking on the <use-
transformer> arrow switches the current view to the corresponding use-transformer view).
l Inline transformer - Allows drawing a <transformer> arrow from either "From" type to "To" type,
or from "From" type attribute to "To" type attribute (double-clicking on the inline <transformer>
arrow switches the current view to the corresponding transformer view).


l Array - Allows drawing an <array><use-transformer> arrow from either "From" type to "To" type,
or from "From" type attribute to "To" type attribute (double-clicking on the <array><use-
transformer> arrow switches the current view to the corresponding use-transformer view).
l Inline array - Allows drawing a <transform><transformer><array> arrow from "From" type
attribute to "To" type attribute (double-clicking on the <transform><transformer><array> arrow
switches the current view to the corresponding transformer view).
To draw a transformer from the Palette, first select one of the transformer actions listed above
(for example, the following figure shows a <use-transformer> element being drawn):

Next, select the field from the left (From type) tree, and draw the line to the right (To type) tree.


As shown in the above figure, the mouse cursor will change to indicate that an insertion point is
available for you to start drawing a line from a field on the left tree to a node on the right tree.


After the line is drawn, left-click to drop the line in place on the selected field. The line itself then
indicates the transformer.


When you select and double-click any of the transformation arrows from the main view, you can go
to a successive page view for the selected transformer and change its properties, or you can change
transformers from the main view (via the arrow), as shown below.


The following figure shows a Use transformer-specific view (corresponding to the <use-
transformer> element) where a different transformation class can be selected, or the transformer
itself can be deleted via the Palette. To make the field available for editing, first select the field,
then left-click again to make the drop-down list appear as shown below.


When finished, click the Back arrow to return to the Transformers (the From-To tree) view.
Transformer views specific to the Array, Chain, and Script transformations are also accessible by
clicking the corresponding line. The Array-specific view (for the <array> element) is similar in
function to the <use-transformer> view, and allows you to edit the text field that specifies the array
property.

The Chain view represents the <chain> element and lists all declared <use-transformer> and
<transformer> elements. You can draw a <chain> element in the same manner as described above
for a <use-transformer>.


Double-clicking the Chain transformer then takes you to the Chain transformer-specific view.

The Chain view palette contains the following controls:

l Delete - Delete the selected transformer.
l Inline transformer - Adds a new inline <transformer> element to the table.
l Use transformer - Adds a new <use-transformer> element to the table.
When populated with a chain, the view consists of a single table with three columns:


l Transformer ID - This first column has a drop-down selection list containing all known
transformers. If this field is empty, the transformer is considered anonymous (a <transformer>
element without an id attribute); double-clicking the transformer row switches the current view to
the selected transformer view, where you can further edit its properties. If this field is not empty,
the transformer is considered a reference (<use-transformer>); double-clicking the row switches
the current view to the use-transformer view, which allows you to add or edit the transformer's
arguments.
l From type - A text entry field representing the From type.
l To type - A drop-down selection list field representing the To type. First left-click, and then click
again to make the field available for editing.

The view contains a Back navigation button, which you can click to switch the current view back to
the Transformers (the From-To tree) view.

The Script view represents the <script> element and contains a text editor for writing scripts, a drop-
down selection list for selecting the script language (MVEL or Groovy), editable text fields for
specifying the script source URI, and Palette actions for defining script arguments. You can draw
a <script> element in the same manner as described above for a <use-transformer> or <chain>, as
shown below.


Double-clicking the Script transformer then takes you to the Script transformer-specific view.

Click Add argument to open a text-field entry box where you can specify arguments, or Add script,
which opens a script editor in the lower pane where you can enter your script code.


As with the other views, when you are done editing, click the Back navigation button to switch the
current view back to the Transformers (the From-To tree) view. For any of the Transformers view edits
you make, you can always check the Source tab view to see if the built-in Studio validation engine has
found any errors.

You can also check the Error Log for validation issues:


Composite Transformations
Composite Transformations are derived from Simple or other Composite Transformations, producing
a new transformation out of one or more existing transformations. The following types of
Composite Transformations are available:

l Chain Transformations
l Array Transformations
l Structural Transformations
l Script Transformations

Chain Transformations
A Chain Transformation is the simplest Composite Transformation, combining several (element)
transformations sequentially into a chain. For example, given three transformations such as "A ->
B", "B -> C", and "C -> D", combining them into a chain results in a new derived transformation,
namely "A -> D". One restriction of the Chain Transformation is that each element transformation's
output type has to match the next element transformation's input type, except for the last element
of the chain.

The corresponding Transformation Definition language <transformer> descriptor in the TX file
would then be the following:
<import prefix="tx" ns="http://www.hp.com/usage/datastruct/transform"
/>
...
<transformer id="StringToHexString" from="string" to="string">
<chain>
<use-transformer from="string" to="int"
class="tx:DecimalStringToInt"/>
<use-transformer from="int" to="long" class="tx:IntToLongCopy"/>
<use-transformer from="long" to="string"
class="tx:LongToHexString"/>
</chain>
</transformer>

Array Transformations
Array Transformations are repetitive transformations of array elements applying the same
transformation to each element. Assuming a given transformation of "X -> Y", it could be used to
transform an array of type "X" to an array of type "Y", namely: "X[] -> Y[]".

The corresponding Transformation Definition language <transformer> descriptor to achieve this is
the following:
<import prefix="tx" ns="http://www.hp.com/usage/datastruct/transform"
/>
...
<transformer id="IntArrayToLongArrayCopy" from="int[]" to="long[]">
<array>
<use-transformer from="int" to="long" class="tx:IntToLongCopy"/>
</array>
</transformer>

Array Transformations Parameters

An Array Transformation instance can take parameters that specify the range of the elements to
apply the transformation to. These parameters are:

l inputPos - Specifies the starting position in the input array.
l outputPos - Specifies the starting position in the output array.
l length - Specifies the number of array elements to be transformed.
The following is an example of how an array transformer could be invoked in a TX file with optional
parameters specified, expressed in Transformation Definition language format:
<import prefix="tx" ns="http://www.hp.com/usage/datastruct/transform"
/>
...
<use-transformer from="int" to="long"
class="tx:IntArrayToLongArrayCopy">
<params>
<param name="inputPos" value="1"/>
<param name="outputPos" value="2"/>
<param name="length" value="10"/>
</params>
</use-transformer>


In the above example, the "IntArrayToLongArrayCopy" transformer class (defined earlier) transforms
10 integers from the input array, starting from index 1 (the second element), into 10 longs, and
stores them in the resulting array starting from index 2 (the third element). See "Transformations
Syntax Reference (TX)" (on page 199) for more information on Transformation Definition language
syntax, and "<use-transformer> Class Attribute Values" (on page 207) for the allowable
<use-transformer> classes.

Structural Transformations
A Structural Transformation is a Composite Transformation that combines multiple transformations of
the individual structural elements (attributes). For example, assume that structure "X" consists of
attributes of types "A", "B", and "C", and that another structure "Y" consists of attributes of types
"D", "E", and "F". In order to transform structure "X" into "Y", the following attribute
transformations would be required: "A -> D", "B -> E", and "C -> F". Attribute transformations are
applied in exactly the same order as they are configured.

A similar (but a bit more complex) structural transformation could be described in the following way,
in terms of the Transformation Definition Language:
<import prefix="beans"
ns="http://www.hp.com/usage/datastruct/beans/examplebeans" />
<import prefix="tx" ns="http://www.hp.com/usage/datastruct/transform"
/>
...
<transformer id="BeanXToBeanY" from="beans:BeanX" to="beans:BeanY">
<!-- BeanX has a substructure "name", which has "fullName"
string attribute among the others -->
<transform input="name.fullName" output="name">
<use-transformer from="string" to="string"
class="tx:StringToStringIdentity" />
</transform>
<!-- BeanY has a substructure "id", which has "key" integer
attribute among the others -->
<transform input="id" output="id.key">
<use-transformer from="int" to="int" class="tx:IntToIntCopy" />
</transform>
<!-- BeanY has a substructure "id", which has "timestamp"
string attribute among the others -->
<transform input="timestamp" output="id.timestamp">
<use-transformer from="long" to="string"
class="tx:LongToDecimalString" />
</transform>
</transformer>

Array Attributes

Structural Transformations consist of a set of subordinate transformations of the structure
attributes. If structure "X" had attributes "a", "b", and "c" of types A, B, and C respectively, and
structure "Y" had attributes "d", "e", and "f" of types D, E, and F, then the transformation of X into Y
would be the set of transformations A -> D, B -> E, and C -> F applied to the pairs of attributes
{"a","d"}, {"b","e"}, {"c","f"}. If we omit type information, it would look like the following:
{"a" -> "d", "b" -> "e", "c" -> "f"}.


If any of the attributes "a", "b", "c", "d", "e", or "f" were an array, it would be desirable to refer to an
individual element of that array as part of the Structural Transformation. For instance, if attribute
"b" had array type B[] and attribute "e" had regular type E, it would be possible to perform the
following transformation: {"b[1]" -> "e"}, where "b[1]" is the second element of attribute "b"
and has type B. For this, transformation B -> E could be applied. The CCF Transformations Layer
supports such array references for Structural Transformations.

NOTE: Leaf attribute references are supported, as well as more advanced cases including
attributes representing arrays of structures and references to nested structure elements
through a concrete array element, for example: {"b[1].k" -> "e"}. However, the opposite is not
supported.

The following Transformations Definition language descriptor fragment demonstrates how data
from an NME of "CDR" type is transformed into data in an NME of "CallInfo" type, converting three
elements of an array represented by the "data" attribute into a call info structure (attributes
"original", "destination", and "duration"):
<!-- transformers import -->
<import prefix="tx" ns="http://www.hp.com/usage/datastruct/transform"
/>
<!-- NME types import -->
<import prefix="nmes"
ns="http://www.hp.com/usage/nme/nmeschema/ExampleNamespace" />
<transformer id="CDR2UserData" from="nmes:CDR" to="nmes:CallInfo">
<transform input="data[0]" output="original">
<use-transformer from="string" to="string"
class="tx:StringToStringCopy" />
</transform>
<transform input="data[1]" output="destination">
<use-transformer from="string" to="string"
class="tx:StringToStringCopy" />
</transform>
<transform input="data[2]" output="duration">
<use-transformer from="string" to="int"
class="tx:DecimalStringToInt" />
</transform>
</transformer>

NME Arrays (com.hp.usage.array Package)

Structures that implement the com.hp.usage.nme.NME interface use specialized array wrapper
objects that can be used by SNME arrays (defined in the com.hp.usage.array package) to handle
the array types. The following transformations exist for each array type; each entry is read as
input type -> output type, followed after the colon by the <use-transformer> "class" attribute value
(also see "<use-transformer> Class Attribute Values" (on page 207) for a list of the transformer
values taken by the "class" attribute):

l ByteArray -> byte[]: ByteArrayWrapperToByteArray


l byte[] -> ByteArray: ByteArrayToByteArrayWrapper
l byte[] -> ByteMutableArray: ByteArrayToByteMutableArrayWrapper
l ShortArray -> short[]: ShortArrayWrapperToShortArray


l short[] -> ShortArray: ShortArrayToShortArrayWrapper


l short[] -> ShortMutableArray: ShortArrayToShortMutableArrayWrapper
l IntegerArray -> int[]: IntegerArrayWrapperToIntArray
l int[] -> IntegerArray: IntArrayToIntegerArrayWrapper
l int[] -> IntegerMutableArray: IntArrayToIntegerMutableArrayWrapper
l LongArray -> long[]: LongArrayWrapperToLongArray
l long[] -> LongArray: LongArrayToLongArrayWrapper
l long[] -> LongMutableArray: LongArrayToLongMutableArrayWrapper
l FloatArray -> float[]: FloatArrayWrapperToFloatArray
l float[] -> FloatArray: FloatArrayToFloatArrayWrapper
l float[] -> FloatMutableArray: FloatArrayToFloatMutableArrayWrapper
l DoubleArray -> double[]: DoubleArrayWrapperToDoubleArray
l double[] -> DoubleArray: DoubleArrayToDoubleArrayWrapper
l double[] -> DoubleMutableArray: DoubleArrayToDoubleMutableArrayWrapper
l CharArray -> char[]: CharArrayWrapperToCharArray
l char[] -> CharArray: CharArrayToCharArrayWrapper
l char[] -> CharMutableArray: CharArrayToCharMutableArrayWrapper
l BooleanArray -> boolean[]: BooleanArrayWrapperToBooleanArray
l boolean[] -> BooleanArray: BooleanArrayToBooleanArrayWrapper
l boolean[] -> BooleanMutableArray: BooleanArrayToBooleanMutableArrayWrapper
l StringArray -> String[]: StringArrayWrapperToStringArray
l String[] -> StringArray: StringArrayToStringArrayWrapper
l String[] -> StringMutableArray: StringArrayToStringMutableArrayWrapper
l NMEArray -> NME[]: NMEArrayWrapperToNMEArray
l NME[] -> NMEArray: NMEArrayToNMEArrayWrapper
l NME[] -> NMEMutableArray: NMEArrayToNMEMutableArrayWrapper
For example, in order to transform an array of integers into a StringArray attribute, converting
each integer into a hexadecimal string, the standard IntToHexStringCopy and
StringArrayToStringArrayWrapper transformers would be used like the following in the
Transformations Definition language:
<!-- Transformers imports -->
<import prefix="tx" ns="http://www.hp.com/usage/datastruct/transform"
/>
<import prefix="nmetx"

Page 244 of 332 HP eIUM (8.0)


Foundation Guide
Chapter 11: Using the Common Codec Framework: Structuring and Transforming your Data

ns="http://www.hp.com/usage/datastruct/transform/nme" />
<!-- nme types import -->
<import prefix="nme" ns="http://www.hp.com/usage/nme/nmeschema" />
<transformer id="IntArrayToHexStringArrayWrapper" from="int[]"
to="nme:string-array">
<chain>
<transformer from="int[]" to="string[]">
<array>
<use-transformer from="int" to="string"
class="tx:IntToHexStringCopy"/>
</array>
</transformer>
<use-transformer from="string[]" to="nme:string-array"
class="nmetx:StringArrayToStringArrayWrapper"/>
</chain>
</transformer>

As an additional example, if an IntegerArray attribute needed to be transformed into a StringArray
attribute according to the same rules, a pair of standard IntegerArrayWrapperToIntArray and
StringArrayToStringArrayWrapper transformers would accompany the IntToHexStringCopy
transformer like the following:
<!-- Transformers imports -->
<import prefix="tx" ns="http://www.hp.com/usage/datastruct/transform"
/>
<import prefix="nmetx"
ns="http://www.hp.com/usage/datastruct/transform/nme" />
<!-- nme types import -->
<import prefix="nme" ns="http://www.hp.com/usage/nme/nmeschema" />
<transformer id="IntegerArrayWrapperToHexStringArrayWrapper"
from="nme:int-array" to="nme:string-array">
<chain>
<use-transformer from="nme:int-array" to="int[]"
class="nmetx:IntegerArrayWrapperToIntArray"/>
<transformer from="int[]" to="string[]">
<array>
<use-transformer from="int" to="string"
class="tx:IntToHexStringCopy"/>
</array>
</transformer>
<use-transformer from="string[]" to="nme:string-array"
class="nmetx:StringArrayToStringArrayWrapper"/>
</chain>
</transformer>

NormalizedMeteredEvent (Flat NMEs)

Flat NMEs, which are represented by the com.hp.siu.utils.NormalizedMeteredEvent class in eIUM,
provide a flat, hash map-like storage mechanism and data access for structural objects. Because
they are used extensively across eIUM business logic, the CCF Transformations Layer fully supports
such structures. Each attribute type of NormalizedMeteredEvent is represented by a specialized
attribute class (named following the scheme of an N Attribute class for an N data type). The
following list of attributes is currently used in eIUM:

l IntegerAttribute
l LongAttribute
l FloatAttribute
l DoubleAttribute
l BinaryAttribute
l TimeAttribute
l StringAttribute
l UnicodeStringAttribute
l MutableStringAttribute
l UUIDAttribute
l IPAddrAttribute
l IPv6AddrAttribute
l ListAttribute
l RangeAttribute
For each of these attributes, the corresponding Simple Transformations are provided. For example,
BinaryAttribute would have the following four standard Simple Transformations defined:

l BinaryAttribute -> byte[]: BinaryAttributeToByteArray, BinaryAttributeToByteArrayCopy


l byte[] -> BinaryAttribute: ByteArrayToBinaryAttribute, ByteArrayToBinaryAttributeCopy
Below is a Transformations Definition language format example of how a hexadecimal string could
be transformed into a BinaryAttribute, using a pair of HexStringToByteArray and
ByteArrayToBinaryAttribute Simple Transformations:
<!-- transformers imports -->
<import prefix="tx" ns="http://www.hp.com/usage/datastruct/transform"
/>
<import prefix="attrtx"
ns="http://www.hp.com/usage/datastruct/transform/attr" />
<!-- data types imports -->
<import prefix="attr" ns="http://www.hp.com/siu/utils" />
<transformer id="HexStringToBinaryAttribute" from="string"
to="attr:BinaryAttribute">
<chain>
<use-transformer from="string" to="byte[]"
class="tx:HexStringToByteArray"/>
<use-transformer from="byte[]" to="attr:BinaryAttribute"
class="attrtx:ByteArrayToBinaryAttribute"/>
</chain>
</transformer>


A structural transformation involving NormalizedMeteredEvent handling, meanwhile, looks like any
other Structural Transformation, with the exception that nested attributes are not allowed on the
Flat NME side of the transformation:
<!-- transformers imports -->
<import prefix="attrtx"
ns="http://www.hp.com/usage/datastruct/transform/attr" />
<!-- data types imports -->
<import prefix="attr" ns="http://www.hp.com/siu/utils" />
<import prefix="nmes"
ns="http:/www.hp.com/usage/nme/nmeschema/ExampleNamespace" />
<transformer id="NMEToNormalizedMeteredEvent" from="nmes:FooNME"
to="attr:NormalizedMeteredEvent">
<transform input="name.fullName" output="name">
<use-transformer from="string" to="attr:StringAttribute"
class="attrtx:StringToStringAttribute" />
</transform>
...
</transformer>

Script Transformations
Simple and Composite transformations provide comprehensive transformation capabilities for a
wide variety of use cases, and in most cases are sufficient for implementing all required
transformation scenarios. However, when more flexibility is required, using Script Transformations
could be an option. A Script Transformation is a transformation adapter that allows integrating a
general scripting runtime into the CCF Transformation Layer, and executing external scripts in the
context of complex transformation trees. Such a transformation is similar to any regular simple
transformation, except it would run a script inside for its transformation logic.

Script Transformations utilize an external scripting engine to compile and run transformation
scripts, while providing necessary context for the script. As part of such a context, the following
artifacts are injected into the script:

l input: An input value to be transformed.
l output: An output value to be transformed to, which is only available for mutable objects.
l beanFactory: A factory to instantiate new structures.
l transformation parameters: These are the same as for Simple Transformations described in
"Simple Transformations" (on page 196), except that the values injected into the script will be
accessible by the parameter name and the value type will always be string.
To achieve maximum performance and consistency with data transformation operations, all scripts
executed by the CCF Transformations Layer are compiled according to strong data typing
constraints; the use of loose dynamic typing is not allowed in the context of data transformations.
Script Transformations must be written in the MVEL or Groovy scripting languages (MVEL 2.0 and
Groovy 1.8), and must comply with the strong typing requirements. See http://mvel.codehaus.org/
for more information on MVEL, and http://groovy.codehaus.org/ for more information on Groovy.

Below is an example of how a Simple Script transformer can be defined to convert a date-
formatted string into milliseconds:


<transformer id="DateStringToLongScript"
from="string" to="long">
<script language="MVEL">
<args>
<arg name="format" type="string"
default="..."
description="..." />
</args>
<![CDATA[ //MVEL script
java.text.SimpleDateFormat inputFormat =
new java.text.SimpleDateFormat(format);
java.util.Date date = inputFormat.parse(input);
return date.getTime();
]] //CDATA end tag removed here for format reasons
</script>
</transformer>

Moreover, IUM Studio can also compile and validate such Script Transformations written in
Groovy or MVEL. Scripts can also be combined with other Simple or Composite Transformations as
shown in the following example:
<import prefix="tx" ns="http://www.hp.com/usage/datastruct/transform"
/>
...
<transformer id="DecimalStringToHexStringScript"
from="string" to="string">
<chain>
<use-transformer from="string" to="long"
class="tx:DecimalStringToLong"/>
<transformer from="long" to="string">
<script language="MVEL">
<![CDATA[ //MVEL script
return Long.toHexString(input);
]] //CDATA end tag removed here for format reasons
</script>
</transformer>
</chain>
</transformer>

Scripts provide a powerful extension mechanism to Structural Transformations, which can be used
for implementing a partial or full Structural Transformation as demonstrated below:
<import prefix="nmes"
ns="http://www.hp.com/usage/nme/nmeschema/ExampleNmes" />
<transformer id="MyNmeAToNmeB" from="nmes:NmeA" to="nmes:NmeB">
<script language="MVEL">
<![CDATA[ //MVEL script
// "input" has type NmeA and "output" is pre-created and has type
NmeB
if(input.attrA1Flag) {
output.attrB1 = input.attrA1;
}


if(input.attrA2Flag && input.attrA2.attrA21Flag) {


output.attrB2 = Integer.toString(input.attrA2.attrA21);
}
if(input.attrA2Flag && input.attrA2.attrA22Flag) {
if (output.attrB3Flat) {
output.attrB3.attrB31 = Long.toHexString
(input.attrA2.attrA22);
}
}
]] //CDATA end tag removed for format reasons
</script>
</transformer>

CCF: The Schema Layer


The Schema Layer in CCF is responsible for producing the NME Schema, using XSD files (XML Schema
Definition: *.xsd). XSD is an XML format for describing the structure of XML documents. For CCF, the
XSD files describe the NME Schema used by eIUM and its components.

Language Overview
Schema layer definitions are defined using two language formats:

l Beans Schema Language


l SNME Schema Language
The Beans Schema language is intended for defining and modeling data structures of differing
complexity. The XML Schema Definition (XSD) format is the basis for this language in CCF. Whereas
the XSD format is very extensive in its own right, the Beans Schema language inherits only a subset
of essential XSD features. As for the SNME Schema Language, the Structured Normalized Metered
Event (SNME) is a data abstraction used in eIUM for modeling data structures. The SNME data
structures are defined by means of NME types, and there are two languages available in eIUM that
serve this purpose: the eIUM configuration language (found in eIUM configuration files, examples of
which can be manipulated in the eIUM LaunchPad, and which are listed in the eIUM Component
Reference), and the SNME Schema (XML) language. The SNME Schema language serves as an
additional language to complement the configuration language and is intended to become the
standard for SNME data model definitions in eIUM.

For a description of the Beans and SNME Schema language elements and syntax, see "Beans
Schema Syntax Reference (XSD)" (on page 249) and "SNME Schema Syntax Reference (XSD)" (on
page 256).

Beans Schema Syntax Reference (XSD)


The following is a description of the XML elements which form the Beans Schema language. The
essential elements which represent data types and data fields have individual descriptions, while
descriptions for other elements are provided inline.

<schema>

At the top of each types definition there is a schema XML element which encloses all other possible
elements. At the schema element, the target namespace for types in that definition and the alias


for other namespaces are specified as attributes.

Syntax
<schema
xmlns=xs:string
targetNamespace=xs:string
xmlns:namespace-alias*=xs:string>
(annotation?,
import*,
complexType*,
simpleType*)
</schema>

Attributes

Attribute Name Description

xmlns Specifies the namespace for all XML elements the schema contains. Usually
"http://www.w3.org/2001/XMLSchema".

targetNamespace Specifies the namespace which all types defined in this schema belong to.

xmlns:namespace-alias Declares the imported namespace so it can refer to types from that
namespace in the local schema.

For example, the XSD namespace declared as
xmlns:xs="http://www.w3.org/2001/XMLSchema" allows referring to built-in
XSD types with qualified names: {xs:string, xs:int, xs:double, xs:date, ...}.

Elements

Element
Name Description

import Together with xmlns:namespace-alias, used to import other namespaces to
make it possible to refer to types from those namespaces in local types and fields.
In general, this is rather auxiliary information and just assists the information
from xmlns:namespace-alias.

This element has only one attribute, namespace, to specify the imported
namespace by name.

complexType Used to define actual data structures.

simpleType Can be used as an alias for already-existing primitive types, or to specify special
constraints on values of those types.

annotation Used to provide information relevant to the application, as well as documentation.

<complexType>

The primary concept in the Beans Schema language is a data structure or data type represented by
XSD <complexType>. Such data structures can define their own set of data fields using an XSD


element, and extend other complexType structures as well.

Syntax
<complexType name=xs:string>
(annotation?,
sequence(element*)|
complexContent(extension(sequence(element*)))
</complexType>

Attributes

Attribute
Name Description

name Name of the type defined by this complexType. The qualified type name will consist of
this name and targetNamespace defined in the enclosing schema element.

Elements

Element Name Description

element Represents a data structure field.

sequence This element is used as a scope for the elements list.

complexContent Together with the extension element, used to specify a data type to extend.

extension Enclosed within the complexContent element and used to specify a data type to
extend. Has a base attribute to specify the name of the data type to extend.

annotation Used to provide information relevant to the application area, as well as
documentation.

<element>

Represents a data structure field.

Syntax
<element
name=xs:string
type?=xs:string
minOccurs?=xs:int
maxOccurs?=xs:int>
(annotation?,
(complexType|simpleType)?)
</element>

Attributes


Attribute
Name Description

name Name of the field defined by this element.

type The type of the field. Can be qualified and represented by the namespace and type
name from that namespace, or can be a simple type name if the type is from a local
namespace.

minOccurs Specifies whether this field is required or not. The default value is the same as in the
XSD specification, minOccurs=1 (required). Cannot be used when the value element
of an array is defined.

maxOccurs Specifies the maximum number of times an element can occur. The default value is
the same as in the XSD specification, maxOccurs=1.

For cases when the value element of an array is defined, the value of maxOccurs
should be >=1, or unbounded. In other cases, any maxOccurs value other than the
default will be ignored.

Elements

Element
Name Description

annotation Used to provide documentation as well as other information relevant to the
application.

complexType Anonymous complexType definition.

simpleType Anonymous simpleType definition.

<simpleType>

Another important concept is a primitive type definition. It is possible to define your own set of
primitive types which correlate with the application domain model, and use those types for data
modeling in that domain. The primitive type is defined by the XSD simpleType.

Syntax
<simpleType name=xs:string>
(annotation?,
restriction,
(enumeration,
fractionDigits,
length,
maxExclusive,
maxInclusive,
maxLength,
minExclusive,
minInclusive,
minLength,
pattern,
totalDigits,


whitespace)?)
</simpleType>

Attributes

Attribute
Name Description

name Name of the type defined by this simpleType. The qualified type name consists of this
name, and targetNamespace defined in the enclosing schema element.

Elements

Element Name Description

restriction Specifies a base type for this simpleType.

enumeration, fractionDigits, length, maxExclusive, maxInclusive, maxLength, minExclusive,
minInclusive, minLength, pattern, totalDigits, whiteSpace: Standard XSD facets enclosed within
the restriction element and used to define restrictions on XML elements. Refer to the standard
XSD documentation for details
(http://www.w3.org/TR/2004/REC-xmlschema-2-20041028/datatypes.html#rf-facets).

annotation Provides information relevant to the application, as well as documentation.
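
For illustration, the following is a minimal sketch (the USZipCode name is purely illustrative) showing
how a facet such as pattern can be nested within the restriction element, following the standard XSD
convention, to constrain the values of a simple type:
<simpleType name="USZipCode">
<annotation>
<documentation>Five-digit US postal code.</documentation>
</annotation>
<restriction base="xs:string">
<pattern value="[0-9]{5}"/>
</restriction>
</simpleType>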

<annotation>

The annotation element represents information similar to Java annotations, and can be used for
documentation purposes as well.

Syntax
<annotation>
(documentation?, appInfo*)
</annotation>

Elements

Element Name Description

appInfo Information relevant to the application.

documentation Documentation for the enclosing element.

<appInfo>

The appInfo element represents information relevant to a certain application area, which is
specified by the "name" attribute. The format of the enclosed data depends on the application area
and can vary.

Syntax


<appInfo name=xs:string any*>


(any*)
</appInfo>

Attributes

Attribute Name Description

name Identifies the application area of the enclosed data.

Elements

Element Name Description

any Any elements as required by the application context.
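
As a sketch of how <annotation>, <documentation>, and <appInfo> fit together (the persistence
application area and the column element shown here are purely illustrative assumptions, not a
documented application area), an annotated field might look like the following:
<element name="owner" type="xs:string">
<annotation>
<documentation>The owner of the phone book.</documentation>
<appInfo name="persistence">
<column name="OWNER"/>
</appInfo>
</annotation>
</element>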

Arrays Definition

There are two ways in the Beans Schema Language that array types can be defined. These
approaches use similar but somewhat different notation from the XSD language perspective.

Dedicated Named Array Type

<complexType name="PersonInfoArray">
<sequence>
<element name="value" type="tns:PersonInfo" maxOccurs="unbounded"/>
</sequence>
</complexType>

In this approach, the dedicated data type is defined as a named <complexType>, which can be
referenced from other places. The defined data type must enclose only one element representing a
data field. The element's name should be value, and the element's type should correspond to the
type of the array elements. This element is only allowed to have the maxOccurs attribute to specify
the array size.

Anonymous Array Type

<element name="phoneNumbers">
<complexType>
<sequence>
<element name="value" type="xs:string" maxOccurs="unbounded"/>
</sequence>
</complexType>
</element>

The anonymous type is a type that is defined inline; it has no name and thus cannot be
referenced from other places (other than from where it is defined). This is the only difference
compared to the dedicated named array type approach.


Beans Schema Language Usage Example

The following is an example of Beans Schema Language usage, where a simple phonebook
containing different persons' information is modeled. In the sample below, PhoneBook.xsd defines
the PhoneBook data structure, which has owner, description, and contacts fields. The contacts field
points to an array of PersonInfo structures, defined in the same namespace but in another
PersonInfo.xsd file.
<?xml version="1.0" encoding="UTF-8"?>
<schema
targetNamespace="http://www.hp.com/usage/datastruct/beans/contacts"
xmlns="http://www.w3.org/2001/XMLSchema"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:tns="http://www.hp.com/usage/datastruct/beans/contacts">

<complexType name="PhoneBook">
<sequence>
<element name="owner" type="xs:string"/>
<element name="description" type="xs:string/>
<element name="contacts" type="tns:PersonInfoArray"/>
</sequence>
</complexType>
<complexType name="PersonInfoArray">
<sequence>
<element name="value" type="tns:PersonInfo" maxOccurs="unbounded"/>
</sequence>
</complexType>
</schema>

The PersonInfo.xsd sample below defines the PersonInfo data structure, which contains such
information as first name, last name, and phone numbers. FullPersonInfo extends PersonInfo and
adds optional information about the person's address and date of birth. The Address data structure
describing address information is defined in a different namespace, which is imported by means of
the xmlns:loc namespace-alias declaration and import element.
<?xml version="1.0" encoding="UTF-8"?>
<schema
targetNamespace="http://www.hp.com/usage/datastruct/beans/contacts"
xmlns="http://www.w3.org/2001/XMLSchema"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:loc="http://www.hp.com/usage/datastruct/beans/location"
xmlns:tns="http://www.hp.com/usage/datastruct/beans/contacts">

<import
namespace="http://www.hp.com/usage/datastruct/beans/location"/>
<complexType name="PersonInfo">
<sequence>
<element name="firstName" type="xs:string" />
<element name="lastName" type="xs:string" />
<element name="phoneNumbers">
<complexType>


<sequence>
<element name="value" type="xs:string" maxOccurs="unbounded"/>
</sequence>
</complexType>
</element>
</sequence>
</complexType>
<complexType name="FullPersonInfo">
<complexContent>
<extension base="tns:PersonInfo">
<sequence>
<element name="address" type="loc:Address" minOccurs="0"/>
<element name="dateOfBirth" type="xs:string" minOccurs="0"/>
</sequence>
</extension>
</complexContent>
</complexType>
</schema>

In the Address.xsd sample below, the Address data structure is defined. In addition to the Address
structure, the simple type ZipCode is defined, which is used as an alias for the xs:string type.
<?xml version="1.0" encoding="UTF-8"?>
<schema
targetNamespace="http://www.hp.com/usage/datastruct/beans/location"
xmlns="http://www.w3.org/2001/XMLSchema"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:tns="http://www.hp.com/usage/datastruct/beans/location"

xmlns:tags="http://www.hp.com/usage/datastruct/ext/annotation/persistence">

<complexType name="Address">
<sequence>
<element name="address" type="xs:string" />
</element>
<element name="country" type="xs:string">
</element>
<element name="zipCode" type="tns:ZipCode" minOccurs="0" />
</sequence>
</complexType>
<simpleType name="ZipCode">
<restriction base="xs:string" />
</simpleType>
</schema>

SNME Schema Syntax Reference (XSD)


The following are SNME features which are used extensively in eIUM when modeling different data
structures:


l Native support to represent structured data and optional attributes. Structured data is
referenced as NME type.
l One-dimensional array support for primitive types, string type, and NME types.
l Namespace-aware schema to define the fields of data records.
l Finite set of primitive data types that can be used to define the fields in a data record:
Primitive Data Types
Primitive Data Type Description

boolean True or false logical values.

byte 8-bit numeric values.

short 16-bit numeric values.

int 32-bit numeric values.

char 8-bit character values.

long 64-bit numeric values.

float 32-bit floating-point numeric values.

double 64-bit floating-point numeric values.

string Sequence of character values.

<schema>

At the top of each SNME Schema XSD definition, there is a <schema> XML element which encloses all
other possible elements corresponding to modeled NME types, or which assist the modeling. At the
<schema> element, the target namespace for NME types in that definition and the aliases for other
namespaces are specified as attributes.

Syntax
<schema
xmlns = xs:string
targetNamespace = xs:string
xmlns:namespace-alias* = xs:string>
(annotation?,
import*,
complexType*,
simpleType*)
</schema>

Attributes

Attribute Name Description

xmlns Specifies the namespace for all XML elements the schema contains. Usually
"http://www.w3.org/2001/XMLSchema".


targetNamespace Specifies the namespace which all NME types defined in this schema belong
to.

xmlns:namespace-alias Declares the imported namespace so it can refer to types from that
namespace in the local schema. The most essential namespace is
"http://www.hp.com/usage/nme/nmeschema", since it contains the definition
for the SNME built-in primitive types and aliases, which are to be used in
SNME Schema XSD definitions instead of XSD built-in types from
"http://www.w3.org/2001/XMLSchema".

Elements

Element
Name Description

import Together with xmlns:namespace-alias, used to import other namespaces to
make it possible to refer to NME types from those namespaces in local types and
fields. In general, this is rather auxiliary information and just assists the
information from xmlns:namespace-alias. This element has only one attribute,
namespace, to specify the imported namespace by name.

complexType Represents the NME type definition. Can be used also to define an alias for an
existing NME type, as well as for another NME type alias.

simpleType Represents an alias for one of the SNME primitive built-in types. Can be used to
define an alias for existing primitive type aliases as well.

annotation Used to provide documentation about the defined SNME namespace and enclosed
NME types in general.

<complexType>

This <complexType> element is used to define the NME type and its attributes. It can be used to also
define an alias for another NME type.

Syntax
<complexType name = xs:string>
(annotation?,
sequence(element*)|
complexContent(extension))
</complexType>

Attributes

Attribute
Name Description

name Name of the NME type defined by this complexType element. The qualified NME type
name will consist of this name and the targetNamespace defined in the enclosing
schema element.


Elements

Element Name Description

element Represents an attribute which belongs to the NME type to be defined.

sequence This element is used as a scope for the elements list.

complexContent Together with the extension element, used to specify the NME type when the
alias for another NME type is defined.

extension Enclosed within the complexContent element and used to specify the NME type
when the alias for that type is defined. Has the base attribute to specify the
name of the target NME type.

annotation Used to provide information about the NME type represented by this
complexType element.
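
As a minimal sketch of the alias case (the ChargingRecord and tns:CCRType names are illustrative,
and tns is assumed to be bound to the namespace in which CCRType is defined), an alias can be
declared by extending the target NME type without adding any attributes:
<complexType name="ChargingRecord">
<complexContent>
<extension base="tns:CCRType"/>
</complexContent>
</complexType>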

<element>

The element represents an attribute which belongs to the enclosing NME type.

Syntax
<element
name = xs:string
type? = xs:string
minOccurs? = xs:int
maxOccurs? = xs:int>
(annotation?,
complexType?)
</element>

Attributes

Attribute
Name Description

name Name of the NME type's attribute defined by this element.

type The type of the attribute. Can be qualified and represented by the namespace and
name of the NME type from that namespace, or it can be the simple type name if the
type comes from the local namespace.

This attribute can be absent if the NME type is defined as an enclosed anonymous
type. This is the case when it is an attribute of the NME array type.

minOccurs Specifies whether this attribute is required or optional. Default value is minOccurs=1
(required). Cannot be used when the value element of the NME array is defined.

maxOccurs Specifies the maximum number of times an attribute can occur. Default value is
maxOccurs=1. For cases when the value attribute of the NME array is defined, the
value of maxOccurs must be unbounded. Values other than 1 (the default) and
unbounded (for array) are not supported.

Elements


Element
Name Description

annotation Used to provide documentation about the attribute represented by the enclosing
<element>.

complexType Anonymous complexType definition. Can be used for defining the attribute's type
if the type is an NME array.

<simpleType>

This element is used for the SNME's alias feature. Since an SNME has a fixed number of primitive
types, it is only possible to assign aliases for them, which can be accomplished using the
<simpleType> element.

Syntax
<simpleType name=xs:string>
(annotation?,
restriction)
</simpleType>

Attributes

Attribute
Name Description

name Name of the alias defined by this simpleType. The qualified alias name will consist of
this name and the targetNamespace defined in the enclosing schema element.

Elements

Element
Name Description

restriction Specifies a target SNME built-in primitive type for this alias, or another existing
alias of the same kind.

annotation Used to provide documentation about the alias represented by the enclosing
<simpleType>.
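
For example, a minimal sketch (the SessionId name is illustrative) that declares an alias for the
built-in nme:string type might look like the following, assuming the nme namespace-alias
("http://www.hp.com/usage/nme/nmeschema") is declared and imported:
<simpleType name="SessionId">
<annotation>
<documentation>Alias for nme:string used for session identifiers.</documentation>
</annotation>
<restriction base="nme:string"/>
</simpleType>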

SNME Primitive Types Support

The built-in primitive types and predefined aliases of SNMEs are supported in the form of a
separate BuiltinTypes.xsd definition, with the targetNamespace as
"http://www.hp.com/usage/nme/nmeschema".

Primitive Data Type


Primitive Data Type SNME Schema XSD type(*)

boolean nme:boolean


boolean[] nme:boolean-array

byte nme:byte

byte[] nme:byte-array

short nme:short

short[] nme:short-array

int nme:int

int[] nme:int-array

char nme:char

char[] nme:char-array

long nme:long

long[] nme:long-array

float nme:float

float[] nme:float-array

double nme:double

double[] nme:double-array

string nme:string

string[] nme:string-array

(*) It is assumed that the following namespace-alias is defined:

xmlns:nme="http://www.hp.com/usage/nme/nmeschema"

and that the corresponding import element is included as well:

<import namespace="http://www.hp.com/usage/nme/nmeschema"/>

Arrays Support

According to SNME requirements, the SNME Schema XSD language should support one-dimensional
array definitions for primitive and NME types, as well as aliases for them.

Primitive Types Array

Primitive type arrays are discussed in "SNME Primitive Types Support" (on page 260), together
with the primitive types themselves.

NME Array Type

The NME array definition can be accomplished using the following methods:


Dedicated named NME array type


<complexType name="ProxyInfoArray">
<sequence>
<element type="CreditControlApplication:ProxyInfo"
name="value" maxOccurs="unbounded" />
</sequence>
</complexType>

In this approach, the dedicated NME type is defined as a named complexType, which can be
referenced from other places. The defined NME type must contain one value attribute. The type of
the value should correspond to the NME type of the array elements. The maxOccurs attribute with
the unbounded value must be present as well to specify the array size.

Anonymous NME array type


<element name="proxyInfoArray">
<complexType>
<sequence>
<element type="CreditControlApplication:ProxyInfo"
name="value" maxOccurs="unbounded"/>
</sequence>
</complexType>
</element>

The anonymous NME array type is a type that is defined inline; it has no name and thus cannot be
referenced from other places (other than from where it is defined). This is the only difference as
compared with the dedicated named NME array type approach.

SNME Declaration Language Examples

The following are language examples defined using the eIUM configuration language and the SNME
Schema XSD declaration language.

eIUM Configuration

This approach assumes that the schema is defined via eIUM configuration, similar to the properties
format.

Sample 1: SNMESchema_Diameter_CCA.config
[/SNMESchema]

[/SNMESchema/CreditControlApplication]

[/SNMESchema/CreditControlApplication/CreditControlMessage]
Attributes=request,CreditControlApplication:CCRType,optional
Attributes=answer,CreditControlApplication:CCAType,optional

[/SNMESchema/CreditControlApplication/CCAType]
Attributes=session_Id,string
Attributes=result_Code,long
Attributes=origin_Host,string
Attributes=origin_Realm,string


Attributes=auth_Application_Id,long

Attributes=proxy_Info,CreditControlApplication:Proxy_Info[],optional
Attributes=route_Record,string[],optional

[/SNMESchema/CreditControlApplication/CCRType]
Attributes=session_Id,string
Attributes=origin_Host,string
Attributes=origin_Realm,string
Attributes=dest_Realm,string
Attributes=auth_Application_Id,long

Attributes=subscription_Id,CreditControlApplication:Subscription_
Id[],optional

SNME Schema XSD Declaration Language

For CreditControlMessage.xsd:
<?xml version="1.0" encoding="UTF-8"?>
<schema xmlns="http://www.w3.org/2001/XMLSchema"
targetNamespace="../usage/nme/nmeschema/CreditControlApplication"
xmlns:tns="../usage/nme/nmeschema/CreditControlApplication">

<complexType name="CreditControlMessage">
<sequence>
<element name="request" type="tns:CCRType" />
<element name="answer" type="tns:CCAType" />
</sequence>
</complexType>
</schema>

For CCAType.xsd:
<?xml version="1.0" encoding="UTF-8"?>
<schema xmlns="http://www.w3.org/2001/XMLSchema"
targetNamespace="../usage/nme/nmeschema/CreditControlApplication"
xmlns:tns="../usage/nme/nmeschema/CreditControlApplication"
xmlns:nme="../usage/nme/nmeschema">

<import namespace="http://www.hp.com/usage/nme/nmeschema"/>
<complexType name="CCAType">
<sequence>
<element name="sessionId" type="nme:string" />
<element name="resultCode" type="nme:long" />
<element name="originHost" type="nme:string" />
<element name="originRealm" type="nme:string" />
<element name="authApplication_Id" type="nme:long" />
<element name="routeRecord" type="nme:string-array"
minOccurs="0" />
<element name="proxyInfo" minOccurs="0">
<complexType>


<sequence>
<element type="CreditControlApplication:ProxyInfo"
name="value" maxOccurs="unbounded" />
</sequence>
</complexType>
</element>
</sequence>
</complexType>
</schema>

For CCRType.xsd:
<?xml version="1.0" encoding="UTF-8"?>
<schema xmlns="http://www.w3.org/2001/XMLSchema"
targetNamespace="../usage/nme/nmeschema/CreditControlApplication"
xmlns:tns="../usage/nme/nmeschema/CreditControlApplication"
xmlns:nme="../usage/nme/nmeschema">

<import namespace="http://www.hp.com/usage/nme/nmeschema"/>
<complexType name="CCRType">
<sequence>
<element name="session_Id" type="nme:string" />
<element name="origin_Host" type="nme:string" />
<element name="origin_Realm" type="nme:string" />
<element name="dest_Realm" type="nme:string" />
<element name="auth_Application_Id" type="nme:long" />
<element name="service_Context_Id" type="nme:string" />
<element name="subscription_Id" minOccurs="0">
<complexType>
<sequence>
<element name="value" type="tns:Subsription_Id"
maxOccurs="unbounded" />
</sequence>
</complexType>
</element>
</sequence>
</complexType>
</schema>

Extending Beans and NME XSD Types

The Beans and NME XSD language allows extending structural data types using the <extension>
element. For example:
<complexType name="AddressBase">
<sequence>
<element name="state" type="xs:string" />
<element name="city" type="xs:string" />
</sequence>
</complexType>

<complexType name="AddressExt">


<complexContent>
<extension base="this:AddressBase">
<sequence>
<element name="street" type="xs:string" />
<element name="building" type="xs:string" />
</sequence>
</extension>
</complexContent>
</complexType>

The resulting AddressExt Bean type has 4 fields: state, city, street, and building. The NME XSD
language also allows extending NME types using the <extension> element. Yet unlike Beans types,
the NME types are not polymorphic and not compatible; that is, a base type attribute cannot be
assigned a value of the child type. For example:
<complexType name="BaseCDR">
<sequence>
<element name="f1" type="nme:int" />
<element name="f2" type="nme:long" />
</sequence>
</complexType>

<complexType name="RecordCDR">
<complexContent>
<extension base="this:BaseCDR">
<sequence>
<element name="f3" type="nme:string" />
<element name="f4" type="nme:string" />
</sequence>
</extension>
</complexContent>
</complexType>

The resulting RecordCDR NME type has 4 attributes: f1, f2, f3 and f4.

Defining Beans and NME Types Documentation

The Beans and NME XSD language provides support for standard XSD documentation. The following
XML listing shows that for namespaces, types, and fields, documentation definitions can be supplied
using the <annotation> and <documentation> elements.
<complexType name="OneOfCIBERRecord">
<annotation>
<documentation>
This is the type description.
</documentation>
</annotation>
<sequence>
<element name="record" type="CIBERTypes:CIBERRecord">
<annotation>
<documentation>
This is the record field description.


</documentation>
</annotation>
</element>
<element name="separatorChar" type="nme:string"/>
<element name="recordType" type="nme:int"/>
</sequence>
</complexType>
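
Namespace-level documentation can likewise be attached to the enclosing <schema> element. The
following is a minimal sketch (the target namespace shown is illustrative, and the type definitions
are elided):
<schema
targetNamespace="http://www.hp.com/usage/nme/nmeschema/ExampleNmes"
xmlns="http://www.w3.org/2001/XMLSchema">
<annotation>
<documentation>
This is the namespace description.
</documentation>
</annotation>
...
</schema>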

Schema Definitions in IUM Studio


Unlike the Design views of Format Definition (XFD) and Transformation Definition files (TX/XML), the
XSD Schema view is a more basic graphical editor and viewer of an XSD file on the Design tab, with
changes reflected in the Source tab (and vice versa). The following figures show the XSD file view in
IUM Studio (Source and Design tab views, respectively).


On the Design tab, you can edit the XSD file in the graphical view by double-clicking any of the
<sequence> elements from the corresponding box (which itself refers to the <complexType>
element). This drills down to a further graphical editor for the <sequence> elements.


From here, each box represents the <complexType>'s <sequence> and each nested <element>,
where you can edit the <element> fields by left-clicking and editing the text (corresponding to the
"name" attribute).


Furthermore, you can also change the value of the "type" attribute by selecting a different value
from the drop-down list.


In this view you can also rearrange the order of any <element> in the given <sequence> by left-
clicking and dragging the element up or down the sequence box. You can also click the "int" and
"string" boxes to switch the definition of the int and string types.

On any of the Design tab views, select the box representing a corresponding <import>,
<complexType>, <sequence>, or <element>, and then right-click to open a contextual menu where
you can add or insert additional tags.


When finished with any edits, click the back button in the upper left corner to return to the main
Design tab view.

CCF Components
As mentioned earlier, the Codec Layer of CCF is exposed through the eIUM collector by using the
following main components:

l CCFFileEncapsulator
l CCFFileWriterFactory
CCFFileEncapsulator is an encapsulator that uses CCF for decoding input data into NMEs, and
generates NMEAdapters (such as CCFFileWriterFactory). The format of the input data is defined by
format definition (XFD) descriptors, described in "XFD Syntax Reference" (on page 157). Meanwhile,
the mapping of decoded data into NMEs is defined by the Transformations descriptors, described in
"Transformations Syntax Reference (TX)" (on page 199). The CCFFileEncapsulator can decode all
supported data formats (text, raw binary, and XML). The XFD and Transformations descriptors can
be referenced as a file-system URI or an absolute path to the eIUM repository server (see the


FormatDefinitionFile and MapDefinitionFile attributes of the component). CCFFileEncapsulator allows
CCF to be controlled by means of its own configuration (decoder buffer size, and so on).

CCFFileWriterFactory is a part of a datastore that uses CCF for encoding NMEs and placing them
into storage in the required format (storing the structured NMEs into binary or text files). The
output format is defined by the XFD descriptor. Meanwhile, the Transformations descriptors specify
which NME attributes are to be encoded and stored. CCFFileWriterFactory can store NMEs in all data
formats (text, raw binary, and XML). The XFD and Transformations descriptors can be referenced as
a file-system URI or an absolute path to the eIUM repository server (see the FormatDefinitionFile and
MapDefinitionFile attributes of the component). CCFFileWriterFactory allows CCF to be controlled by
means of its own configuration as well.

Meanwhile, the NMESchemaLoader component's configuration can be updated to point to the
Schema Definition (XSD) in the repository.

The TransformationRule component works with NormalizedMeteredEvents ("flat"/standard NMEs)
and NMEAdapters and transforms attribute values between them according to the specified
transformation. The incoming NME to this rule must be an NMEAdapter or a standard NME. You can
use this rule to transform or just copy data values between structured and standard NME parts. See
"Transformations and Business Rules: The TransformationRule" (on page 197) for more information.

The RepositoryContentLoader component can be referenced from a collector and session server
configuration, allowing the loading of content (that is, directory paths) from the repository server
into a collector's %VARROOT% directory at run time.

For additional information about these components, including their attributes and sample
configurations, see the eIUM Component Reference.

Usage Scenarios
The following are some typical usage scenario descriptions:

You have an NME Schema and need to create the XFD format, and then link it with the NME Schema by
Transformations.


The overall process flow is the following:

1. Launch IUM Studio and perform a repository server checkout. As a result, content from the
repository server will be checked out into
"%VARROOT%/apps/IUMStudio/RepositoryServer/datamodel", and a new project will be created
and displayed in the "Checked out projects" pane. The project contains the required
datamodel/nme/3GPP/3GPP-model.xsd file, for example, which is stored locally.
2. Select File -> New -> XFD File to create the new (XFD) Format Definition File, located in
%VARROOT%/apps/IUMStudio/RepositoryServer/datamodel/format/CDR-format.xfd, and
modify it with the Format Definition Editor for XFD files.
3. Select File -> New -> Transformation to create the new Transformations definition file, located
in %VARROOT%/apps/IUMStudio/RepositoryServer/datamodel/transform/CDR-transform.xml,
to map the format from CDR-format.xfd into the required NME type defined in 3GPP-
model.xsd. The created CDR-transform.xml file will contain references to the format and the
NMEType. You can then edit the CDR-transform.xml file in the Transformations Editor.
4. Use IUM Studio to validate the files and check them into the repository. IUM Studio validates
CDR-format.xfd, CDR-transform.xml, and 3GPP-model.xsd, and then uploads all the new
files into the repository server so they can be used by other eIUM processes.
5. Run the Launchpad and create a new collector using the CCFFileEncapsulator.


6. Update the configuration of CCFFileEncapsulator with links to the format and transformations
files in the repository: ccf/format/CDR-format.xml and ccf/transform/CDR-transform.xml.
The link to datamodel/nme/3GPP/3GPP-model.xsd should be configured as a part of the
collector's NMESchemaLoader configuration. For more information on these components, see
the eIUM Component Reference.

NOTE: You can also use the XFD to XSD conversion wizard to achieve these steps. See "Using
the XFD-to-XSD Wizard" (on page 190) for more information.

You have an external XML definition and need to create the Format Definition (XFD), Transformations
(TX), and XSD descriptors.

The overall process flow is the following:

1. After launching IUM Studio, execute the xfdtool's xsd2xfd command to convert the given
Mobile.xsd definition into XFD format and create Mobile-format.xfd as output.
2. Launch IUM Studio and perform a repository server checkout. As a result, content from the
repository server will be checked out into
"%VARROOT%/apps/IUMStudio/RepositoryServer/datamodel", and a new project will be created
and displayed in the "Checked out projects" pane. The project will contain "datamodel/beans",
datamodel/nme, datamodel/format and datamodel/transform directories created locally.
3. Copy the Mobile-format.xfd created by xfdtool into the
%VARROOT%/apps/IUMStudio/RepositoryServer/datamodel/format folder in IUM Studio.
Launch the File -> New -> Convert XFD to XSD wizard to generate default Transformations and
XSD descriptors for datamodel/format/Mobile-format.xfd.


4. The resulting datamodel/nme/Mobile/Mobile-model.xsd and datamodel/transform/Mobile-
transform.tx files would describe NMEs and corresponding transformations that completely
match the original XML (input) schema.
5. Use IUM Studio to validate the files and check them into the repository. IUM Studio validates
Mobile-format.xfd, Mobile-transform.tx, and Mobile-model.xsd, and then uploads the new
material into the repository server.
6. Run the Launchpad and create a new collector using the CCFFileEncapsulator.
7. Update the configuration of CCFFileEncapsulator with links to the format and transformations
files in the repository: ccf/format/Mobile-format.xml and ccf/transform/Mobile-
transform.xml. The link to datamodel/nme/Mobile/Mobile-model.xsd should be configured as
a part of a collector's NMESchemaLoader configuration. For more information on these
components, see the eIUM Component Reference.
You possess the whole suite of XFD, XSD, and matching transformation files. The format has been
changed, and therefore changes are needed to the Format Definition (XFD).

The overall process flow is the following:

1. Launch IUM Studio and perform a repository server checkout. As a result, content from the
repository server will be checked out into
"%VARROOT%/apps/IUMStudio/RepositoryServer/datamodel", and a new project will be created
and displayed in the "Checked out projects" pane. For example, the project will contain the
following files created locally: datamodel/nme/GatewayLog/GatewayLog-model.xsd,
datamodel/format/GatewayLog-format.xfd, and datamodel/transform/GatewayLog-
transform.xml.
2. Modify the GatewayLog-format.xfd file using the Format Definition Editor for XFD files.


3. Use IUM Studio to validate the files and check them into the repository. IUM Studio validates the
GatewayLog-format.xfd, GatewayLog-transform.xml and GatewayLog-model.xsd files.
4. Validation would fail if changes in the XFD have an impact on the Transformations, which would
need corresponding changes as well. In this case, IUM Studio reports the issue in the "Errors Log"
and "Problems" views with detailed descriptions of the expected parsing.
5. If the NME Schema changes during any transformation adjustments, IUM Studio displays a
warning that the business logic rules may require corresponding changes too.
6. When validation is completed without errors, IUM Studio then uploads the updated materials
into the repository server, which can be accessed by other eIUM components as necessary to use
the new changes.
Also see "Format Definitions in IUMStudio" (on page 166) and "Transformation Definitions in
IUMStudio" (on page 225) for more information on working with the IUM Studio Format Definitions
and Transformation Definitions interfaces, respectively.



Chapter 12

Creating Reports
eIUM includes integrated web-based application reporting that provides valuable insight into usage
data in your eIUM deployment. This topic series provides an overview of the reporting web
application, describes its key features, and provides guidelines for using it effectively.

Overview 278

Reporting Components 279

Report Types 279

Report Parameters 280

Basic Reporting 280

Before You Begin 280

Start the Web Application Server 281

Create a Report Collector using LaunchPad 281

Create a Report Collector using the Reporting Wizard 282

Create a Report 284

Run Reports 286

Advanced Reporting 287

Architecture 287

Change the Web Application Server Start-Up Configuration 289

Customize the Reporting Interface 289

Overview
The eIUM Reporting web application enables you to create reports based on data in your eIUM
deployment quickly and easily. Using its web-based interface, you can create most of the common
tabular or graphical reports without having to purchase an expensive third-party database or
reporting package.

The reporting web application enables you to perform the following operations:

l Generate, modify, and delete several types of reports


l Generate reports in tabular and graphical formats
l Generate reports for hourly, daily, weekly, or monthly time periods
l View reports online and as hard copy
l Export report data


Reporting Components
The eIUM Reporting web application consists of three major components:

l Report Collector: Obtains data from eIUM collectors and stores the report data in a database.
eIUM Reporting provides two templates of pre-configured report collectors:
n Report Simple: Generates reports over a single specified time interval. You might use this

template to create a report collector that reports every hour.


n Report Advanced: Generates multiple reports over several time periods. You might use this
template to create a report collector that generates reports every hour, every four hours, and
every eight hours.
l eIUM Reporting Web Application: Simplifies the task of creating a report collector, generating
report templates, and running reports.
l Web Application Server: Obtains data from report collectors and prepares the reports for display.
The web application server obtains a list of the report collectors and their particulars from the
eIUM Configuration Server.

Report Types
eIUM reporting enables you to create several types of reports:

l Summary: Displays the values of a selected parameter sorted in the order you specify. A table of
this report type has at least three columns: a time column, a parameter (dimension) column, and
a measure (sum) column. The report calculates the measure and sorts the results. You might
generate a report of this type to view the top ten source IP addresses for the day.
l Multiple Parameter Summary: Displays the values of several parameters in a single graph. You
can have as many parameters as you want. But as the number of parameters increases, so does
the database size, and consequently the time for queries and the amount of storage space
needed. A table in this report type has at least three columns: a parameter column, a measure
column, and a time column. You might generate a report of this type to view the top five source-
destination IP pairs based on usage.
l Timeline: Displays the values of a variable at selected time intervals, allowing you to see the
behavior of that variable over time. The complexity of this report depends on the configuration
of the source collector. The report has at least three columns: a variable column, a measure
column, and a time column. To group multiple variables, you might create an NME attribute in
your collector that adds a field to provide the group value. You can then select that value in the
report query. You might generate such a report to view the hourly data flow through a router.
l Multiple Timeline: Displays the values of two or more variables on the same graph. The
complexity of this report depends on the data obtained from the source collector. For example,
you may want to report the values of two NME attributes, say attr1 and attr2, and their total.
To do this, you can define the NME of the report collector to consist of attr1, attr2, and
total attributes and use rules in the collector to populate these attribute values. You can then
select the multiple timeline report type to create a report displaying all three values on a single
graph.
l Individual Statement: Displays the raw data for a collector. It does not sum the usage data in
each time interval, as in timeline reports.
l Multiple Parameter: Displays reports with primary and secondary parameters.


l Multiple Parameter Pie Chart: Displays primary and secondary parameters in a pie chart.
l Data Table: Displays parameters, measures, and time fields in tabular form.

Report Parameters
You can determine the content and customize the appearance of your report by selecting and
specifying the following report parameters:

l Report Title: The title of the report. The report title is also used to name the file containing the
report configuration. Choose the title carefully as report titles that are extremely long may
cause problems and those that contain illegal characters for filenames may not be recoverable.
l Usage Parameters: Usage parameters are NME attributes, such as account number, login ID, IP
address, port number, bytes, or packets, that constitute a report.
l Usage Measure: A usage parameter (NME attribute), such as NumBytes, whose values can be
summed.
l Time: A time-based NME attribute such as StartTime or EndTime.
l Time Period: The duration that the report covers. For example, one hour or one week.
l Graph Type: The presentation style of the report, such as horizontal bar chart, vertical bar chart,
or pie chart.
l Top N: The number of values of a usage parameter to include in the report. For example, to
generate a report showing the top five Destination IP addresses, specify 5 as the Top N and
Destination IP as the Report Usage Parameter.

NOTE: The list of usage parameters and usage measure depend on the NME attributes available
to the report collector, which depend on the NME defined for the source collector.

Basic Reporting
Using the web-based interface, you can quickly perform the following tasks to obtain a useful report
on the data collected in your deployment:

l Start the Web Application Server


l Create a report collector
l Create a report (template)
l Run a report
You can get a quick, but basic, view of data in your deployment by performing these tasks, but the
web-based reporting tool can provide much more value if you take advantage of its flexibility and
integration with eIUM. Refer to "Advanced Reporting" (on page 287) for details.

Before You Begin


Before you begin, ensure that the following requirements are met:

l eIUM is installed.
l eIUM Reporting web application is activated.


l At least one collector in your eIUM deployment must have the ability to be queried. That is, it
must be configured with the JDBCDatastore or FileJDBCDatastore component.
Refer to the Installation Guide for instructions on installing and activating the eIUM Reporting web
application.

Start the Web Application Server


To start the web application server (if not already started):

1. In Launchpad, in the deployment pane or the deployment map, select the Web Application
Server (the Web Application Server is on the host where you activated eIUM Reporting).
2. Click Actions -> Start Web App Server.

Create a Report Collector using LaunchPad


A report collector reads aggregated usage data from a collector in your eIUM deployment and
generates reports about that data. The report collector uses the ReportEncapsulator and
ExternalJDBCDatastore components.

To create a report collector:

1. In LaunchPad, select File -> New or click the new collector icon in the toolbar.
2. Select a factory template. The list of pre-configured collector templates is displayed.
3. Select Simple Report Collector to create a report collector that generates reports over a
single specified time interval or select Advanced Report Collector to create a report collector
that generates multiple reports over several time periods.
4. Click Next. The collector setup pane is displayed.


5. Specify the host on which the report collector should run, the name of the collector, its
description, the name of the source collector, and the name of the source collector's scheme.
6. Verify the default datastore configuration.
7. Click Create.
8. Start the report collector from Launchpad.

Create a Report Collector using the Reporting Wizard


A report collector reads aggregated usage data from a collector in your eIUM deployment and
generates reports about that data. The report collector uses the ReportEncapsulator and
ExternalJDBCDatabase components. To create a report collector:

1. In Launchpad, select Tools -> Web Applications. This launches the web browser and loads the list
of eIUM Web Tools configured for your deployment. If the browser does not launch
automatically, run the browser and enter the URL: http://<hostname:port>/reporting, where
hostname refers to the system on which the eIUM Reporting web application is running, and
port is the default (8159) or the port number you specified during activation.
2. Click New Report Collector at the upper right of the screen. The left pane displays a directory
tree of the hosts in the deployment.
3. Select a collector and click on a scheme. The Report Collector Creation screen is displayed.


4. In the Report Collector Creation window, specify whether the report collector is for use by eIUM
Reporting or an external reporting package. If you choose to create a report collector for an
external reporting package, the report data is stored as standard SQL types that can be
queried by an external application.
5. Specify a name for the report collector.
6. Specify the usage parameters, usage measure, and time and click Next. The Report Collector
Properties window is displayed.


7. Select the default values or enter new properties. Refer to the online help for details about
each property.
8. Click Create to create the collector.
9. Start the report collector via Launchpad.

Create a Report
Creating a report involves selecting a report collector, specifying the collector properties and graph
options, and saving them in the form of a template. You can later generate (run) reports based on
this template. To create a report:

1. In Launchpad, select Tools -> Web Applications. This launches the web browser and loads the list
of eIUM Web Tools configured for your deployment. If the browser does not launch
automatically, run the browser and enter the URL: http://<hostname:port>/reporting, where
hostname refers to the system on which the eIUM Reporting web application is running, and
port is the default (8159) or the port number you specified during activation.
2. From the eIUM Reporting home page, click Create. The Collector Selection window is displayed,
listing only report collectors (collectors in your deployment that can generate reports).


3. In the collector selection window, select a report collector and click Next. The Database Table
and Report Type Selection window is displayed.

4. Select a database table and report type, and click Next. Refer to the online help or "Report
Types" (on page 279) for a description of each report type. The Report Parameters window for
that report type is displayed. This window contains parameter fields specific to the chosen
report type.
5. In the Report Parameter window, specify the parameter values. Refer to the online help or
"Report Parameters" (on page 280) for a description of each parameter.
6. Some reports allow you to customize the graph output. To do so, click Set graph options. The
   Graph Options window is displayed.

7. Specify the graph options.


8. To select the color, type the name of the color or click Select Color to choose a color from a
color map.
9. Click Save.

Run Reports
To populate the report based on your specifications with usage data:

1. From the eIUM Reporting home page, click Run.


2. In the calendar, select a day, a week, or a month. The selected period is highlighted.

3. Optionally, select overrides to the hour or time period.


4. Click View Report.


5. When the report displays, click Data, Print, or Export Data for additional operations.

Advanced Reporting
The reporting tool can provide valuable insight into the data collected in your eIUM deployment
when you use the tool with an understanding of its architecture and take advantage of its flexibility.
eIUM Reporting, which receives eIUM data as input and produces reports as output, consists of three
major components as shown in the following figure:

[Figure: IUM data feeds the IUM report data collector, which supplies the Java servlet server running the IUM Reporting web application.]

There are two important points to consider:

- The power of the reporting tool lies in its tight integration with eIUM. The reporting tool relies on collectors to capture data from network elements, reduce this mass of data to a manageable size, and store the data so that it can be directly used by the reporting tool. The tool takes advantage of a collector's aggregation capability to save data storage space. It also takes advantage of the variable flush time of collectors to construct report time spans.
- The flexibility of the reporting tool lies in the report templates that it provides in the form of report types. By selecting a report type, you can define the manner in which eIUM data is to be queried. Also, by specifying report parameters, you can define the way in which the results of the query are to be presented. The web application server uses JavaServer Pages (JSP) technology to display the report. You can easily customize its look-and-feel by editing the JSP file.
The following subsections discuss these issues in detail.

Architecture
The architecture of the reporting tool is best illustrated by a sample eIUM deployment. Consider a
model deployment consisting of a usage collector, a session collector, and a correlator, as shown in
the below figure.

The usage collector obtains usage data from a network element and populates the following NME
attributes: source IP address (SrcIP), destination IP address (DstIP), destination port (DstPort),
number of bytes (NumBytes), and end time (EndTime). Typically, the usage collector stores data in
the binary format for speed and efficiency.

Similarly, the session collector obtains data from a session source and populates the following NME
attributes: login ID (LoginID), account number (AcctNbr), source IP address (SrcIP), start time
(StartTime), and end time (EndTime). The correlator combines records from the usage and session
collectors, and stores the correlated records in its datastore.


[Figure: A usage collector (NME attributes: SrcIP, DstIP, DstPort, NumBytes, EndTime) and a session collector (NME attributes: LoginID, AccountNbr, SrcIP, StartTime, EndTime) feed a correlator (NME attributes: LoginID, AccountNbr, SrcIP, DstIP, DstPort, NumBytes, StartTime, EndTime); report collectors read from these collectors and supply the Tomcat Java servlet server and the reporting application.]

The report you view depends on the report collector you select, which in turn depends on its source
collector. As the figure shows, the NME attribute values available to a report collector depend on the
NME attributes populated by its corresponding source collector.

You can take advantage of a collector's ability to aggregate usage records to save data storage
space or prepare the records for consumption by the reporting tool. You can employ several rules in
a collector's aggregation scheme to reduce the number of records stored. For example, one rule
might look for well-known ports in the source port field. If found, the rule swaps the destination port
and destination IP address with the source port and source IP address, effectively halving the
number of usage records. Another rule might condense ranges of IP addresses into one IP address,
allowing you to put all the usage of a customer under one IP address or show all the traffic going
through one router.

Each report collector is configured with an external JDBC datastore, set up to write to the database,
as shown in the following figure.

The reporting tool also takes advantage of the variable flush time of eIUM collectors to construct
time spans in reports. Each report collector can have a distinct flush time and aging policy. One
report collector may copy NMEs directly into the database in five-minute intervals. The other report
collector might aggregate the data further and flush to the database once every hour.


When you create a report via the reporting interface, the web application server obtains the list of
report collectors available (every collector with an external JDBC datastore) from the configuration
server. For each report collector, it also obtains the NME attributes available to that collector.

When you select the report collector and report type, and specify the report parameters and graph
options via the reporting interface, the web application server saves your specifications as an SQL
query against the database. The query serves as a template. You can then run reports based on the
template.

For example, a summary report takes the form of the following query:

Select <Parameter>, sum(Measure)
from <table name>
where End_Time > <begin time of query calculated from time range and end time>
and End_Time < <end time as selected by user>
group by <Parameter>
order by <sort order as selected by user>
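
As a concrete illustration of this template, a summary of bytes by source IP address would be saved as the query shown below. This sketch reuses the table and column names from the multiple parameter example that follows; in practice the names depend on your report collector and NME schema:

Select Src_IP, sum(Num_Bytes)
from reportcollector_1
where End_Time > 1008261120
and End_Time < 1008264720
group by Src_IP
order by 2 DESC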

When you run a report, the report engine queries the database to obtain a report based on dynamic
usage data, and then presents the report in the graphic form that you selected (for example, pie
chart or bar chart).

For example, a multiple parameter summary report based on SrcIP and DstIP is as follows:

Select sum(Num_Bytes), Src_IP, Dst_IP
from reportcollector_1
where End_Time > 1008261120
and End_Time < 1008264720
group by Src_IP, Dst_IP
order by 1 DESC

This query produces a report showing the top source and destination IP combinations according to
reportcollector_1 within the specified time range (12/13/01 9:32 AM - 12/13/01 10:32 AM).

In summary, the type of report you can view depends on the report collector you select, which in
turn depends on its source collector. You can employ rules in a collector to process the data and
specify the flush interval to control its flow so that it can be used for reporting. Finally, you can
define the report such that it presents the values of NME attributes available to that report
collector in insightful ways.

Change the Web Application Server Start-Up Configuration


You can change the start-up configuration of the web application server in two ways:

- Change the host configuration, which affects all processes running on that host, according to the instructions described in "Managing Host Systems" in the eIUM Administrator's Guide.
- Override the host configuration according to the instructions for "Overriding Default Host Properties" in the eIUM Administrator's Guide.

Customize the Reporting Interface


The reporting interface uses JavaServer Pages (JSP) technology. You can customize the look and
feel of the interface, including the appearance of the generated reports, by editing the following
JSP file:


- Windows: C:\SIU\var\webappserver\webapps\reporting\en_US\run\ReportForm.jsp
- UNIX: /var/opt/<SIU_instance>/webappserver/webapps/reporting/en_US/run/ReportForm.jsp

NOTE: Back up the JSP file before making any changes.



Chapter 13

Auditing for Revenue Assurance


Auditing in eIUM is the process of recording and analyzing eIUM metrics with the goal of supporting
revenue assurance. This topic series provides details about the audit process and instructions for
using the audit subsystem effectively.

Introduction 293

Revenue Assurance 293

Auditing Basics 294

Auditing Overview 295

Enable Auditing 295

Disable Auditing 297

Monitor Auditing 297

Set Audit Log Level 297

View Audit Log 298

View Audit Data 299

Audit Data Collection 301

Audit Points 301

Audit NMEs 303

Audit Operations 307

Session Audit 308

Correlation Audit 310

Audit Data Processing 311

Audit Verification 311

AuditAdornmentRule 312

Audit Data Storage 313

Cautions 313

Audit Tables 313

Audit History Tables 315

Audit Data Added to Collector History Tables 316

Customizing Audit 320

Custom Audit Data Collection 320


Custom Audit Verification 320

Analyzing Audit Reports 320

Create and Start the Audit Report Server 321

View Daily Audit Reports 322

Create Audit Reports 325

Run Audit Reports 325

Introduction
The convergence of voice and data traffic and of wireline and wireless networks presents service
providers with some difficult challenges as well as attractive revenue opportunities. As profits from
traditional communication services diminish, providers must offer new, value-added services to
ensure profitability. More importantly, providers must be able to bill their customers in flexible
billing models, quickly and accurately.

Many providers have the desire and the ability to offer new, consolidated voice and data products
with bundled discounts and one-rate packages combining different services. To pull these services
together, however, service providers must combine components from several different partners,
suppliers (switch vendors, gateway vendors, billing vendors, and so on), and their own internal
systems. Integrating distributed, product-oriented systems with new customer-oriented systems is
a challenging problem for which the eIUM software provides a solution.

The eIUM usage mediation and management platform supports pre-paid and post-paid billing
models for wireline and wireless networks carrying voice and data services. eIUM takes advantage
of a scalable, distributed architecture to collect, aggregate, and correlate usage data from your
service infrastructure and present the results to your business support systems for billing or
strategic market analysis.

Revenue Assurance
The complexity of usage-based billing and the management of IP services also underscores the
problem of revenue leakage. Although exact figures of revenue loss are rarely public data,
estimates of 5 to 15 percent of total revenue or of 100 to 200 billion dollars per year are not
uncommon. With so much at stake, stopping revenue leakage is crucial to providing IP services
profitably.

The realization that a significant fraction of potential revenue is being lost has led many
organizations to scrutinize their billing processes and systems. As a result, providers increasingly
recognize the need for an end-to-end revenue assurance program that can ensure the accurate
metering and billing for service usage in this heterogeneous, distributed environment. Revenue
assurance is the set of organizational processes designed to verify the completeness, accuracy, and
integrity of the capture, recording, and billing of revenue producing events.

Complete revenue assurance programs involve the participation of every department affecting the
revenue cycle, including product development, pricing strategy, sales and marketing, network
management, prepaid services, customer care, fraud and churn management, provisioning,
receivables management, accounting, and reporting. This comprehensive view of revenue assurance
is an ambitious goal for providers that are just beginning to develop such a program in their
organizations.


To deploy a complete revenue assurance solution, the organization must typically coordinate
several complex yet fundamental processes:

- An audit process to maintain data integrity, avoid data loss, and provide business intelligence for product planning.
- An error recovery process to repair incomplete or corrupt usage data and process the corrected data in order to retrieve billable events from otherwise unusable records.
- A testing process to originate test calls that validate the record processing end-to-end, from the switch to the billing system.
- A profile analysis process to compare the actual workload against the expected workload and billing profiles, and warn of discrepancies.
- A predictive analysis process to predict the evolution of the system from end to end, and assess the impact of future changes by combining usage and audit information with market research.
- A billing verification process to ensure that customers were billed according to their service agreements for all the services used.
- A real-time analysis process to monitor incoming usage information in order to detect fraud and manage prepaid calls.
Auditing and error recovery are the two cornerstones of any revenue assurance strategy because
they make all the other processes possible. Without auditing, you cannot find revenue leakages
between processes and systems. Without error recovery, you cannot repair the revenue leakages
you find.

The eIUM audit subsystem serves as the foundation of your revenue assurance strategy by
supporting both auditing and error recovery processes. The audit subsystem enables you to
generate accurate and complete bills that can withstand an external review or a customer
challenge.

Auditing Basics
Simply put, the eIUM audit subsystem works like a surveillance camera; it starts monitoring only
when you turn it on and then runs continuously in the background. You can decide what to record
and how to store the data. The stored data represents an audit trail that you can later query and
analyze to inform your business decisions.

The eIUM audit subsystem is defined by several conceptual building blocks:

- A dataset is a set of usage records related by a natural boundary such as the end of an input file or a time-based data flush. The notion of datasets enables the audit subsystem to track meaningful batches of usage data as they flow through eIUM collectors.
- An audit point is a programmatic hook in a collector component that represents the site at which audit data can be captured. Audit points enable the capture of collector metrics using audit operations. When usage data flows into, through, or out of a component (that is, when a usage NME traverses an audit point), it triggers audit operations.
- An audit operation extracts a specific metric or performs specific calculations to populate an audit NME attribute. For example, the audit point in the FilterRule of an aggregation scheme might trigger the audit operation that counts the number of bytes in usage NMEs filtered out by this rule. This operation would accumulate the number of bytes obtained from the filtered usage NME attribute and store the total in an audit attribute.


- An audit attribute holds a single metric; a set of audit attributes holds audit data in the NME format just as standard eIUM NMEs hold usage data. An audit NME can only be used for auditing. The eIUM audit subsystem employs four types of NMEs: Input Source Audit NMEs, Input Dataset Audit NMEs, Output Dataset Audit NMEs, and Exception Audit NMEs (as explained later in this chapter).
- An audit rule checks audit attributes to verify the correctness of eIUM usage data processing. For example, an audit rule may verify that for a given collector, the number of incoming NMEs equals the number of outgoing (filtered, processed, aggregated) NMEs.

Auditing Overview
The eIUM audit subsystem captures, processes, verifies, and stores such metrics as the number of
NMEs flowing into a collector, the number of NMEs aggregated, NMEs filtered, and NMEs passed on
to a downstream collector or application, and provides tools with which you can query the audit trail,
generate audit reports, and analyze audit data. These processes together constitute the function
known as auditing.

Careful and considered interpretation of audit data is a key part of the auditing function, but you
must also respect the limitations of the audit subsystem. eIUM audit reports can contain detailed
metrics about every input source, collector, and dataset, presenting a huge quantity of data for a
given deployment. However, the data itself cannot provide the context for analysis or produce
meaningful conclusions about revenue leakage. You must interpret and use it along with other
information sources, such as network elements and billing systems, for example, to effectively
prevent revenue loss. Therefore, auditing is a responsibility shared between the audit subsystem
and the audit administrator.

The audit subsystem performs the following tasks:

- Audit Data Collection
- Audit Data Processing
- Audit Data Storage
- Audit Report Generation

The audit administrator, in turn, performs the following tasks:

- Enables Auditing
- Monitors Auditing
- Analyzes Audit Reports

NOTE: The eIUM audit subsystem does not check data at the source, before it enters eIUM, or in
the billing system, after the data leaves eIUM. This is best performed by an end-to-end revenue
assurance tool that analyzes the audit metrics collected from the source(s), eIUM, and billing.

The following sections describe the steps involved in performing each task. Subsequent sections
provide detailed background information.

Enable Auditing
The audit subsystem simplifies the task of enabling audit by providing three predefined audit sets:
NMECount, Session, and Correlation. Because these audit sets are common for most deployments,
you need only select them to enable typical audit data collection and verification.


1. In the deployment pane or the deployment map, select the collector for which you want to
enable auditing.
2. From the menu, select Action -> Edit. This displays the Collector Configuration screen.
3. Select the Audit tab.
4. Click on the Audit Enabled check box as shown below. Input source auditing is enabled by
default.

5. Under Dataset Audit, click on the NME Count check box to verify and reconcile the NME counts
for each input dataset processed by this collector (see "Audit Data Processing" (on page 311)
for more details).
6. Under Scheme Audit <scheme-name>, click on the NME Count check box to verify and reconcile
the NME counts for this scheme.
7. Under Scheme Audit <scheme-name>, click on the Session check box to enable session auditing
for this scheme (see "Session Audit" (on page 308)). Enable session auditing only if the
AggregationScheme performs session processing (uses the SessionMatchRule).
8. Click Apply and then click OK.
9. Restart the collector to apply the changes.


Disable Auditing
To disable auditing, return to the collector's audit configuration screen and click the (checked) Audit
Enabled check box. When auditing is disabled, audit data is no longer collected but existing data is
not deleted.

Monitor Auditing
The eIUM audit subsystem includes an operational audit component that can help you resolve
discrepancies in the audit trail. This component captures such operational events as switch or
collector unavailability and errors in the source file.

Operational audit events are captured in two log files:

- Collector Log: The collector log file contains various eIUM system messages that refer to errors, warnings, informational notes, and other operational results. The collector log file also includes audit attributes and the results of the audit verification processes. The collector log file is located at C:\SIU\var\log\<collector>.log on Windows NT and at /var/opt/SIU/<collector>.log on UNIX.
- Audit Log: The audit log file contains audit-related messages such as errors, informational notes, and results depending on the log level configuration, as well as audit data. It only tracks events relevant to the verification and reconciliation of the audit trail. The audit log file is located at C:\SIU\var\<collector>\audit.log on Windows NT and at /var/opt/SIU/<collector>/audit.log on UNIX.
Both logs can be monitored by an automated system, such as OpenView VPO, or by an administrator.

NOTE: The term Aggregated has a more specific meaning in the audit log than in the collector log.
In the collector log, the message Aggregated # NMEs... refers to all the NMEs processed
(merged or filtered) by a scheme. In the audit log, the SchemeNMEsAggregated audit attribute
refers only to the NMEs merged.

Using operational audit involves specifying the log level and viewing the audit log, as described in the
following sections.

Set Audit Log Level


To specify the audit log level using the Launchpad interface:

1. In the Launchpad deployment pane or the deployment map, select the collector whose audit log
level you want to set.
2. From the menu, select Action -> Set Audit Log Level.


3. This displays the Audit Log Level dialog.


4. Select one of the four log levels:
   - OFF: Disabled
   - SEVERE: Messages that indicate a critical error. An example would be messages about being unable to connect to an input source.
   - WARNING: Messages that provide information about an unexpected event that might lead to further problems.
   - INFO: Messages that indicate conditions where the component cannot operate properly or carry out a request. These error conditions may eventually prove fatal but the component will continue to run.
5. Click OK.
6. Restart the collector to apply the changes.

View Audit Log


To view the audit log using the Launchpad interface (via the Launchpad deployment pane or the
deployment map), select the collector whose audit log you want to view. From the menu, select
Actions -> Open Audit Log Viewer.


The viewer continuously updates the display of audit data and messages. Check Pause to hold
updates while scanning the current contents of the viewer. Use the search function to find specific
information. For example, type a time of 14:07 in the Enter Search String field, select Down, and
click Find Next to display the first entry. Click Find Next to search for additional occurrences. Check
Wrap Text or use the horizontal scroll bar.

View Audit Data


eIUM provides two interfaces for viewing audit data: LaunchPad and the siuquery command. To view
audit data using the Launchpad interface:

1. In the Launchpad deployment pane or the deployment map, select the collector whose audit
data you want to view.
2. From the menu, select Actions -> Query Data.


3. Under Select a Scheme, select Audit Source Summary from the menu.
4. Select an audit dataset and click View Collected Data. The audit data window shows the
summary, audit source data, and audit exception data for NMEs in the dataset.
You can also use the siuquery tool (from the command line or in a script) to query collectors for their
audit data. The collector must be running, but does not need to be on the local system. This tool
displays NME data in the datastore or in collector memory. The siuquery command has the following
options for querying audit data.

siuquery Audit Options


Option            Description

-IdQuery          Specifies that the query is based on datasets rather than time. Use the -s and
                  -e options to specify the start and end of the range. If you do not specify a
                  range, all datasets are implied.

-IdRange          Displays the range of dataset identifiers in the datastore. Use this to help
                  determine the values to provide in the -s and -e options.

-s <dataset id>   Specifies the starting dataset or starting NME time of a range.

-e <dataset id>   Specifies the ending dataset or the ending NME time of a range.

-auditSources     Displays a list of all known input sources that contributed to a dataset or a
                  flush set. This includes information such as source file name, source type,
                  starting offset, ending offset, and EOF flag.

-auditExceptions  Displays a list of audited exceptions. An audit exception identifies audited
                  discrepancies in the usage data or in its processing.

-auditInputSet    Displays audit data about the input dataset in its entirety (the total number
                  of input dataset NMEs, Dataset Id, and so on).

-auditOutputSet   Displays audit data about an output dataset produced by the specified scheme.
                  This includes information such as the number of NMEs aggregated, number of
                  NMEs filtered, number of NMEs out, and so on.

See the eIUM Command Reference for additional information about siuquery.
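
For example, a query for the input sources that contributed to datasets 5 through 10 might be sketched as shown below. The options are those listed in the table above; the way the target collector is identified (shown here only as a placeholder) and any other required arguments are described in the eIUM Command Reference:

siuquery <collector> -IdQuery -s 5 -e 10 -auditSources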

Audit Data Collection


The audit subsystem collects key metrics about usage data flowing into, through, and out of a
collector. These metrics can help you trace the flow of usage data through a given deployment
from the network elements through the eIUM mediation platform on to the consuming applications.

The audit data collection process is defined by three logical entities:

- Audit points: where to collect audit data
- Audit operations: how to collect audit data
- Audit NMEs: what information to collect
The following sections describe these entities in detail.

Audit Points
Audit points are configurable, programmatic hooks that trigger audit operations when usage data
flows through an eIUM collector. Audit points are present at the encapsulator and aggregator
components of a collector, as illustrated in the following figure:


[Figure: A collector's encapsulator and aggregator components contain audit points; the datastore does not.]

Audit points are not embedded in the datastore because the eIUM audit subsystem stores audit
data in the datastore, and so it cannot provide an independent review of its operation. Nevertheless,
certain operations of the datastore are written to the audit log. The datastore generates log
messages and operational audit messages about the persistence of usage data to the backing
store and the movement of usage data (due to aging, for example).

Encapsulator Audit Points


Because they interface directly with the input sources, audit points in the encapsulator are ideally
positioned to capture metrics such as the type of input data sources, the volume of usage records
entering the collector, and any errors that occur during the input of usage records.

Such audit information is stored in the Input Dataset Audit NMEs and Input Source Audit NMEs, as
described in "Input Dataset Audit NME" and "Input Source Audit NME" (on page 304).

Aggregator Audit Points


The aggregator audit points capture metrics and errors before and during the processing of usage
data by aggregation schemes. Audit data captured at the aggregator includes such information as
the number of input or output records per scheme, number of records aggregated, number of
records failing validation, counts of specific types of input or output records, and so on.

The following audit points collect audit data when usage data enters or leaves the aggregator:

Aggregator Audit Points


Name Purpose

AggregatorInput To audit all NMEs coming into the Aggregator.

AggregatorFlush To audit information available at flush time.

AggregatorPurge To audit metrics from the datasets that are being purged.

An aggregator contains one or more aggregation schemes set up in parallel. The following audit
points collect audit data when the usage data enters an aggregation scheme:

AggregationSchemeInput   To audit all NMEs coming into the scheme.

Rules are the primary site at which audit data can be collected. Audit points are embedded in certain
standard rules as shown below:


Rule Audit Points


Name                      Point      Purpose

CorrelatorMatchRule       Find       To audit usage NMEs as they are correlated to sessions.
RangeCorrelatorMatchRule  Find       To audit usage NMEs as they are correlated to sessions.
AggregationRule           Create     To audit when new NMEs (leaf nodes) are created in the
                                     aggregation tree.
                          Aggregate  To audit when NMEs are aggregated, or merged with other
                                     NMEs in the aggregation tree.
FilterRule                Filter     To audit NMEs filtered out by this rule.
BusinessRule              Filter     To audit NMEs filtered out by this rule.
ConditionalRule           True       To audit NMEs when the condition is true.
                          False      To audit NMEs when the condition is false.
FlushProcessor            Store      To audit NMEs to be flushed (stored).
StoreRule                 Store      To audit NMEs stored.

Audit NMEs
The audit subsystem takes advantage of the eIUM NME schema to hold audit information in NME
attributes dedicated to auditing. Audit NME attributes are just like any other NME attributes except
they are used only for auditing. You can also capture information specific to your eIUM deployment
by extending the NME schema with custom audit attributes.

The eIUM audit subsystem employs four types of audit NMEs: Input Dataset Audit NMEs, Input
Source Audit NMEs, Output Dataset Audit NMEs, and Exception Audit NMEs. You can customize the
Input Dataset Audit NME and the Output Dataset Audit NME.

The following figure shows the generation of audit NMEs:

[Figure: Input sources feed the collector's encapsulator, producing Input Source Audit NMEs and an Input Dataset Audit NME; each aggregator scheme (Scheme 1, Scheme 2) produces an Output Dataset Audit NME, and Exception Audit NMEs are generated when errors are encountered.]

The following sections provide details about each NME type.


Input Source Audit NME


Depending on the configuration, a collector may gather usage data from multiple input sources and
combine that information when it flushes to the datastore. Audit information for a collector may
thus include data from several input sources. The audit subsystem uses one input source audit NME
for each input source.

IUM 3.1 provided support for only dataset auditing. A collector could gather data from multiple input
sources, but flush occurred at the end-of-file (EOF) for each input dataset file. Accordingly, there
was a one-to-one mapping between the input source and the input dataset audit NME. In contrast,
IUM 4.0 supports time-based as well as dataset audit configurations, so there may be one or more
input source audit NMEs per flush.

The input source audit NMEs have the following attributes:

Input Source Audit NME


Audit Attribute    Description

DatasetSourceInfo  The source of the dataset, indicated as follows:
                   - demo collector: Demo Collector
                   - leaf file collector: [<host>:]<file name>
                   - non-leaf collector: <collector>/<scheme>[/<DatasetID>]
                   - leaf GSN collector using GTP: <GSN host>/<port>

DatasetSourceType  An integer value indicating the type of the input data source as follows:
                   - 0: Unknown
                   - 1: Another collector
                   - 2: A file
                   - 3: Network source (for example, GTP)
                   - 4: Demo Encapsulator

SourceStart        Byte offset to the start of the input file, or the start time for the flush set

SourceEnd          Byte offset to the end of the input file, or the end time for the flush set

SourceEOF          End-of-file marker for a file source:
                   - 0: Input was not processed through the end of the input source
                   - 1: Input was processed through the end of the input source

SourceNMEsIn       Number of NMEs received from that source

NOTE: The audit subsystem supports source audit for file and collector sources but not network
protocols, with the exception of the GTP protocol.


Input Dataset Audit NME


The input dataset audit NME holds audit information related to each flushed batch of data entering
the collector. It consists of the following attributes:

Input Dataset Audit NME


Audit Attribute    Description

DatasetExceptions  The number of exceptions found in the dataset

DatasetFlushTime   The time at which the dataset was flushed

DatasetID          A unique integer representing the dataset or flush set, incremented by 1 for
                   each new flush

DatasetNMEsIn      The number of NMEs that were sent to the aggregator for the current dataset or
                   flush set. This attribute is available only if the NMECount audit set is
                   selected.

EndTime            The end time of the NMEs flushed.

Info               The name of the file that contains the audit usage information.

SourcesNumIn       Number of input sources.

StartTime          The start time of the NMEs flushed.

If auditing is enabled, there is always one input dataset audit NME per flush.

Output Dataset Audit NME


When the NMECount audit set is selected during configuration, the audit subsystem obtains and
stores the following audit NME attributes for each dataset that enters an aggregation scheme.

Output Dataset Audit NME


Audit Attribute        Description

SchemeNMEsIn           The number of NMEs that were received by the aggregation scheme for the
                       current dataset.

SchemeNMEsAggregated   The number of NMEs in the dataset that were aggregated, or merged with
                       other NMEs.

SchemeNMEsCreated      The number of NMEs (leaf nodes, not NMEGroups) created in the aggregation
                       tree by the aggregation scheme.

SchemeNMEsFiltered     The number of NMEs in the dataset dropped from processing by the
                       aggregation scheme.

SchemeNMEsOut          The number of NMEs sent out from the aggregation scheme for the dataset.


Exception Audit NME


The audit subsystem enables you to gather metrics about NMEs at two levels of granularity: dataset
or flush set and individual usage NME. In many cases, the dataset or flush set granularity is
sufficient. But when an error occurs with a specific NME, you may need to capture audit metrics on
precisely that NME so it can be reconciled later.

For example, if a session logout message was missed, you need to identify exactly which session ID
had a missing logout. If a parse error occurs in the dataset input, you need to know not only how
many parse errors occurred but also where they occurred. Parse errors can be particularly
interesting because they usually indicate input source corruption (often due to failing hardware or a
bad configuration). Because such errors leave data unprocessed, they can result in significant
revenue loss if they are not repaired.

Exception audit NMEs capture such information without incurring an excessive processing overhead.
Exception NMEs are created whenever an exception condition is encountered while processing data.
Using the information in these NMEs, you can trace a problem back to the record that caused it.

Currently, exception NMEs are generated by the SimpleSessionState, GPRSSessionState,
RecordEncapsulator, NetflowEncapsulator, RangeCorrelatorMatchRule, and CorrelatorMatchRule.

For each exception the following information is recorded in the NME:

Exception Audit NME


Audit Attribute       Description

AuditExceptionID      Identifies the exception. For example, ID 5 indicates that there was a
                      parser error.

AuditExceptionNMEID   Identifies the NME context associated with the exception. For example, the
                      GPRS sequence number and ID of the session record
                      (<SessionID><SequenceNumber>).

AuditExceptionSource  Helps you locate the source of the exception.

Audit Exceptions

ID  Cause                                        AuditExceptionNMEID          Class

1   Record sequence error                        <session-id>:<sequence-num>  GPRSSessionState
2   Duplicate logout                             <session-id>:<sequence-num>  GPRSSessionState
3   Duplicate login                              <session-id>:<sequence-num>  GPRSSessionState
4   Duplicate interim record                     <session-id>:<sequence-num>  GPRSSessionState
5   Error in parsing input record                input record                 RecordEncapsulator
6   Missing logout and login                     <matched-id>:id
7   Missing logout                               <session-id>
8   Missing login                                <session-id>
9   Duplicate login                              <session-id>
10  End record is missing (interim record        <id>                         GPRSSessionState
    upgraded to end record)
11  Start record is missing (interim record      <id>                         GPRSSessionState
    upgraded to start record)
12  Missing session for usage IP address         <ip-address>                 CorrelatorMatchRule
13  Duplicate session for usage IP address       <ip-address>                 CorrelatorMatchRule
14  Missing session for usage IP address range   <ip-range>                   RangeCorrelatorMatchRule
15  Missing session for usage IP address range   <ip-range>                   RangeCorrelatorMatchRule
16  Problem receiving the packet                 <packet>:<port>              NetflowEncapsulator
17  Problem processing the packet header         <packet>:<port>              NetflowEncapsulator
18  Problem parsing the packet                   <packet>:<port>              NetflowEncapsulator
19  Unknown problem in processing the packet     <packet>:<port>              NetflowEncapsulator

Audit Operations
An audit operation extracts a metric or performs a calculation to obtain the value of an audit NME
attribute. Audit operations can read usage NME attributes, but not modify them. For example, you
might count all the NMEs filtered out at the FlushProcessor. The audit operation would simply add 1
to an audit NME attribute for each NME filtered out.

Audit operations use the following syntax:


<AP>AuditOp=<audit-attribute>,<audit-operation>,<operand>

where

- <AP> is the name of an audit point.
- <audit-attribute> is an attribute set aside for auditing in the NME schema.
- <audit-operation> is either add, subtract, min, max, or set. Numeric and time NME attributes can use any of these operations. Other NME attribute types can only use the set operation.
- <operand> is either a fixed value such as a number or string, or a usage NME attribute.
The following table provides examples of audit operations:


Audit Operations
Operation Description

A1,add,A2 Add the value of A2 to A1.

A1,subtract,A2 Subtract the value of A2 from A1.

A1,min,A2 Store the smaller of A1 and A2 in A1.

A1,max,A2 Store the larger of A1 and A2 in A1.

A1,set,A2 Copy the value of A2 to A1.

In the following example, the audit operation adds the values of the dataVolumeGPRSDownlink
NME attribute from usage NMEs dropped by the FilterRule. The Filter audit point is defined in the
FilterRule. The SchemeVolumeFiltered audit attribute is defined in the NME schema.

FilterAuditOp=SchemeVolumeFiltered,add,dataVolumeGPRSDownlink
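
The same mechanism can also count NMEs rather than sum a usage attribute. In the following hypothetical sketch, MyFilteredCount stands for a custom audit attribute that you would first add to the NME schema; because the operand is the fixed value 1, the attribute increases by one for every NME that the FilterRule drops:

FilterAuditOp=MyFilteredCount,add,1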

Session Audit
The eIUM session architecture consists of a basic session match rule and session state, which can be
extended by new session match rules and states for new protocols. The audit subsystem supports
session auditing of complex GPRS sessions as well as simple sessions.

Simple Session Auditing


Most basic session collectors support session open, session continuation, and session close records.
Ideally, complete records are received in order. In practice, however, records coming into eIUM are
occasionally missing, out of order, or duplicates, which may result in lost revenue or customer
overcharges. Auditing can provide the information you need to find, repair, and process billable
records that would otherwise have been lost. The following table lists session audit points:

Simple Session Audit Points


Name Purpose

MissingLogins To audit missing logins

MissingLogouts To audit missing logouts

DuplicateLogins To audit duplicate logins

Logins To audit logins

Logouts To audit logouts

MissingLoginAndLogout To audit missing logins and logouts

The following table shows the audit information captured:


Simple Session Audit Information


Audit Attribute Description

SessionMissingLogins The number of missing login records

SessionMissingLogouts The number of missing logout records

SessionDuplicateLogins The number of duplicate login records

SessionLogins The number of login records

SessionLogouts The number of logout records

SessionMissingLoginAndLogout The number of sessions with missing logins and logouts

GPRS Session Auditing


The GPRS protocol differs from the standard session model in that GPRS sessions may last over a
very long duration. To handle the long session duration, GPRS has several new session records in
addition to the standard session open and session close records.

GPRS Session Records


Record      Description

Standalone  Indicates that a session opened and closed in the duration between record postings.
            For example, a user opened a session, made a phone call, downloaded a weather
            report, and then closed the session, all within the duration.

Interim     Posted after a fixed time period (for example, every hour) to record current
            activity and keep the session open.

Upgraded    Created if the current record indicates that a previous record was lost. For
            example, if a session open record is lost, then the first record received is an
            interim record. That interim record is upgraded to a session open to compensate
            for the previous record loss.

Handover    When mobile GPRS users move from one SGSN switch to another, this record indicates
            the transfer.
The following table lists GPRS Session audit points, implemented in the GPRSSessionState class.

GPRS Session Audit Points


Name               Purpose

DuplicateLogins    To audit duplicate login records per scheme
DuplicateLogouts   To audit duplicate logout records per scheme
Logins             To audit login records per scheme
Logouts            To audit logout records per scheme
StartUpgrades      To audit interim records that have been upgraded to start records
EndUpgrades        To audit interim records per scheme that have been upgraded to end records
DuplicateInterims  To audit duplicate interim records
Interims           To audit interim records
StandAlones        To audit standalone records
Gaps               To audit gaps in the session
HandOvers          To audit SGSN handovers

The following table lists the GPRS Session audit attributes.

GPRS Session Audit Information


Audit Attribute               Description

GPRSSessionDuplicateInterims  Number of session interims with duplicate IDs
GPRSSessionDuplicateLogins    Number of session logins with duplicate IDs
GPRSSessionMissingLogouts     Number of missing logout records per scheme
GPRSSessionDuplicateLogouts   Number of session logouts with duplicate IDs
GPRSSessionLogouts            Number of logout records per scheme
GPRSSessionLogins             Number of login records received per scheme
GPRSSessionMissingLogins      Number of missing login records per scheme
GPRSSessionEndUpgrades        Number of interim session records that were upgraded to end records
GPRSSessionMissingInterims    Number of missing interim records per scheme
GPRSSessionInterim            Number of interim session records
GPRSSessionStartUpgrades      Number of interim session records that were upgraded to start records
GPRSSessionStandalones        Number of standalone sessions
GPRSSessionGaps               Number of gaps (missing records) in the sessions
GPRSSessionHandOvers          Number of SGSN handovers

Correlation Audit
Correlation is the association of incoming usage records with an appropriate session so that the
customer can be billed. A correlating collector uses either CorrelatorMatchRule or
RangeCorrelatorMatchRule, depending on whether the session is defined by a single IP address or a
range of addresses.

If auditing is enabled, the audit subsystem uses the following audit points for each rule:


Name                    Purpose

ExceptionUncorrelated   If exception auditing is not disabled, to identify the uncorrelated usage
                        NMEs and log the cause of error.
SchemeNMEsCorrelated    To audit correlated NMEs.
SchemeNMEsUncorrelated  To audit uncorrelated NMEs.
SessionNMEsIn           To audit incoming session NMEs.
UsageNMEsIn             To audit incoming usage NMEs.

The audit subsystem collects the following information at these points:

Audit Attribute         Description

SchemeNMEsCorrelated    Number of usage NMEs correlated to sessions during this flush. Normally,
                        this number should match the number of usage NMEs.

SchemeNMEsUncorrelated  Number of usage NMEs that could not be correlated to sessions. A non-zero
                        value suggests that revenue is being lost because records are not being
                        billed. This count is an important flag indicating a problem that you must
                        investigate and resolve.

SessionNMEsIn           Number of session NMEs processed by the rule.

UsageNMEsIn             Number of usage NMEs processed by the rule.

These counts are compared during audit verification, as described in the following section.

Audit Data Processing


Audit data processing involves audit verification, in which the audit subsystem validates the NME
counts, and more complex audit NME transformations effected by audit rules. The subsystem
performs audit data processing functions within the aggregator:

- Checks usage data and audit information to validate and reconcile the information and generate operational audit log entries
- Applies a rule chain to the audit NMEs in order to perform additional validation or adornment before passing it to the datastore
- Resets the dataset, source, and scheme audit attribute values after they are persisted to the datastore

Audit Verification
In addition to collecting and reporting audit data, the audit subsystem can also verify that the usage
data was processed correctly using audit rules. You can also create new rules to customize the
audit subsystem.

One such audit rule verifies the number of NMEs flowing through each aggregation scheme. When
the NMECount audit set is selected during configuration, the audit subsystem verifies that the
following equation is true and logs a warning if false:

I=F+A+D


- I: Number of NMEs coming into the aggregation scheme
- F: Number of NMEs filtered out
- A: Number of NMEs aggregated (that is, merged with other NMEs)
- D: Number of NMEs sent to the datastore by the aggregation scheme.
For example, if 100 NMEs come into a scheme, 25 are filtered out, and 35 are aggregated, the
number of NMEs going out should be 40. If the counts are correct, this information is logged only in
the collector's log file. If not, the information is also logged in the audit log file.

You can create a custom audit rule to do the same for session collectors.

Another audit rule verifies the number of NMEs through the correlator. When the Correlation audit
set is selected during configuration, the audit subsystem verifies that the following equations are
valid and logs a warning if false:

DatasetNMEsIn = SessionNMEsIn + UsageNMEsIn

UsageNMEsIn = SchemeNMEsCorrelated + SchemeNMEsUncorrelated
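
For example, if a flush contains 1,000 session NMEs and 9,000 usage NMEs, DatasetNMEsIn should be 10,000; and if 8,950 of those usage NMEs are correlated to sessions, SchemeNMEsUncorrelated should be 50.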

If the counts are correct, this information is logged only in the collector's log file. If not, the
information is also logged in the audit log file.

AuditAdornmentRule
In addition to the verification rules, the audit subsystem provides the AuditAdornmentRule
component, which enables you to insert a custom audit point in a rule chain and collect audit metrics
and generate log messages.

Configuration Attribute  Description

AuditAttributeName       The audit attribute name representing the information you want to
                         collect. For example: MissingLogins. The attribute name must be in the
                         NME schema.

AuditPointName           Name of the audit point. For example: MyAuditPoint

LogMessageFormatString   The format of the log message to be generated. For example:
                         Login +SessionID+ +%DATESTAMP
                         Optional.

LogLevel                 The level of detail in the log.
                         Optional. Default: INFO

DateFormat               The format of the date stamp in the log. Required if %DATESTAMP is used
                         in the LogMessageFormatString attribute.
                         Default: MMddyyHHmmss


The following configuration example shows how to configure the AuditAdornmentRule to increment
the DroppedNMECount audit attribute value:

[/collector/Aggregator/scheme0/AuditAdornmentRule]
ClassName=com.hp.siu.collector.rules.AuditAdornmentRule
AuditAttributeName=DroppedNMECount
AuditPointName=MyAuditPoint
LogMessageFormatString="NME + ID + was dropped
LogLevel=INFO

The rule also generates an audit log message at the INFO level.

Audit Data Storage


Audit data is stored along with usage data in the local datastore of each audited collector. The audit
subsystem employs the eIUM MuxDatastore component to store audit data. If the audited collector
does not use the MuxDatastore component, the audit subsystem creates it and incorporates the
existing datastore.

The datastore ordinarily creates one backing store for each scheme in order to store usage data. If
auditing is enabled, the datastore creates two additional backing stores (auditSrc and audit_ds) to
manage global audit statistics such as source file name, number of records processed successfully,
number of records with errors, and so on. The stores are transactionally updated at flush time.

The datastore generates log messages and operational audit messages about the persistence of
usage data to the backing store and the movement of usage data.

In general, audit data should age at the same time as, or later than, usage data. The audit
subsystem supports the following aging policies for audit data:

- Aging based on NME FlushTime (recommended)
- Aging based on the number of datasets
- Aging based on End Time
By default, audit data is aged according to the same policy as usage data. As the candidate usage
datasets are selected (by dataset ID) for deletion, the corresponding audit datasets are also
identified. Although we do not recommend it, you can override the inheritance of TableAgingPolicy
and TableAgeLimit by specifying AuditAgingPolicy and AuditAgeLimit configuration attributes.

Cautions
- Table rolling is not encouraged with auditing because it can corrupt the audit trail.
- If you use aging based on the number of datasets or on End Time for audit data, you must use the same aging policy for usage data.
- Do not set the AuditAgeLimit lower than the TableAgeLimit.

Audit Tables
Every collector in a deployment has a history table in the database for each aggregation scheme as
shown in the following figure. The history table is used internally by eIUM to facilitate querying and
aging. The history table contains metadata about the database tables or files that store the NMEs.
The history table holds information such as the start and end time of the dataset, the dataset flush
time, the dataset identifier, and so forth. Each row in the history table holds information about one
dataset. The history table records when each flush occurred and in which file or database table the
flushed data resides. The history table is an internal table that has no external configuration
options.

When audit is turned on for the collector, the existing history table is expanded to include a few
more columns that hold auditing information as shown in the next figure. The number and type of
additional columns depends on the type of auditing that is enabled. "Audit Data Added to Collector
History Tables" (on page 316) describes the columns added to the history tables when audit is
enabled for the collector.

Enabling audit also creates two additional internal aggregation schemes in the collector as shown in
the following figure. These two internal schemes have no external configuration options. They are
self-configured and hold business logic to enable auditing. Similar to how the history tables are
created for every aggregation scheme in a collector, two additional history tables are created for
the internal audit schemes when audit is turned on as shown below.


These two history tables are named:

- <CollectorName>_AUDITSRC_HISTORY
- <CollectorName>_AUDIT_DS_HISTORY

For example, in the above figure, the audit history tables would be named C1_AUDITSRC_HISTORY
and C1_AUDIT_DS_HISTORY. The naming convention, aging behavior, and scope of the two audit
history tables are similar to those of the history tables created for the collector's aggregation
schemes. The contents of these audit history tables are described in "Audit History Tables" (on page
315). The additional columns of audit data added to the regular aggregation schemes' history tables
are described in "Audit Data Added to Collector History Tables" (on page 316).

Audit History Tables


Whenever audit is enabled for a collector, two audit aggregation schemes and two corresponding
audit history tables are created. This section describes the contents of the two history tables
created for the two internal audit aggregation schemes.

The tables are not dependent on the type of auditing that is enabled. The two audit history tables
that are created when audit is turned on are named as follows:

- <CollectorName>_AUDITSRC_HISTORY
- <CollectorName>_AUDIT_DS_HISTORY
The <CollectorName>_AUDITSRC_HISTORY table contains the following information:

Contents of the <CollectorName>_AUDITSRC_HISTORY Table


Column Name       Contents                                                            Type

STARTTIME         Start time of the dataset in UTC format.                            INTEGER
ENDTIME           End time of the dataset in UTC format.                              INTEGER
ID                Row identifier.                                                     INTEGER
INFO              Path where the NME file containing the audit NMEs for the dataset   VARCHAR
                  is stored. The NME attributes are EndTime, StartTime,
                  DatasetSourceType, DatasetSourceInfo, SourceNMEsIn, SourceStart,
                  SourceEnd, SourceEOF.
DATASETID         Dataset identifier.                                                 INTEGER
DATASETFLUSHTIME  Time at which the dataset was flushed.                              INTEGER

The <CollectorName>_AUDIT_DS_HISTORY table contains the following information:

Contents of the <CollectorName>_AUDIT_DS_HISTORY Table


Column Name        Contents                                                           Type

STARTTIME          Start time of the dataset in UTC format.                           INTEGER
ENDTIME            End time of the dataset in UTC format.                             INTEGER
ID                 Row identifier.                                                    INTEGER
INFO               Path where the Audit_DS_NME file is stored. The NME file holds     VARCHAR
                   the audit NME schema. The NME attributes are EndTime, StartTime,
                   AuditExceptionNMEId, AuditExceptionId, and AuditExceptionSource.
DATASETID          Dataset identifier.                                                INTEGER
DATASETFLUSHTIME   Time at which the dataset was flushed.                             INTEGER
SOURCESNUMIN       Specifies the number of data sources from which the input NMEs     INTEGER
                   enter the scheme.
DATASETEXCEPTIONS  Exceptions, if any.                                                INTEGER
DATASETNMESIN      Specifies the number of NMEs or records that were present in the   INTEGER
                   particular dataset that was flushed.

NOTE:The column DATASETNMESIN is added to the AUDIT_DS_HISTORY table only when a specific
type of audit called the Dataset Audit - NME Count is enabled. This column specifies the number
of NMEs or records that were present in the particular dataset that was flushed.
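
If the collector's datastore is a JDBC-accessible database (for example, the MySQL database used
by the Audit Report Server later in this chapter), the audit history tables can be queried directly
for ad hoc checks. The following sketch is illustrative only and is not part of the product: the
connection URL, credentials, and the collector name C1 are placeholders, a suitable JDBC driver
must be on the classpath, and the DATASETNMESIN column is present only when Dataset Audit - NME
Count is enabled.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch: list recent flushes recorded in a collector's audit dataset
// history table. Table and column names follow the conventions described above;
// the URL, user, password, and collector name "C1" are placeholders.
public class AuditHistoryReader {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/siu20";  // placeholder connection URL
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT DATASETID, DATASETFLUSHTIME, DATASETNMESIN, DATASETEXCEPTIONS "
                 + "FROM C1_AUDIT_DS_HISTORY ORDER BY DATASETFLUSHTIME DESC")) {
            while (rs.next()) {
                System.out.printf("dataset=%d flushed=%d nmesIn=%d exceptions=%d%n",
                        rs.getLong("DATASETID"), rs.getLong("DATASETFLUSHTIME"),
                        rs.getLong("DATASETNMESIN"), rs.getLong("DATASETEXCEPTIONS"));
            }
        }
    }
}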

Audit History Table Aging and Rolling


The audit-specific history tables (<CollectorName>_AUDITSRC_HISTORY and
<CollectorName>_AUDIT_DS_HISTORY) behave in the same way as the history tables of the collector's
aggregation schemes. Aging and rolling of these tables occur as specified by the configured aging
policy, that is, by the TableAgingPolicy, TableAgeLimit, and TableRollLimit configuration
attributes of the datastore component. See the eIUM Component Reference for configuration details
on all the datastores.

Audit Data Added to Collector History Tables


This section describes the audit data added to the collector's history tables when audit is enabled
for the collector. There are two types of auditing that can be enabled: Dataset Audit and
Aggregation Scheme Audit.

l Dataset Audit provides audit information on each dataset that enters the collector.
l Aggregation Scheme Audit provides audit information on each NME that enters and exits each
aggregation scheme.
The information added to each aggregation scheme's history table depends on which type of audit
is enabled.

Dataset Audit - NME Count


Dataset Audit - NME Count enables audit at the dataset level. Turning on this audit monitors the
NME count, that is, the number of NMEs flushed per dataset after processing. Simply put, Dataset
Audit - NME Count provides the number of NMEs in each dataset. It is enabled by default when you
turn audit on with the Launchpad. You can turn it off by unchecking the box next to NME Count
(General) under Dataset Audit in the Launchpad, as shown below. You display this screen by
double-clicking a collector in the Launchpad and selecting the Audit tab.

Aggregation Scheme Audit


Aggregation Scheme Audit is a more comprehensive audit that provides auditing for each
aggregation scheme. Scheme Audit provides three different types of auditing that can be turned on
based on the type of collector and the type of information that needs to be monitored. The types
of Aggregation Scheme Audit are:

l NME Count - Provides the number of NMEs at different stages of processing, such as the number
of NMEs that enter the scheme and the number of NMEs filtered, aggregated, duplicated and so
forth and then flushed out.
l Session - Provides audit metrics for session collectors, such as all sessions opened, closed,
dropped and duplicated.
l Correlation - Provides audit metrics for the session NMEs and usage NMEs which are correlated
or uncorrelated in a correlator collector.
Each of these audit types can be enabled or disabled separately for each aggregation scheme. The
information added to each aggregation scheme's history table depends on which type of audit is
enabled. The figure below shows a portion of the Launchpad with these three types of audit for two
aggregation schemes named StoreData and AggregateData. You display this screen by double-clicking
a collector in the Launchpad and selecting the Audit tab.

Scheme Audit - NME Count

This type of audit provides the number of NMEs at different stages of processing, such as the
number of NMEs that enter the scheme and the number of NMEs filtered, aggregated, duplicated,
and so forth, and then flushed out. The aggregation scheme NME count audit is enabled by default
when auditing is enabled in the collector. It can be turned off by deselecting the option next to
NME Count (General) under the Scheme Audit <aggregation scheme name> panel of the Launchpad,
as shown in the screen above. When Scheme Audit - NME Count is turned on, the additional columns
shown in the table below are added to the aggregation scheme's history table. These are all
INTEGER columns that indicate the NME count at the end of processing through all the rules in
that scheme.

NME Count Audit Columns Added to the Scheme History Table

SCHEMENMESIN (INTEGER): The number of NMEs that entered the scheme for processing.

SCHEMENMESADDED (INTEGER): The number of NMEs that were added to the existing NMEs.

SCHEMENMESDUPLICATE (INTEGER): The number of NMEs that were detected as duplicates.

SCHEMENMESFILTERED (INTEGER): The number of NMEs that were filtered out by the FilterRule or
dropped by a ConditionalRule.

SCHEMENMESCREATED (INTEGER): The number of NMEs, if any, that were created due to some condition
in one of the rules.

SCHEMENMESAGGREGATED (INTEGER): The number of NMEs that were aggregated by the AggregationRule.

SCHEMENMESOUT (INTEGER): The number of NMEs that were actually sent to the datastore, taking into
account all the additions and deletions of NMEs while traversing the rule chain.
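
The relationships among these counters can serve as a simple revenue-assurance cross-check. The
balance used below is only one plausible reconciliation and is not defined by eIUM; how filtering,
duplication, aggregation, and NME creation affect the flushed count depends on your rule chain, so
adjust the expected value before relying on it.

// Illustrative sketch only: a consistency check over the NME count audit columns
// for one history-table row. The balance used here is an assumption and must be
// adapted to the behavior of your scheme's rule chain.
public final class SchemeNmeCountCheck {

    public static boolean isBalanced(long in, long added, long created,
                                     long filtered, long duplicate,
                                     long aggregated, long out) {
        // Assumption: added and created NMEs increase the output count, while
        // filtered, duplicate, and aggregated NMEs reduce it.
        long expectedOut = in + added + created - filtered - duplicate - aggregated;
        return expectedOut == out;
    }

    public static void main(String[] args) {
        // Hypothetical values as they might appear in one row of the history table.
        boolean ok = isBalanced(1000, 0, 5, 40, 10, 0, 955);
        System.out.println(ok ? "NME counts reconcile" : "possible NME leakage");
    }
}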

Scheme Audit - Session

This provides audit metrics for all sessions opened, closed, dropped and duplicated. Session audit
can be turned on only for session collectors, which are collectors with a SessionMatchRule,
SimpleSessionState and so forth. You turn on session audit by selecting the Session (General) box
under the appropriate aggregation scheme (IPRangeSession in this case) in the Launchpad as shown
below.

When session audit is enabled as shown above, the columns described in the table below are added
to the history table of the collector's aggregation scheme.

Session Audit Columns Added to the Scheme History Table

SCHEMENMESSOURCE (VARCHAR): The input source of the NMEs.

SESSIONMISSINGLOGINS (INTEGER): Session logouts that occur without a preceding login; these are
recorded as missing logins.

SESSIONMISSINGLOGOUTS (INTEGER): Session logins that occur without a subsequent logout; these are
recorded as missing logouts.

SESSIONDUPLICATELOGINS (INTEGER): Duplicate logins for the same session (without a logout in
between).

SESSIONLOGINS (INTEGER): Total number of logins.

SESSIONLOGOUTS (INTEGER): Total number of logouts.

SESSIONMISSINGLOGINSANDLOGOUTS (INTEGER): Number of missing logins and logouts.

Scheme Audit - Correlation

This provides audit metrics for the session NMEs and usage NMEs that are correlated or
uncorrelated in a correlator collector. Correlation audit works only for collectors that have two
correlated input sources, that is, collectors with a CorrelatorMatchRule or a
RangeCorrelatorMatchRule.

You turn on correlation audit by selecting the Correlation (General) box under the appropriate
aggregation scheme (Correlated in this case) in the Launchpad as shown below.

When correlation audit is turned on as shown above, the following columns are appended to the
history table of the collector's aggregation scheme:

Correlation Audit Columns Added to the Scheme History Table

SCHEMENMESSOURCE (VARCHAR): The input source of the NMEs.

SESSIONNMESIN (INTEGER): The number of session NMEs entering the correlator.

USAGENMESIN (INTEGER): The number of usage NMEs entering the correlator.

SCHEMENMESCORRELATED (INTEGER): Total number of NMEs that were correlated.

SCHEMENMESUNCORRELATED (INTEGER): Number of NMEs that were not correlated.

Customizing Audit
You can customize the eIUM audit subsystem to gather custom audit information and perform
verification operations specific to your deployment. Customizing the audit subsystem involves
extending its collection and processing capabilities.

Custom Audit Data Collection


Extending the collection capabilities of the audit subsystem involves the following broad tasks:

l Analyze the audit data from the standard audit configuration and determine the additional audit
metrics that you want to collect.
l Based on the metric, select the audit NME type that is best suited to hold this information. Note
that you can extend the Input Dataset Audit NME and the Output Dataset Audit NME, but not the
Input Source Audit NME or Exception Audit NME.
l Specify the NME attributes that will hold the custom information. Audit attribute names must be
unique across all audit NMEs. For example, you cannot use the same attribute name in the Input
Dataset Audit NME and the Output Dataset Audit NME. Doing so will cause a configuration
exception. Existing NME attribute names are also reserved for use by eIUM.
l Extend the NME schema by adding the custom audit attributes.
l Determine the audit point at which the metrics are to be collected and the audit operation that
will obtain this information. Modify the collector configuration by adding the audit operations and
restart the collector.

Custom Audit Verification


You can create your own custom audit verification rules in Java. The simplest approach is to
subclass BusinessRule, which provides simple methods for extracting NME attribute values, and to
specify the audit NME attributes in the configuration. Your custom rule then operates on the
specified audit NME attributes.
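
The sketch below shows the general shape such a rule might take. It is not taken from the eIUM
SDK: only the idea of subclassing BusinessRule comes from the text above, while the callback name,
the attribute-extraction helper, and the NME type used here are assumptions made for illustration.
Consult the eIUM SDK documentation for the actual API before writing a real rule.

// Illustrative sketch only -- NOT the documented eIUM API. The method names
// processNME() and getIntAttribute() and the attribute name "DatasetExceptions"
// are assumptions; only subclassing BusinessRule comes from the text above.
public class ExceptionThresholdRule extends BusinessRule {

    // Hypothetical threshold, assumed to be loaded from the rule configuration.
    private int maxExceptions = 0;

    // Hypothetical per-NME callback invoked by the rule chain.
    public void processNME(NME nme) throws Exception {
        // Extract the configured audit attribute from the audit NME.
        int exceptions = getIntAttribute(nme, "DatasetExceptions");
        if (exceptions > maxExceptions) {
            // A real rule might raise an alarm, log a message, or route the NME
            // to an error stream here.
            System.err.println("Audit check failed: " + exceptions
                    + " exceptions in dataset, threshold " + maxExceptions);
        }
    }
}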

Analyzing Audit Reports


The following sub-sections show you how to gather audit data from any audit-enabled collector in
your deployment and view reports based on cumulative information. This end-to-end view of the
audit data across your deployment can help you detect errors in the collection and processing of
usage data, and to correct these errors.

eIUM provides a preconfigured collector called the Audit Report Server that gathers audit data from
audit-enabled collectors. The Audit Report Server ignores collectors for which auditing has not been
enabled.

[Figure: audit-enabled collectors send audit data to the Audit Report Server; the Audit Reporting
web application (a servlet running under Tomcat) retrieves the data and presents reports in a
browser.]

In order to generate and view audit reports, you must enable the Audit Reporting web application
and create the Audit Report Server. After the Audit Report Server is started, you can use the Audit
Reporting web application to generate and view audit reports on a browser.

NOTE: During activation, the eIUM Activation Wizard prompts you to activate the Audit Reporting
web application. You must select this option on at least one host in your deployment. If you did
not activate the Audit Reporting web application at that time, run the activation wizard again and
select it. Refer to the Installation Guide for detailed instructions.

We recommend that you run the Audit Report Server and the Audit Reporting web application on the
same host. If they run on different hosts, you must edit the Database field in the audit report
server collector's configuration to use the host name (if DNS is configured properly) or IP
address of the host running the Audit Reporting web application instead of localhost:
Database=jdbc:mysql://localhost:3306/siu20

Create and Start the Audit Report Server


To create and start the Audit Report Server using Launchpad:

1. In Launchpad, select File -> New or click New Collector. The New Collector dialog is displayed.
2. In the Template Selection pane, select Factory Templates. The table of pre-configured
collectors is populated.
3. Select Audit -- Report Server and click Next.
4. In the Collector Setup pane, select the host on which you want to run the Audit Report Server.
We recommend that you select the host on which you activated the Audit Reporting web
application so that the web application performs better when retrieving audit data. Otherwise,
you will need to edit the configuration of the Audit Report Server manually.
5. In the Collector Setup pane, specify AuditReportServer as the name of the new collector.
6. Click Create and then Close. You do not need to set the collector configuration.
7. In the deployment pane or the deployment map, select the AuditReportServer.
8. From the toolbar, click Actions -> Start, or right-click and select Start.
You can also use the Audit Reporting web application to create an Audit Report Server. However, you
cannot use the Audit Reporting web interface to start it. You must return to Launchpad to start the
Audit Report Server, or you can use the Operations Console to start the process. See the eIUM
Operations Console User Guide for more information on starting and stopping processes.

CAUTION: We recommend that you run only one Audit Report Server in your deployment. Running
more than one Audit Report Server can cause conflicts in the database tables.

The Audit Report Server is now collecting audit data from audit-enabled collectors in your
deployment. The Audit Report Server must typically run for several hours in order to collect
sufficient data. After it has collected enough data, you can generate and view audit reports as
shown in the following sections.

View Daily Audit Reports


To view the audit report for a specific day using Launchpad:

1. In the deployment pane, select Deployment.


2. Right-click and select Daily Audit Report. The Generate Daily Audit Report dialog is displayed.

3. Specify the day for which you want to view the report.
4. Select one or more report types. See the following sections for details.
5. Specify the format of the report.
6. Click Run.

Exception Report
The exception report shows the exceptions encountered during audit data collection or processing,
as shown in the following sample:

Collector Data Output Report


The collector data output report shows audit information about output (last-level) collectors in the
collector hierarchy.

Total Summary Report


The total summary report shows a summary of audit information regarding the input and output of
all audit-enabled collectors in the deployment.

Collector Data Source Report


The collector data source report shows the input source datasets, as shown in the following sample:

Correlation Report
The correlation report shows audit data from all the audit-enabled correlation collectors in your
deployment, allowing you to monitor the status of uncorrelated NMEs.

Inter-Collector Report
The inter-collector report shows audit information about inter-collectors (intermediate collectors
in a hierarchy) and their status relative to the leaf collectors: the number of NMEs by which each
inter-collector lags, and whether it is late or early.

Session Report
The session report shows audit data from all the audit-enabled session collectors in your
deployment, as shown in the following sample:

Create Audit Reports


Creating an audit report means creating a report template based on a report type, report title, and
audit-enabled collector. Later, when you run this report, the template is populated with audit data
and displayed graphically. To create an audit report:

1. In Launchpad, select Tools -> Web Applications. This launches the web browser and loads the list
of eIUM Web Tools configured for your deployment. If the browser does not launch
automatically, run the browser and enter the URL: http://<hostname:port>/auditreports, where
hostname refers to the system on which the Audit Reporting web application is running and
port is the default (8159) or the port number you specified during activation.
2. Click Home and then click eIUM Audit Reports.
3. In the eIUM Audit Reports page, click Create.
4. In the Audit Report Selection page, select a report type and click Next.

5. Specify the report title and click Next.


6. Select an audit-enabled collector from the drop-down list and click Save.

Run Audit Reports


To run an audit report:

1. In Launchpad, select Tools -> Web Applications. This launches a browser and loads the list of
eIUM Web Tools configured for your deployment. If the browser does not launch automatically,
run the browser and enter the URL: http://<hostname:port>/auditreports, where hostname
refers to the system on which the Audit Reporting Web application is running and port is the
default (8159) or the port number you specified during activation.
2. Click Home and then click eIUM Audit Reports.
3. In the eIUM Audit Reports page, select a report title from the list.

4. In the calendar, select a day, a week, or a month. The selected period is highlighted.
5. Click View Report.
6. When the report displays, click Data, Print, or Export Data for additional operations.

Chapter 14

Managing eIUM with HP OpenView


HP eIUM can be monitored and controlled with HP OpenView Operations. HP OpenView Operations
(OVO) is a distributed client-server software solution designed to help system administrators detect,
solve, and prevent problems occurring in networks, systems, and applications in any enterprise. OVO
is a scalable and flexible solution that can be configured to meet the requirements of any
information technology organization and its users. System administrators can expand the
applications of OVO by integrating management applications from OVO partners or other vendors.
Files provided with eIUM support OVO on HP-UX and Oracle Solaris (UNIX) platforms only. For more
information on HP OpenView Operations, see http://ovweb.external.hp.com/lpe/doc_serv.

For information on how to centralize log messages using HP OpenView, see the Logging chapter of
the HP eIUM Administrator's Guide.

Methods of Managing eIUM

Activating OVO Management of eIUM on each eIUM Node

Making eIUM Log Files Available to OpenView Operations

Notifying OpenView When a Collector Starts or Stops

Activating OVO Management of eIUM on the OVO Management Console

Uploading OVO Configuration

Configuring eIUM Node Information into OVO

Distributing OVO Managed Node Information to Agents

Methods of Managing eIUM


eIUM provides OVO with the capability to monitor and manage eIUM deployments in the following
ways:

l Log file access: eIUM component (collectors, Admin Agent, Configuration Server) log file
messages are collected, processed, filtered and presented as events in the OVO Message
Browser.
l Process monitoring: eIUM components notify the OVO operator of termination or restart of eIUM
components. Corrective operator-initiated and automatic actions are available for specific
events.
l Administrator access: A special operator ium_op is created for access to eIUM management
functions and the eIUM Launchpad is placed in the OVO Application Bank, allowing access to eIUM
management functions to be restricted to specific operators.
HP eIUM-OVO integration uses the following mechanisms:

l Merging of eIUM log file information: In typical eIUM-based IP mediation systems, several eIUM
collectors run on the same host. The log file merger tool siulogmerger automatically discovers
which eIUM components are running on the host, opens these logs, converts log file formats, and
merges all appropriate information into a single file that can be read by the OVO agent running
on that host. The OVO agent then forwards the resulting events to the OVO Management
Console.
l HP eIUM collectors and configuration server run-time monitoring: The eIUM host admin agent
controls the run-time operations of all eIUM components (start, stop, restart, and so forth) and
monitors collector run-time status. The admin agent can execute an external script whenever
run-time status changes. Monitoring scripts are provided that notify OVO of eIUM component
status changes. These scripts run the OVO opcmsg command.
l HP eIUM Admin Agent monitoring: The Admin Agent is monitored separately via the iumadm_
chk.sh script and OVO templates.
l Pre-configured operator initiated actions: OVO template files include specific corrective actions
that can be performed by OVO/eIUM operators.
l eIUM application group: The eIUM Application group contains just one application, called
Launchpad. With Launchpad, the OVO/eIUM operator can control eIUM collectors (for example, start,
stop, restart, and get status and statistics). This application runs the /opt/SIU/OV/guistart
script, which in turn starts the eIUM Launchpad at /opt/SIU/bin/launchpad. This script may be
modified as needed (for example, to specify an alternate X Windows display host by exporting a
DISPLAY variable before the Launchpad is executed).

Activating OVO Management of eIUM on each eIUM Node


Two configuration steps must be performed on each eIUM node that is to be monitored by OVO: the
log file information must be gathered and made available to the OVO agent running on the system,
and the eIUM Admin Agent must be configured to notify OVO about status changes of eIUM
components.

An OVO agent must be running on the eIUM node for monitoring by OVO to take place (that is, the
eIUM node must already be an OVO-managed node).

Making eIUM Log Files Available to OpenView Operations


The siulogmerger utility combines eIUM log files into the format required by OpenView. It continually
monitors eIUM log files and adds new log messages to the merged log file. OVO templates provided
with eIUM instruct HP OpenView to read this new log file and include its contents as messages into
the OVO Message Browser.

To start siulogmerger and create the proper log file, use the following command:
/opt/SIU/bin/SIUJava com.hp.siu.tools.siulogmerger -o
/var/opt/SIU/log/IUMMonitor.log

You should add this command to a local boot script so it is started after all eIUM components each
time the system boots. When adding to a boot script, we also recommend using the nohup
command to start it by preceding the above command with nohup (see the nohup man page on
UNIX).

Notifying OpenView When a Collector Starts or Stops


Use the NotifyCommand attribute of the host admin agent to specify a program to be invoked to
notify OVO of a status change of an eIUM component.

1. Copy the appropriate version of the Perl script process.pl to a directory where local system
tools are kept. The HP-UX version is /opt/SIU/OV/hpux.bin/process.pl; the Oracle Solaris version
is /opt/SIU/OV/solaris.bin/process.pl. This script requires that Perl be installed on the system.
Make sure the first line of the script refers to the correct location of Perl on this system.
2. Run the eIUM Launchpad and log in if security is installed.
3. Double click on Deployment in the deployment pane to display the hosts.
4. Select the host to be configured.
5. Select the Configure tab.
6. Click on the Edit Host Startup Configuration button. The LaunchPad displays all the parameters
used whenever the host admin agent starts.
7. Select Admin Agent in the left-hand pane.
8. Check the Enable Notify Command check box.
9. Type the complete path name of the process.pl script that you want the admin agent to invoke,
using the location where you installed it (do not run it directly from /opt/SIU/OV).
10. Click the OK button.

Activating OVO Management of eIUM on the OVO Management Console

The remaining steps required for OVO to monitor eIUM are all performed on the OVO Management
Console. These additional steps involve uploading the OVO templates for eIUM into OVO, configuring
OVO, and distributing the information out to all OVO agents.

Note that the OVO Management Console is not required to be the eIUM Configuration Server or even
to be running any eIUM components. However, if eIUM has not been installed on the OVO
Management Console, you will need to copy the /opt/SIU/OV directory from a machine where eIUM is
installed in order to complete the configuration of OVO. If the OVO Management Console is running
eIUM components, then the steps in the previous section should be performed first, just as on all
other eIUM nodes.

Uploading OVO Configuration


eIUM includes OVO templates that describe the functions to OVO and OVO agents. These files must
be uploaded into OVO with the following command:
/opt/OV/bin/OpC/opccfgupld -add /opt/SIU/OV/ovo_config

Use -replace instead of -add if an upload has been performed previously.

Configuring eIUM Node Information into OVO


Nodes must be added to groups that are defined in eIUM template files so OVO knows which nodes
to monitor. Add the nodes as follows:

1. Run opc and login as opc_adm.


2. Add all hosts running eIUM components to the:
n OVO Node Bank (if not already there)

n eIUM Nodes group of the Node Hierarchy Bank


n eIUM Node Group of the Group Bank
3. Reconfigure the ium_op user in the User Bank as necessary for local requirements (at least set
a password).
4. Modify the Launchpad application in the eIUM application group:
n Change hostname to eIUM Configuration server

n Change root password

NOTE: If the eIUM Configuration Server is different from the OVO Management Console,
modify the /opt/SIU/OV/guistart script on the eIUM Configuration Server to set and export
an appropriate X Windows DISPLAY environment variable before launchpad is executed so
that Launchpad displays on the OVO console.

5. Assign the eIUM Template Group to all hosts running eIUM components using the pull down
menu Actions->Agents->Assign Templates.

Distributing OVO Managed Node Information to Agents


The final step is to distribute the configured information to all OVO agents so they can monitor their
respective eIUM nodes.

1. Copy the appropriate version of iumadm_chk.sh to the appropriate OVO directory for
distribution (/opt/SIU/OV/hpux.bin/iumadm_chk.sh should be copied to
/var/opt/OV/share/databases/OpC/mgd_node/customer/hp/pa-risc/hp-ux11/monitor for HP-
UX nodes and/or /opt/SIU/OV/solaris.bin/iumadm_chk.sh should be copied to
/var/opt/OV/share/databases/OpC/mgd_node/customer/sun/sparc/solaris/monitor for Oracle
Solaris nodes). Use the chmod command to set the mode of the files to 600 and use the
compress command to compress them.
2. Distribute OVO agent software and configuration to managed nodes using the pull down menu
Actions->Agents->Install/Update SW & Config.
