
Attunity Integration Suite

AIS User Guide and Reference Version 5.1


AIS5100

March 2008

AIS User Guide and Reference, Version 5.1
AIS5100
Copyright March 2008, Attunity Ltd. All rights reserved.

Primary Authors: David Goldman, Andre Liss, Jeanne Wiegelmann

Contributors: Yishai Hadas, Dror Harari, Tzachi Nissim, Adeeb Massad, Costi Zaboura, Sami Zeitoun, Gadi Farhat, Arie Kremer

The Programs (which include both the software and documentation) contain proprietary information; they are provided under a license agreement containing restrictions on use and disclosure and are also protected by copyright, patent, and other intellectual and industrial property laws. Reverse engineering, disassembly, or decompilation of the Programs, except to the extent required to obtain interoperability with other independently created software or as specified by law, is prohibited.

The information contained in this document is subject to change without notice. If you find any problems in the documentation, please report them to us in writing. This document is not warranted to be error-free. Except as may be expressly permitted in your license agreement for these Programs, no part of these Programs may be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose.

If the Programs are delivered to the United States Government or anyone licensing or using the Programs on behalf of the United States Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the Programs, including documentation and technical data, shall be subject to the licensing restrictions set forth in the applicable Attunity license agreement, and, to the extent applicable, the additional rights set forth in FAR 52.227-19, Commercial Computer Software - Restricted Rights (June 1987).

Attunity Ltd., 70 Blanchard Road, Burlington, MA 01803

The Programs are not intended for use in any nuclear, aviation, mass transit, medical, or other inherently dangerous applications. It shall be the licensee's responsibility to take all appropriate fail-safe, backup, redundancy and other measures to ensure the safe use of such applications if the Programs are used for such purposes, and we disclaim liability for any damages caused by such use of the Programs.

Attunity is a registered trademark of Attunity Ltd and/or its affiliates. Other names may be trademarks of their respective owners.

The Programs may provide links to Web sites and access to content, products, and services from third parties. Attunity is not responsible for the availability of, or any content provided on, third-party Web sites. You bear all risks associated with the use of such content. If you choose to purchase any products or services from a third party, the relationship is directly between you and the third party. Attunity is not responsible for: (a) the quality of third-party products or services; or (b) fulfilling any of the terms of the agreement with the third party, including delivery of products or services and warranty obligations related to purchased products or services. Attunity is not responsible for any loss or damage of any sort that you may incur from dealing with any third party.

Contents
Send Us Your Comments ...... li

Preface ...... liii
Audience ...... liii
Organization ...... liv
Related Documentation ...... liv
Conventions ...... liv

What's New ...... lix


Continuous CDC ...... lix
Bulk Data Performance Improvements ...... lix
64-Bit Support for AIS Server on Windows ...... lix
64-Bit Thin ODBC Clients ...... lx
New ODBC DSN Setup Wizard ...... lx
Enhancements for the ADO.NET Client ...... lx
Update Index Statistics Improvements ...... lx
Better Support for Oracle Stored Procedures ...... lxi
Support for Exact Arithmetic ...... lxi
Improvements to Attunity Studio ...... lxi

Part I  Getting Started with AIS

1 Introducing the Attunity Integration Suite


AIS Overview ...... 1-1
AIS Use ...... 1-2
SQL Connectivity ...... 1-2
Application Connectivity ...... 1-2
Adapters ...... 1-3
Change Data Capture ...... 1-3
Attunity Integration Suite Supported Systems and Resources ...... 1-3
Operating Systems ...... 1-3
Data Sources and Adapters ...... 1-4
Interfaces ...... 1-4

2 Setting up Attunity Connect, Stream, and Federate


Overview ...... 2-1
Opening Attunity Studio ...... 2-2
Setting up Machines ...... 2-3
Using an Offline Design Machine to Create Attunity Definitions ...... 2-4
Administration Authorization ...... 2-5
License Management ...... 2-7
Registering a Product ...... 2-7
Viewing License Information ...... 2-8
Importing and Exporting XML in Attunity Studio ...... 2-9

3 Binding Configuration
Binding Configuration Overview ...... 3-1
Server Binding ...... 3-1
Client Binding ...... 3-2
Setting up Bindings in Attunity Studio ...... 3-2
Adding Bindings ...... 3-2
Editing Bindings ...... 3-3
Setting the Binding Environment in Attunity Studio ...... 3-3
Defining Remote Machines in a Binding ...... 3-4
Binding Syntax ...... 3-6
<remoteMachines> Statement ...... 3-7
<remoteMachine> Statement ...... 3-7
<adapters> Statement ...... 3-8
<adapter> Statement ...... 3-8
<config> Statement ...... 3-8
<datasources> Statement ...... 3-9
<datasource> Statement ...... 3-9
<config> Statement ...... 3-11
Sample Binding ...... 3-11
Environment Properties ...... 3-12
Debug ...... 3-13
General ...... 3-14
Language ...... 3-15
Modeling ...... 3-16
ODBC ...... 3-17
OLE DB ...... 3-17
Optimizer ...... 3-18
Parallel Processing ...... 3-20
Query Processor ...... 3-20
Temp Features ...... 3-22
Transaction ...... 3-23
Tuning ...... 3-23
XML ...... 3-24
Languages ...... 3-25
ARA (Arabic) ...... 3-25
ENG (English) ...... 3-26


FR (French) ...... 3-26
GER (German) ...... 3-26
GREEK (Greek) ...... 3-26
HEB (Hebrew) ...... 3-26
JPN (Japanese) ...... 3-26
KOR (Korean) ...... 3-26
SCHI (Simple Chinese) ...... 3-26
SPA (Spanish) ...... 3-27
TCHI (Traditional Chinese) ...... 3-27
TUR (Turkish) ...... 3-27
Sample Environment Properties ...... 3-27

4 Setting up Daemons
Daemons ...... 4-1
Defining Daemons at Design Time ...... 4-1
Adding a Daemon ...... 4-2
Editing a Daemon ...... 4-3
Control ...... 4-3
Logging ...... 4-5
Security ...... 4-8
Administering Selected User Only Lists ...... 4-9
Reloading Daemon Configurations at Runtime ...... 4-10
Editing Daemon Configurations ...... 4-10
Checking the Daemon Status ...... 4-10
Checking the Daemon Status with Attunity Studio ...... 4-10
Starting and Stopping Daemons ...... 4-11
Starting a Daemon in Attunity Studio ...... 4-11
Shutting Down a Daemon in Attunity Studio ...... 4-11
Sample Daemon Configuration ...... 4-11
Adding and Editing Workspaces ...... 4-12
Adding a Workspace ...... 4-12
Editing a Workspace ...... 4-15
General ...... 4-16
Server Mode ...... 4-19
Security ...... 4-22
Selecting a Binding Configuration ...... 4-24
Disabling a Workspace ...... 4-25
Setting Workspace Authorization ...... 4-25

5 Managing Metadata
Data Source Metadata Overview ...... 5-1
Importing Metadata ...... 5-2
Importing Metadata Using an Attunity Studio Import Wizard ...... 5-2
Importing Metadata Using a Standalone Utility ...... 5-3
Managing Metadata ...... 5-3
Using Attunity Metadata with AIS Supported Data Sources ...... 5-4


Extended Native Data Source Metadata ...... 5-4
Native Metadata Caching ...... 5-4
Procedure Metadata Overview ...... 5-5
Importing Procedure Metadata Using the Import Wizard ...... 5-5
Procedure Metadata Statements ...... 5-6
The <procedure> Statement ...... 5-6
Syntax ...... 5-6
<procedure> Attributes ...... 5-7
The <parameters> Statement ...... 5-8
Syntax ...... 5-8
The <dbCommand> Statement ...... 5-8
Syntax ...... 5-8
The <fields> Statement ...... 5-9
Syntax ...... 5-9
The <field> Statement ...... 5-9
Syntax ...... 5-9
<field> Attributes ...... 5-10
The <group> Statement ...... 5-10
Syntax ...... 5-10
<group> Attributes ...... 5-11
The <variant> Statement ...... 5-11
Variant without selector ...... 5-11
Variant with selector ...... 5-12
ADD Syntax ...... 5-13
Usage Notes ...... 5-13
Resolving Variants in Attunity Studio ...... 5-13
The <case> Statement ...... 5-14
Syntax ...... 5-14
<case> Attributes ...... 5-14
ADD Supported Data Types ...... 5-15
ADD Syntax ...... 5-23
The <table> Statement ...... 5-24
Syntax ...... 5-24
Table Attributes ...... 5-25
The <dbCommand> Statement ...... 5-29
Syntax ...... 5-29
Examples ...... 5-29
The <fields> Statement ...... 5-29
Syntax ...... 5-30
The <field> Statement ...... 5-30
Syntax ...... 5-30
Example ...... 5-30
Field Attributes ...... 5-30
The <group> Statement ...... 5-35
Syntax ...... 5-35
Example ...... 5-35
Group Attributes ...... 5-35


The <variant> Statement ...... 5-38
Variant without selector ...... 5-38
Variant with selector ...... 5-39
Usage Notes ...... 5-40
Resolving Variants in Attunity Studio ...... 5-40
Variant Attributes ...... 5-40
The <case> Statement ...... 5-41
Syntax ...... 5-41
Case Attributes ...... 5-41
The <keys> Statement ...... 5-42
Syntax ...... 5-42
Example ...... 5-43
The <key> Statement ...... 5-43
Syntax ...... 5-43
Key Attributes ...... 5-43
The <segments> Statement ...... 5-46
Syntax ...... 5-46
The <segment> Statement ...... 5-46
Syntax ...... 5-46
Segment Attributes ...... 5-46
The <foreignKeys> Statement ...... 5-47
Syntax ...... 5-47
The <foreignKey> Statement ...... 5-47
Syntax ...... 5-48
Example ...... 5-48
foreignKey Attributes ...... 5-48
The <primaryKey> Statement ...... 5-49
Syntax ...... 5-49
The <pKeySegments> Statement ...... 5-49
Syntax ...... 5-49
Example ...... 5-49
pKeySegment Attributes ...... 5-50

6 Working with Metadata in Attunity Studio


Overview ...... 6-1
Managing Data Source Metadata ...... 6-1
General Tab ...... 6-2
Columns Tab ...... 6-4
Column Definition Section ...... 6-5
Column Properties ...... 6-6
Indexes Tab ...... 6-8
Table Information ...... 6-8
Properties ...... 6-9
Statistics Tab ...... 6-9
Table ...... 6-10
Columns ...... 6-10
Indexes ...... 6-11


Update Button ...... 6-11
Modelling Tab ...... 6-12
Importing Data Source Metadata with the Attunity Import Wizard ...... 6-14
Starting the Import Process ...... 6-14
Selecting the Input Files ...... 6-15
Applying Filters ...... 6-17
Selecting Tables ...... 6-19
Import Manipulation ...... 6-19
Import Manipulation Screen ...... 6-20
Field Manipulation Screen ...... 6-21
Metadata Model Selection ...... 6-27
Import the Metadata ...... 6-29
Working with Application Adapter Metadata ...... 6-30
Adapter Metadata General Properties ...... 6-31
Adapter Metadata Schema Records ...... 6-32
Editing an Existing Schema Definition ...... 6-34
Adapter Metadata Interactions ...... 6-34
Editing an Existing Interaction ...... 6-35
Interaction Advanced Tab ...... 6-36
Working with Procedure Metadata ...... 6-36
Manually Creating Procedure Metadata ...... 6-37
Managing Procedure Metadata ...... 6-37
Importing Procedure Metadata ...... 6-37

7 Handling Arrays
Overview of Handling Arrays ...... 7-1
Representing Metadata ...... 7-1
Methods of Handling Arrays ...... 7-5
Columnwise Normalization ...... 7-5
Virtual Tables ...... 7-7
Virtual Views ...... 7-10
Sequential Flattening (Bulk Load of Array Data) ...... 7-11
ADO/OLE DB Chapters ...... 7-14
Chapter Handling in Query and Database Adapters ...... 7-16
XML ...... 7-17

8 Using SQL
Overview of Using SQL ...... 8-1
Batching SQL Statements ...... 8-1
Hierarchical Queries ...... 8-2
Generating Hierarchical Results Using SQL ...... 8-3
Accessing Hierarchical Data Using SQL ...... 8-4
Examples ...... 8-4
Flattening Hierarchical Data Using SQL ...... 8-5
Using an Alias ...... 8-6
Examples ...... 8-6
Using Virtual Tables to Represent Hierarchical Data ...... 8-9

Creating Virtual Tables ...... 8-10
Hierarchical Queries From an Application ...... 8-10
Drill-down Operations in an ADO Application ...... 8-11
Drill-down Operations in an ODBC Application ...... 8-11
ODBC Drill-down Operations Using RDO ...... 8-12
ODBC Drill-down Operations Using C ...... 8-14
Drill-down Operations in a Java Application ...... 8-16
Copying Data From One Table to Another ...... 8-17
Passthru SQL ...... 8-17
For a Specific SQL Statement ...... 8-18
Via ADO ...... 8-18
Via RDO and DAO ...... 8-19
For all SQL During a Session ...... 8-20
Via ADO/OLE DB ...... 8-20
Via ODBC ...... 8-20
Passthru Queries as Part of an SQL Statement ...... 8-21
Writing Queries Using SQL ...... 8-22
Writing Efficient SQL ...... 8-22
Locking Considerations ...... 8-23
Locking Modes ...... 8-23
Optimistic Locking ...... 8-23
Pessimistic Locking ...... 8-23
No Locking ...... 8-23
ODBC Locking Considerations ...... 8-23
ADO Locking Considerations ...... 8-24
Managing the Execution of Queries over Large Tables ...... 8-24
Optimizing Outer Joins ...... 8-25
Limitations ...... 8-26
Query Optimization ...... 8-26
Property ...... 8-26
Changing the Property ...... 8-26

9 Working with Web Services


Web Services Overview ...... 9-1
Preparing to use Web Services ...... 9-1
Web Services Prerequisites ...... 9-1
Setting up Attunity Studio to Work with Web Services ...... 9-1
Deploying an Adapter as a Web Service ...... 9-2
Connection Information for the Axis Servlet ...... 9-3
Define a new Web Service for an Adapter ...... 9-3
The General Tab ...... 9-5
The Pooling Tab ...... 9-5
The Map Tab ...... 9-6
Select the Interactions ...... 9-7
Summary Window ...... 9-8
Undeploying Web Services ...... 9-9
Viewing Web Services ...... 9-9


Logging Web Service Activities ...... 9-10
Changing the Log File Location ...... 9-10
Changing the Error Message Level ...... 9-10
Changing the Error Message Format ...... 9-11

Part II  Attunity Connect

10 Introduction to Attunity Connect


Overview of Attunity Connect ...... 10-1
Logical Architecture ...... 10-1
System Components and Concepts ...... 10-1
Data Engine ...... 10-2
Query Optimizer ...... 10-2
Data Sources ...... 10-3
Interfaces and APIs ...... 10-4
Transaction Support ...... 10-4
Application Engine ...... 10-5
Application Adapters ...... 10-5
Interfaces and APIs ...... 10-5
Events ...... 10-6
Attunity Server ...... 10-6
Attunity Studio ...... 10-6
Design Time ...... 10-6
Runtime ...... 10-6
Metadata Repository ...... 10-7
System Repository ...... 10-7
Data Source Repositories ...... 10-7
Attunity Configuration Model ...... 10-7
Daemons ...... 10-7
Client Communication Software ...... 10-8
Server Communication Software ...... 10-8

11 Attunity Integration Suite Architecture Flows


Overview ...... 11-1
Data Source Architecture ...... 11-1
Data Source ...... 11-2
Query Engine Flow ...... 11-3
Making a Request between Two Relational Databases ...... 11-4
Making a Request between Two Non-Relational Databases ...... 11-5
Making a Request between a Relational Data Source and a Non-Relational Data Source ...... 11-5
Application Adapter ...... 11-6
Change Data Capture (CDC) Flow ...... 11-7
Database and Query Adapter ...... 11-8


12 Implementing a Data Access Solution


Overview ...... 12-1
Setting Up AIS for Data Access ...... 12-2
Installing AIS ...... 12-2
Install AIS on the Backend ...... 12-2
Install the Attunity Server Software ...... 12-2
Install Attunity Studio ...... 12-2
Configuring the System for Data Access (Using Studio) ...... 12-3
Configure the Machines ...... 12-3
Configure User Profiles ...... 12-3
Configure the Binding ...... 12-3
Configure the Data Sources in the Binding ...... 12-3
Set Up the Data Source Metadata ...... 12-3
Supported Interfaces ...... 12-4
Data Access Flow ...... 12-5
Data Source Metadata ...... 12-5
Configuring the System for Data Access Using XML ...... 12-6
Set up the Environment in Attunity Studio ...... 12-6
Expose the XML Field ...... 12-6
Set up the Data Source and Import the Metadata ...... 12-7
Prepare an Input XML Structure ...... 12-7
Setting up the Query and Database Adapter for XML Operations ...... 12-7
Using the Query Adapter for XML Operations ...... 12-7
The Input XML Record ...... 12-8

13 Setting up Data Sources and Events with Attunity Studio


Data Sources ...... 13-1
Adding Data Sources ...... 13-1
Configuring Data Source Advanced Properties ...... 13-2
Testing a Data Source ...... 13-4
Creating a Data Source Shortcut (Optional) ...... 13-4
Testing Data Source Shortcuts (Optional) ...... 13-7
Events ...... 13-8
Adding Event Queues ...... 13-8
Defining Metadata for Event Queues ...... 13-9

14 Procedure Data Sources


Procedure Data Sources Overview ...... 14-1
Configuring the Procedure Data Source ...... 14-1
Adding Procedure Data Sources ...... 14-2
Configuring Data Source Advanced Properties ...... 14-3
Defining a Shortcut to a Procedure on Another Machine ...... 14-5
Defining the Procedure Metadata ...... 14-6


15 Implementing an Application Access Solution


Overview ...... 15-1
Setting up AIS for Application Access ...... 15-2
Installing System Components ...... 15-2
Configuring the System for Application Access (Using Studio) ...... 15-3
Supported APIs ...... 15-3
Application Access Flow ...... 15-4
Defining the Application Adapter ...... 15-4
ACX Protocol ...... 15-5
Transaction Support ...... 15-6
Generic and Custom Adapters ...... 15-6
Developing an Application Adapter in AIS ...... 15-7
MyApp: An Application Adapter Example ...... 15-7

16 Setting Up Adapters

Setting up Adapters Overview ........ 16-1
Working with Adapters ........ 16-1
Adding Application Adapters ........ 16-1
Configuring Application Adapters ........ 16-2
Testing Application Adapters ........ 16-3

17 Application Adapter Definition


Overview ........ 17-1
The adapter Element ........ 17-2
The interaction Element ........ 17-3
The schema Element ........ 17-4
The enumeration Element ........ 17-5
The record Element ........ 17-5
Defining Hierarchies ........ 17-6
The variant record Element ........ 17-6
Variant without Selector ........ 17-6
Variant with Selector ........ 17-7
The field Element ........ 17-7

Part III Attunity Stream

18 What is the Attunity Stream CDC Solution


CDC Solution Overview ........ 18-1
The Attunity Stream CDC Architecture ........ 18-2
The Staging Area ........ 18-4
Handling Before and After Images ........ 18-4
Tracking Changes - Auditing ........ 18-5
Security Considerations ........ 18-5
What Can Be Captured? ........ 18-5


19 Implementing a Change Data Capture Solution


Overview ........ 19-1
Setting up AIS to Create a Change Data Capture ........ 19-2
Installing System Components ........ 19-2
Configuring the System for Change Data Captures (Using Studio) ........ 19-2
Generated CDC Components ........ 19-3
Handling Arrays Defined in the Source Data ........ 19-3
CDC System Architecture ........ 19-4
CDC Adapter Definition ........ 19-5
CDC Agent Metadata Definition Description ........ 19-6
Interactions ........ 19-6
Schema ........ 19-6
CDC Streams ........ 19-6
Transaction Support ........ 19-7
Troubleshooting ........ 19-8

20 SQL-Based CDC Methodologies


Overview ........ 20-1
Components ........ 20-2
CDC Agent ........ 20-2
SQL-based Change Router ........ 20-3
Staging Area Server ........ 20-3
Configuration Parameters ........ 20-3
SQL Access to Change Events ........ 20-4
Change Tables ........ 20-5
The STREAM_POSITION Table ........ 20-7
Reading the Change Tables ........ 20-7
Reading Change Tables Continuously ........ 20-8
Referential Integrity Considerations ........ 20-11
Monitoring the Change Data Capture ........ 20-13
Service Context Table ........ 20-13
Monitoring the Status ........ 20-14
Error Handling ........ 20-16
Determining the Change Router Error Behavior ........ 20-16
CONTROL_TABLE ........ 20-16
subscribeAgentLog Configuration Properties ........ 20-17
Performance Considerations ........ 20-17
Memory Parameters ........ 20-17
Latency ........ 20-18
Capacity Planning ........ 20-18
Storage ........ 20-18
Memory ........ 20-19
Network ........ 20-19
Processing ........ 20-20
Applying Metadata Changes ........ 20-20
Design-time Metadata Change Procedure ........ 20-21


Production Metadata Change Procedure ........ 20-21
Migration from XML-based CDC ........ 20-21

21 Creating a CDC with the Solution Perspective


Using the Solution Perspective ........ 21-1
Using Views in the Solution Perspective ........ 21-1
Getting Started Guide ........ 21-2
Creating a New Project ........ 21-2
Opening an Existing Project ........ 21-4
Opening Recent Projects ........ 21-5
Project Guide ........ 21-5
Design Wizard ........ 21-5
Implementation Guide ........ 21-8
Machine ........ 21-9
Data Source ........ 21-10
Metadata ........ 21-10
CDC Service ........ 21-19
Access Service Manager ........ 21-20
Stream Service ........ 21-23
Deployment Guide ........ 21-30
Troubleshooting ........ 21-35

Part IV Attunity Federate

22 What is Attunity Federate


Overview ........ 22-1
The Data Engine ........ 22-1
Query Optimizer ........ 22-2
Performance Tuning Tools ........ 22-2
Front End APIs ........ 22-2
Data Source Drivers ........ 22-3
Base Services ........ 22-4
Attunity Integration Suite ........ 22-4
AIS Installation Wizards ........ 22-4
Attunity Internal Storage (The Repository) ........ 22-4
The System Repository ........ 22-4
Data Source Repository ........ 22-4
Attunity Studio: Configuration and Management GUI ........ 22-5
Client/Server Communication and the Attunity Daemon ........ 22-5
Daemons ........ 22-5
Client Communication Software ........ 22-5
Server Communication Software ........ 22-6

23 Using a Virtual Database


Virtual Database Overview ........ 23-1
Defining a Virtual Data Base ........ 23-2


Metadata Considerations ........ 23-3
Defining Tables ........ 23-3
Creating Synonyms ........ 23-3
Defining Stored Procedures ........ 23-4
Creating Views ........ 23-6
Using a Virtual Database ........ 23-8

24 Segmented Data Sources


Overview ........ 24-1
Creating Segmented Data Sources ........ 24-1
Adding a Data Source to a Binding ........ 24-1
Creating a Data Source Shortcut ........ 24-2
Adding a Segmented Data Source to a Binding ........ 24-3
Environmental Properties for Segmented Data Sources ........ 24-4
Using a Segmented Data Source ........ 24-4

Part V Attunity Studio

25 Working with the Attunity Studio Workbench


Workbench Overview ........ 25-1
Using Workbench Parts ........ 25-2
Welcome Screen ........ 25-2
Main Screen ........ 25-3
Main Menu Bar ........ 25-4
Working with Perspectives ........ 25-4
Solution Perspective ........ 25-4
Design Perspective ........ 25-4
Runtime Manager Perspective ........ 25-5
Selecting a Perspective ........ 25-5
Working with Views ........ 25-5
Customizing Views ........ 25-6
Getting Started View ........ 25-6
Error Log View ........ 25-7
Configuration View ........ 25-8
Metadata View ........ 25-8
Workbench Icons ........ 25-9
General ........ 25-9
Actions ........ 25-10
Objects ........ 25-12
Manipulation ........ 25-15
Setting Attunity Studio Preferences ........ 25-16
Studio ........ 25-16
Configuration ........ 25-18
Metadata ........ 25-19
Runtime Manager ........ 25-20
Keys ........ 25-21


Default Keyboard Shortcuts ................................................................................................. 25-23

Part VI Operation and Maintenance

26 AIS Runtime Tasks from the Command Line


Overview ........ 26-1
Starting and Stopping Daemons ........ 26-1
Starting a Daemon ........ 26-1
Enabling Automatic Startup ........ 26-1
Manually Starting a Daemon on HP NonStop, OpenVMS, OS/400, UNIX, and Windows Platforms ........ 26-2
Manually Starting a Daemon on z/OS Systems ........ 26-4
Starting Multiple Daemons ........ 26-5
Stopping a Daemon ........ 26-5
Shutting Down a Daemon Using Attunity Studio ........ 26-5
Shutting Down a Daemon Using the Command Line ........ 26-6
Disabling a Workspace ........ 26-7
Checking the Daemon ........ 26-7
Managing Daemon Configurations ........ 26-8
Daemon Configuration Groups ........ 26-8
Adding and Editing Daemon Configurations ........ 26-9
Adding and Editing Workspaces ........ 26-9
Configuring Logging ........ 26-9

27 Runtime Management with Attunity Studio


Overview ........ 27-1
Runtime Explorer View ........ 27-1
Runtime Explorer Tasks ........ 27-2
Adding a Daemon ........ 27-2
Daemon Tasks ........ 27-4
Workspace Tasks ........ 27-5
Server Tasks ........ 27-5
Viewing Logs ........ 27-5
Working with the Event Monitor ........ 27-7
Viewing Events ........ 27-8
Daemon Properties ........ 27-10
Error Log View ........ 27-12
Displaying the Error Log View ........ 27-12
Error Log View Tasks ........ 27-12
Clearing the Error Log ........ 27-13
Deleting the Error Log ........ 27-13
Opening a Log File ........ 27-13
Restoring the Error Log ........ 27-14
Exporting Errors to a Log File ........ 27-14
Importing a Log File ........ 27-14
Viewing the Event Details ........ 27-14


28 Managing Security

Overview of Attunity Security ........ 28-1
Managing Design Time Security ........ 28-1
Local Access to AIS Design-Time Resources ........ 28-1
Remote Access to AIS Design-Time Resources ........ 28-2
Design Roles ........ 28-2
Assigning Design Roles ........ 28-2
Password Handling in Attunity Studio ........ 28-3
Setting Up the Password Caching Policy in Attunity Studio ........ 28-3
Assigning Authorization Rights to a Workspace ........ 28-4
Setting Up a Master Password for a User ........ 28-4
Managing Runtime Security ........ 28-5
User Profiles ........ 28-6
Setting a Master Password for a User Profile ........ 28-6
Using a Client User Password ........ 28-7
Using a Server User Profile ........ 28-7
Managing a User Profile in Attunity Studio ........ 28-9
Adding a User ........ 28-9
Add Authenticators ........ 28-11
Add Encryption Keys ........ 28-12
Editing a User Profile ........ 28-13
Remove an Authenticator or Encryption key ........ 28-14
Client Authentication ........ 28-14
Client Authentication for Thin Clients ........ 28-15
Client Authentication for Fat Clients ........ 28-15
Client Authorization and Access Restriction ........ 28-15
Restricting Access to a User by Login User Name ........ 28-15
Restricting Access to Data with a Virtual Database ........ 28-17
Transport Encryption ........ 28-17
Encrypting Network Communications ........ 28-18
Setting a Client Encryption Protocol ........ 28-18
Configuring Encrypted Communication ........ 28-19
Configuring the Encryption Key on the Server Machine ........ 28-20
Firewall Support ........ 28-22
Accessing a Server through a Firewall ........ 28-23
Selecting a Port Range for Workspace Servers ........ 28-24
Accessing a Server Using Fixed NAT ........ 28-24
Dynamic Credentials ........ 28-25
Providing Credentials in the Connection String ........ 28-25
Interactively Prompting for Credentials ........ 28-26
Setting Up Impersonation ........ 28-26
Setting Up Impersonation for DB2 ........ 28-28
Granting Daemon Administration Rights to Users ........ 28-28
Granting Workspace Administration Rights to Users ........ 28-30


29 Backing Up AIS

Overview of the AIS Backup Process ........ 29-1
Backing Up and Restoring AIS Server Installation ........ 29-1
Backing Up and Restoring AIS Server Metadata ........ 29-2
Backing Up and Restoring AIS Server Scripts ........ 29-3
Backing Up and Restoring AIS Server Data ........ 29-3
Backing Up and Restoring AIS Studio Metadata ........ 29-3

30 Transaction Support

Overview ........ 30-1
Using Attunity Connect as a Stand-alone Transaction Coordinator ........ 30-2
Attunity Connect Data Source Driver Capabilities ........ 30-3
Data Sources That Do Not Support Transactions ........ 30-3
Data Sources with One-Phase Commit Capability ........ 30-3
Data Sources with Two-Phase Commit Capability ........ 30-4
Relational Database Procedures ........ 30-4
Distributed Transactions ........ 30-4
Transaction Log File ........ 30-5
CommitConfirm Table ........ 30-6
Recovery ........ 30-7
Recovery Utility Toolbar ........ 30-9
Platform Specific Information ........ 30-9

31 Troubleshooting in AIS

Troubleshooting Overview ........ 31-1
Product Flow Maps ........ 31-1
Local Data Access Scenario ........ 31-2
Remote Data Access Scenario ........ 31-3
Using the Product Flow Maps for Troubleshooting ........ 31-5
SQL Application/SQL API Issues (A1, B1) ........ 31-5
ADO/OLEDB ........ 31-5
JDBC ........ 31-6
SQL API Issues/Query Processor Issues (A2, B2F) ........ 31-6
Troubleshooting Methods ........ 31-6
Using the NAV_UTIL CHECK SERVER Utility ........ 31-6
Using the NAV_UTIL CHECK DATASOURCE Utility ........ 31-7
Using Trace Log Files ........ 31-8
Log Traces ........ 31-8
Using Extended Logging Options ........ 31-9
New Log Entries ........ 31-9
Identifying Nodes ........ 31-10
Configuring the Optimizer Trace File ........ 31-10
Reading the Log ........ 31-11
Configuring the Extended Logging Option ........ 31-11
Configuring Advanced Environment Parameters ........ 31-12
Common Errors and Solutions ........ 31-13


Part VII Utilities

32 Using Attunity SQL Utility


Overview ........ 32-1
Using the SQL Utility ........ 32-1
Connecting to an Attunity Server via the SQL Utility ........ 32-2
Specifying and Executing Queries ........ 32-2
Working with Parameterized Queries ........ 32-3
Modifying Data in a Recordset ........ 32-4
Working with Chapters ........ 32-4
Modifying Chapters ........ 32-5
Specifying Recordset Properties ........ 32-5
Working with Schemas ........ 32-7

33 Using the Attunity XML Utility


Overview ........ 33-1
Connecting with XML ........ 33-2
Connect Properties ........ 33-2
Connect Commands ........ 33-3
Metadata Browser ........ 33-4
Events Listener ........ 33-5
Set Encryption ........ 33-6
Executing with XML ........ 33-6
Creating an XML Request ........ 33-7
XML Input ........ 33-7
XML Output ........ 33-8

34 Attunity Query Tool


Query Tool Overview ........ 34-1
Getting Started with the Query Tool ........ 34-1
Where You Can Execute Queries ........ 34-2
Opening the Query Tool ........ 34-2
The Query Builder Interface ........ 34-3
Using the Query Builder ........ 34-4
The Tables Tab ........ 34-5
The Columns Tab ........ 34-6
The Where Tab ........ 34-7
The Group Tab ........ 34-9
The Having Tab ........ 34-10
The Sort Tab ........ 34-12
Creating a Query Manually ........ 34-13
Managing Queries ........ 34-14
Using the Query Tool with Transactions ........ 34-16


35 SQL Explain Utility


SQL Explain Utility Overview ........ 35-1
Activating the SQL Explain Utility ........ 35-1
Using the Explain Command in NAV_UTIL ........ 35-1
Using the prepareSQL Interaction in the Query Adapter ........ 35-2
Automatically Generate the File Using the analyzerQueryPlan Property ........ 35-3
XML Output ........ 35-4
Tree Node Verbs ........ 35-4
Other Verb Types ........ 35-5

36 Using Attunity Query Analyzer Utility


Overview ........ 36-1
Using the Query Analyzer ........ 36-1
Viewing the Execution Plan for an SQL Query ........ 36-2
Generating a Plan for Every SQL Statement ........ 36-2
The SQL Statement Plan ........ 36-2
The Query Analyzer Toolbar ........ 36-3
The Query Analyzer Icons ........ 36-4
Working with an Optimization Plan ........ 36-5

37 Using NAV_UTIL Utility


Overview ........ 37-1
Using the NAV_UTIL Command Line Utility ........ 37-1
Running NAV_UTIL ........ 37-2
Basic NAV_UTIL Syntax ........ 37-2
Activating NAV_UTIL ........ 37-2
Running NAV_UTIL from a Shell Environment ........ 37-3
Running NAV_UTIL on a Java Machine ........ 37-4
ADDON ........ 37-4
ADD_ADMIN ........ 37-5
AUTOGEN ........ 37-5
CHECK ........ 37-5
check irpcd ........ 37-6
check network [port] ........ 37-6
check irpcdstat ........ 37-6
check tcpip ........ 37-7
check server ........ 37-7
check license ........ 37-8
check datasource ........ 37-8
CODEPAGE ........ 37-8
DELETE ........ 37-9
Deleting Data Source Objects ........ 37-9
EDIT ........ 37-10
EXECUTE ........ 37-13
EXECUTE Overview ........ 37-13
NavSQL Environment ........ 37-13


Executing SQL Statements ........ 37-14
NavSQL Commands ........ 37-15
EXPORT ........ 37-16
GEN_ARRAY_TABLES ........ 37-19
IMPORT ........ 37-20
IRPCDCMD ........ 37-21
LOCAL_COPY ........ 37-21
PASSWORD ........ 37-22
PROTOGEN ........ 37-22
REGISTER ........ 37-22
SERVICE ........ 37-22
SVC ........ 37-23
TEST ........ 37-23
UPDATE ........ 37-23
Removing Metadata Statistics ........ 37-23
UPD_DS ........ 37-25
UPD_SEC ........ 37-25
VERSION ........ 37-26
VERSION_HISTORY ........ 37-26
VIEW ........ 37-26
XML ........ 37-28
XML Samples ........ 37-29

38 Using Attunity BASIC Import Utility


Overview ........ 38-1
Using the Basic Import Utility ........ 38-1

Part VIII Data Source Reference

39 Adabas C Data Source


Overview ........ 39-1
Supported Versions and Platforms ........ 39-2
Supported Features ........ 39-2
Limitations ........ 39-2
Functionality ........ 39-2
Optimizing Adabas Queries ........ 39-3
Statistical Information ........ 39-3
Subdescriptors and Superdescriptors with Subfields ........ 39-3
SQL That Will Not Use the S1 Descriptor ........ 39-4
SQL That Will Use the S1 Descriptor ........ 39-4
Phonetic-descriptors and Hyper-descriptors ........ 39-4
Descriptors on MU and PE fields ........ 39-5
Null Suppression Handling ........ 39-5
Array Handling ........ 39-6
Logical Tables ........ 39-7
Locking Support ........ 39-8


Transaction Support ........ 39-8
Security ........ 39-8
Data Types ........ 39-8
Configuration Properties ........ 39-10
Platform-specific Information ........ 39-12
UNIX Platforms ........ 39-12
Verifying Environment Variables ........ 39-12
Relinking to the Adabas Driver on UNIX Platforms ........ 39-13
Accessing 64 Bit Adabas ........ 39-13
z/OS Platforms ........ 39-13
Specifying the Adabas SVC ........ 39-13
Configuring AIS to Run in multiClient Mode ........ 39-13
Defining an Adabas Data Source ........ 39-17
Defining the Adabas Data Source Connection ........ 39-17
Configuring the Adabas Data Source Properties ........ 39-18
Setting Up Adabas Data Source Metadata (Using the Import Manager) ........ 39-19
Selecting the DDM Declaration files ........ 39-19
Applying Filters ........ 39-22
Selecting Tables ........ 39-23
Import Manipulation ........ 39-23
Import Manipulation Screen ........ 39-24
Field Manipulation Screen ........ 39-25
Metadata Model Selection ........ 39-31
Import the Metadata ........ 39-33
Setting Up Adabas Data Source Metadata (Traditional Method) ........ 39-34
Importing Attunity Metadata from DDM Files ........ 39-34
Exporting Predict Metadata into Adabas ADD ........ 39-35
Testing the Adabas Data Source ........ 39-35

40

DB2 Data Source


Overview .......... 40-1
Supported Versions and Platforms .......... 40-1
Functionality .......... 40-1
Stored Procedures .......... 40-1
Isolation Levels and Locking .......... 40-2
Update Semantics .......... 40-2
Configuration Properties .......... 40-2
Metadata .......... 40-4
Transaction Support .......... 40-4
z/OS .......... 40-5
OS/400 .......... 40-6
UNIX and Windows .......... 40-6
Configuring Transaction Support .......... 40-6
Configuring the Shared Library Environment Variable .......... 40-7
Security .......... 40-7
DB2 Data Types .......... 40-7
Defining a DB2 Data Source .......... 40-8
z/OS .......... 40-8
Defining an ODBCINI file .......... 40-8
Defining the Data Source Connection .......... 40-9
Configuring the Data Source .......... 40-10
OS/400 .......... 40-11
Defining the Data Source Connection .......... 40-11
Configuring the Data Source .......... 40-11
UNIX and Windows .......... 40-13
Defining the Data Source Connection .......... 40-13
Configuring the Data Source .......... 40-13

41

CISAM/DISAM Data Source


Overview .......... 41-1
Supported Features .......... 41-1
Limitations .......... 41-1
Configuration Properties .......... 41-1
Transaction Support .......... 41-2
Data Types .......... 41-2
Defining a CISAM/DISAM Data Source .......... 41-3
Defining the CISAM/DISAM Data Source Connection .......... 41-3
Configuring the CISAM/DISAM Data Source .......... 41-4
Setting Up the CISAM/DISAM Data Source Metadata .......... 41-5

42

DBMS Data Source (OpenVMS Only)


Overview .......... 42-1
Prerequisites .......... 42-2
Functionality .......... 42-2
Locking .......... 42-2
Update Semantics .......... 42-2
Configuration Properties .......... 42-3
Data Types .......... 42-3
Transaction Support .......... 42-4
Platform-specific Information .......... 42-4
Database Model Mapping Requirements .......... 42-5
Virtual Columns .......... 42-5
Using Virtual Columns .......... 42-5
Virtual Columns and Indexes .......... 42-12
Virtual Column Categories .......... 42-14
Accessing DBMS Data .......... 42-14
DBMS Error Codes .......... 42-15
Defining the DBMS Data Source .......... 42-21
Defining the DBMS Data Source Connection .......... 42-21
Configuring the DBMS Data Source Properties .......... 42-22
Setting Up the DBMS Data Source Metadata .......... 42-23


43

Enscribe Data Source (HP NonStop Only)


Overview .......... 43-1
Functionality .......... 43-1
Supported Versions and Platforms .......... 43-2
Configuration Properties .......... 43-2
Metadata .......... 43-3
Transaction Support .......... 43-3
Security .......... 43-4
Enscribe Data Types .......... 43-4
Defining the Enscribe Data Source .......... 43-4
Defining the Enscribe Data Source Connection .......... 43-4
Configuring the Enscribe Data Source .......... 43-5
Setting up the Enscribe Data Source Metadata .......... 43-6
Importing Metadata from COBOL .......... 43-6
Starting the Import Process .......... 43-7
Importing Metadata Using the ADDIMP Utility .......... 43-8
Importing Metadata Using the TALIMP Utility .......... 43-10
Maintaining Metadata .......... 43-13
Testing the Enscribe Data Source .......... 43-13
Sample Log File Explained .......... 43-15

44

Flat File Data Source


Configuration Properties .......... 44-1
Defining a Flat File Data Source .......... 44-1
Defining the Flat File Data Source Connection .......... 44-1
Configuring the Flat File Data Source .......... 44-2
Setting Up the Flat File Data Source Metadata .......... 44-3
Importing Attunity Metadata from COBOL .......... 44-3
Maintaining Attunity Metadata .......... 44-11

45

IMS/DB Data Sources


Overview .......... 45-1
Supported Versions and Platforms .......... 45-1
Supported Features .......... 45-2
Environmental Prerequisites .......... 45-2
IMS-DLI Prerequisites .......... 45-2
IMS-DBCTL Prerequisites .......... 45-2
IMS-DBDC Prerequisites .......... 45-2
Limitations .......... 45-3
General IMS Limitations .......... 45-3
Limitations Specific to IMS/DLI .......... 45-4
Limitations Specific to IMS-DBCTL .......... 45-4
Limitations Specific to IMS/DBDC .......... 45-4
Functionality .......... 45-4
Hierarchical Modelling .......... 45-4
Constructing DLI Commands from SQL Requests .......... 45-5
Selecting a PCB .......... 45-6
DLI Samples .......... 45-6
Configuration Properties .......... 45-7
IMS/DB DLI Configuration Properties .......... 45-7
IMS/DB DBCTL Configuration Properties .......... 45-7
IMS/DB DBDC Configuration Properties .......... 45-8
Configuring Advanced Data Source Properties .......... 45-8
Transaction Support .......... 45-8
Using Attunity Connect with One-phase Commit .......... 45-9
Hospital Database Example .......... 45-9
Defining the IMS/DB DLI Data Source .......... 45-11
Defining the IMS/DB DLI Data Source Connection .......... 45-11
Configuring the IMS/DB DLI Data Source .......... 45-12
Setting Up the Daemon Workspace .......... 45-12
Defining the IMS/DB DBCTL Data Source .......... 45-13
Defining the IMS/DB DBCTL Data Source Connection .......... 45-13
Configuring the IMS/DB DBCTL Data Source .......... 45-14
Accessing IMS/DB Data under CICS .......... 45-15
Defining the IMS/DB DBDC Data Source .......... 45-16
Defining the IMS/DB DBDC Data Source Connection .......... 45-16
Configuring the IMS/DB DBDC Data Source .......... 45-16
Accessing IMS/DB Data under IMS/TM .......... 45-17
Setting Up IMS/DB Metadata .......... 45-18
Selecting the Input Files .......... 45-19
Applying Filters .......... 45-20
Selecting Tables .......... 45-22
Matching DBD to COBOL .......... 45-22
Import Manipulation .......... 45-23
Import Manipulation Screen .......... 45-23
Field Manipulation Screen .......... 45-25
Metadata Model Selection .......... 45-31
Import the Metadata .......... 45-33

46

Informix Data Source


Overview .......... 46-1
Functionality .......... 46-1
Supported Versions and Platforms .......... 46-2
SQL Capability .......... 46-2
Stored Procedures .......... 46-4
Limitations .......... 46-4
Informix CLOB/BLOBs .......... 46-5
Using Passthru Queries .......... 46-5
Configuration Properties .......... 46-5
Metadata .......... 46-5
Statistics .......... 46-6
Owner Support .......... 46-6
Transaction Support .......... 46-6
Locking Levels .......... 46-6
Isolation Levels .......... 46-7
Security .......... 46-8
Data Types .......... 46-8
Defining an Informix Data Source .......... 46-9
Defining the Informix Data Source Connection .......... 46-9
Configuring the Informix Data Source Properties .......... 46-10
Testing the Informix Data Source .......... 46-11
Sample Log File Explained .......... 46-13

47

Ingres II (Open Ingres) Data Source


Supported Versions and Platforms .......... 47-1
Functionality .......... 47-1
Stored Procedures .......... 47-1
Isolation Levels and Locking .......... 47-1
BLOBs .......... 47-2
Passthru Queries .......... 47-2
Configuration Properties .......... 47-3
Transaction Support .......... 47-4
Data Types .......... 47-5
Platform-Specific Information .......... 47-6
Defining the Ingres II Data Source .......... 47-6
Defining the Ingres II Data Source Connection .......... 47-6
Configuring the Ingres II Data Source Properties .......... 47-7

48

ODBC Data Source


Overview .......... 48-1
Supported Versions and Platforms .......... 48-1
Supported Features .......... 48-1
Functionality .......... 48-2
Stored Procedures .......... 48-2
Isolation Levels .......... 48-2
SQL Capabilities .......... 48-2
Configuration Properties .......... 48-3
Metadata .......... 48-4
Transaction Support .......... 48-4
Security .......... 48-4
Data Types .......... 48-4
Platform-Specific Information .......... 48-6
Defining the ODBC Data Source .......... 48-6
Defining the ODBC Data Source Connection on a Windows Platform .......... 48-6
Defining the ODBC Data Source Connection on a non-Windows Platform .......... 48-7
Configuring the ODBC Data Source .......... 48-8
Testing the ODBC Data Source .......... 48-9
Logging .......... 48-11


49

OLEDB-FS (Flat File System) Data Source


Overview .......... 49-1
Supported Versions and Platforms .......... 49-1
Data Provider Requirements .......... 49-1
Functionality .......... 49-2
Isolation Levels .......... 49-2
Transaction Support .......... 49-3
Data Types .......... 49-3
Configuration Properties .......... 49-3
Defining the Data Source .......... 49-4
Defining the OLEDB-FS Data Source Connection .......... 49-4
Configuring the OLEDB-FS Data Source Properties .......... 49-4

50

OLEDB-SQL (Relational) Data Source


Overview .......... 50-1
Supported Versions and Platforms .......... 50-1
Data Provider Requirements .......... 50-1
Functionality .......... 50-2
Isolation Levels .......... 50-3
Stored Procedures .......... 50-3
Transaction Support .......... 50-3
Data Types .......... 50-3
Configuration Properties .......... 50-4
Defining the Data Source .......... 50-4
Defining the OLEDB-SQL Data Source Connection .......... 50-4
Configuring the OLEDB-SQL Data Source Properties .......... 50-5

51

Oracle Data Source


Overview .......... 51-1
Supported Versions and Platforms .......... 51-1
Supported Features .......... 51-1
Functionality .......... 51-2
Stored Procedures .......... 51-2
Isolation Levels and Locking .......... 51-2
Consistency .......... 51-3
Attunity Connect Treatment of Locking .......... 51-3
Attunity Connect Treatment of Isolation Levels .......... 51-4
BLOBs .......... 51-4
Passthru Queries .......... 51-4
SQL Capabilities .......... 51-6
Using Oracle Hints in the SQL .......... 51-7
Attunity Hints .......... 51-7
Oracle Hints .......... 51-8
Configuration Properties .......... 51-9
Metadata .......... 51-10
Transaction Support .......... 51-11
Security .......... 51-11
Data Types .......... 51-11
Platform-Specific Information .......... 51-13
UNIX Platforms .......... 51-13
Verifying Environment Variables on UNIX Platforms .......... 51-13
Linking to Oracle Libraries on UNIX Platforms .......... 51-13
OpenVMS Platform .......... 51-13
Verifying Environment Variables on OpenVMS Platforms .......... 51-13
Linking to Oracle Libraries on OpenVMS Platforms .......... 51-14
Defining the Oracle Data Source .......... 51-14
Defining the Oracle Data Source Connection .......... 51-14
Configuring the Oracle Data Source Properties .......... 51-15
Configuring Table and Column Names to be Case Sensitive .......... 51-16
Checking Oracle Environment Variables .......... 51-17
Testing the Oracle Data Source .......... 51-17
Sample Log File .......... 51-18

52

Oracle RDB Data Source (OpenVMS Only)


Overview .......... 52-1
Supported Versions and Platforms .......... 52-1
Supported Features .......... 52-1
Functionality .......... 52-2
Stored Procedures .......... 52-2
Isolation Levels and Locking .......... 52-2
BLOBs .......... 52-4
Passthru Queries .......... 52-4
SQL Capabilities .......... 52-4
Configuration Properties .......... 52-6
Metadata .......... 52-8
Statistics .......... 52-8
Transaction Support .......... 52-8
Installing XA-related Shareable Libraries .......... 52-9
Security .......... 52-9
Oracle RDB Data Types .......... 52-9
Defining the Oracle RDB Data Source .......... 52-10
Defining the Oracle RDB Data Source Connection .......... 52-11
Configuring the Oracle RDB Data Source Properties .......... 52-11
Testing the Oracle RDB Data Source .......... 52-12
Sample Log File .......... 52-14

53

RMS Data Source (OpenVMS Only)


Overview .......... 53-1
Supported Versions and Platforms .......... 53-1
Supported Features .......... 53-1
Functionality .......... 53-1
Configuration Properties .......... 53-2
Transaction Support .......... 53-2
Data Types .......... 53-3
Defining the RMS Data Source .......... 53-3
Defining the RMS Data Source Connection .......... 53-3
Configuring the RMS Data Source .......... 53-4
Setting Up the RMS Data Source Metadata with the Import Manager .......... 53-5
Selecting the Input Files .......... 53-6
Applying Filters .......... 53-8
Selecting Tables .......... 53-9
Import Manipulation .......... 53-10
Import Manipulation Screen .......... 53-10
Field Manipulation Screen .......... 53-12
Metadata Model Selection .......... 53-17
Import the Metadata .......... 53-19
Importing Attunity Metadata Using the RMS_CDD Import Utility .......... 53-20

54

SQL Server Data Source (Windows Only)


Overview .......... 54-1
Supported Versions and Platforms .......... 54-1
Configuration Properties .......... 54-2
Transaction Support .......... 54-3
Data Types .......... 54-3
Defining an SQL Server Data Source .......... 54-4
Defining the SQL Server Data Source Connection .......... 54-5
Configuring the SQL Server Data Source Properties .......... 54-5

55

SQL/MP Data Source (HP NonStop Only)


Overview .......... 55-1
Limitations .......... 55-1
Functionality .......... 55-2
Mapping SQL/MP Table Names .......... 55-2
SQL/MP Primary Keys .......... 55-2
Partitioned Tables .......... 55-2
Isolation Levels and Locking .......... 55-3
Configuration Properties .......... 55-4
Transaction Support .......... 55-4
Data Types .......... 55-5
Defining the SQL/MP Data Source .......... 55-6
Defining the SQL/MP Data Source Connection .......... 55-6
Configuring the SQL/MP Data Source Properties .......... 55-7

56

Sybase Data Source


Overview .......... 56-1
Supported Versions and Platforms .......... 56-1
Functionality .......... 56-1
Stored Procedures .......... 56-1
Isolation Levels and Locking .......... 56-2
Configuration Properties .......... 56-2
Transaction Support .......... 56-3
Data Types .......... 56-3
Platform-Specific Information .......... 56-5
Verifying Environment Variables on UNIX Platforms .......... 56-5
Defining the Sybase Data Source .......... 56-5
Defining the Sybase Data Source Connection .......... 56-5
Configuring the Sybase Data Source Properties .......... 56-6
Checking Sybase Environment Variables .......... 56-7

57

Text Delimited File Data Source


Overview .......... 57-1
Features .......... 57-1
Limitations .......... 57-1
Configuration Properties .......... 57-1
Defining the Text Delimited File Data Source .......... 57-2
Defining the Text Delimited File Data Source Connection .......... 57-2
Configuring the Text Delimited File Data Source .......... 57-2
Setting Up the Text Delimited Data Source Metadata .......... 57-3
Importing Attunity Metadata from COBOL .......... 57-4
Maintaining Attunity Metadata .......... 57-9

58

Virtual Data Source


Overview .......... 58-1
Configuration Properties .......... 58-1
Platform-specific Configuration Properties .......... 58-2
HP NonStop Platforms .......... 58-2
z/OS Platforms .......... 58-3
OpenVMS Platforms .......... 58-3
Defining the Virtual Data Source .......... 58-3
Defining the Virtual Data Source Connection .......... 58-3
Configuring the Virtual Data Source .......... 58-4

59

VSAM Data Source (z/OS)


Overview .......... 59-1
Supported Versions and Platforms .......... 59-2
Environmental Prerequisites .......... 59-2
Configuration Properties .......... 59-2
VSAM Data Source Parameters .......... 59-2
VSAM (CICS) Data Source Parameters .......... 59-3
Metadata .......... 59-4
VSAM Metadata Requirements .......... 59-4
VSAM (CICS) Metadata Requirements .......... 59-4
Transaction Support .......... 59-5
Using Attunity Connect with One-phase Commit .......... 59-6
Data Types .......... 59-6
Defining a VSAM Data Source .......... 59-6
Defining the VSAM Data Source Connection .......... 59-6
Defining the VSAM (CICS) Data Source Connection .......... 59-7
Configuring the VSAM Data Source Properties .......... 59-9
Configuring the VSAM (CICS) Data Source Properties .......... 59-10
Setting Up the VSAM Data Source Metadata .......... 59-11
Selecting the COBOL files .......... 59-12
Applying Filters .......... 59-14
Selecting Tables .......... 59-16
Import Manipulation .......... 59-16
Import Manipulation Screen .......... 59-17
Field Manipulation Screen .......... 59-18
Create VSAM Indexes .......... 59-24
Assigning File Names .......... 59-25
Assigning Index File Names .......... 59-26
Metadata Model Selection .......... 59-27
Importing the Metadata .......... 59-29

Part IX

Procedure Data Source Reference

60

Natural/CICS Procedure Data Source (z/OS)


Overview .......... 60-1
Environmental Prerequisites .......... 60-2
Supported Platforms and Versions .......... 60-2
Configuration Properties .......... 60-2
Metadata .......... 60-2
Specifying the Program to Execute .......... 60-3
Syntax .......... 60-3
Specifying Input and Output Parameters .......... 60-3
Syntax .......... 60-3
Security .......... 60-5
Defining the Natural/CICS Procedure Data Source .......... 60-5
Defining the Natural/CICS Procedure Data Source Connection .......... 60-5
Configuring the Natural/CICS Data Source .......... 60-6
Writing a Natural Remote Procedure Call .......... 60-7
The Subprogram General Structure .......... 60-11
Maintaining the CICS Environment for the Natural Agent .......... 60-12

61

Procedure Data Source (Application Connector)


Overview .......... 61-1
Introduction .......... 61-1
Supported Versions and Platforms .......... 61-2
Supported Features .......... 61-2
Limitations .......... 61-2
Configuration Properties .......... 61-2
Overview .......... 61-3
Parameter Descriptions .......... 61-3
Transaction Support .......... 61-7
Security .......... 61-7
Data Types .......... 61-7
Platform-specific Information .......... 61-7
Windows Platforms and AIS Procedures (ADO Considerations) .......... 61-7
HP NonStop Platforms and Attunity Connect Procedures .......... 61-8
Load Modules and DLLs on MVS .......... 61-9
Descriptors on OpenVMS .......... 61-10
OS/400 Issues .......... 61-10
Defining the Procedure Data Source .......... 61-10
Defining the Procedure Data Source Connection .......... 61-10
Configuring the Procedure Data Source .......... 61-10
Setting Up Procedure Data Source Metadata .......... 61-11
Defining Return Values .......... 61-12
Defining Input and Output Arguments .......... 61-13
Testing the Procedure Data Source .......... 61-16
Executing a Procedure .......... 61-16

62 CICS Procedure Data Source

Overview ................................................................. 62-1
Supported Versions and Platforms ................................................................. 62-1
Environmental Prerequisites ................................................................. 62-2
Limitations ................................................................. 62-2
Design Considerations ................................................................. 62-2
Configuration Properties ................................................................. 62-2
Metadata ................................................................. 62-3
Transaction Support ................................................................. 62-4
Using Attunity Connect with One-phase Commit ................................................................. 62-4
Using Attunity Connect with Two-phase Commit ................................................................. 62-4
Security ................................................................. 62-5
Data Types ................................................................. 62-5
Defining the CICS Procedure Data Source ................................................................. 62-6
Defining the CICS Procedure Data Source Connection ................................................................. 62-6
Configuring the CICS Procedure Data Source ................................................................. 62-7
Setting-up the CICS Procedure Data Source Metadata ................................................................. 62-8
Importing Metadata from COBOL ................................................................. 62-8
Editing the XML in the Source Code ................................................................. 62-13
Editing the <procedure> Statement ................................................................. 62-13
Editing the <field> Statement ................................................................. 62-13
Editing the <parameters> Statement ................................................................. 62-14
Sample ADD Metadata ................................................................. 62-15

Part X Adapters Reference


63 CICS Application Adapter (z/OS Only)

Overview ................................................................. 63-1
Supported Versions and Platforms ................................................................. 63-1
Supported Features ................................................................. 63-1
Environmental Prerequisites ................................................................. 63-2
Limitations ................................................................. 63-2
Transaction Support ................................................................. 63-2
Configuration Properties ................................................................. 63-3
Defining the CICS Application Adapter ................................................................. 63-3
Defining the CICS Application Adapter Connection ................................................................. 63-4
Configuring the CICS Application Adapter ................................................................. 63-4
Setting Up CICS Application Metadata ................................................................. 63-5
Importing Attunity Metadata from COBOL ................................................................. 63-5
Refining CICS Application Adapter Metadata ................................................................. 63-12
Testing CICS Application Adapter Interactions ................................................................. 63-13

64 COM Adapter (Windows Only)

Overview ................................................................. 64-1
Supported Versions and Platforms ................................................................. 64-1
Data Types ................................................................. 64-1
Registering the COM Application ................................................................. 64-3
Defining the COM Application Adapter ................................................................. 64-3
Setting Up COM Application Interactions ................................................................. 64-3
COM Adapter Attributes ................................................................. 64-4
Record Level Attributes ................................................................. 64-4
Field Level Attributes ................................................................. 64-5
Defining COM Data Types ................................................................. 64-5

65 IMS/TM Adapter (z/OS Only)

Overview ................................................................. 65-1
Supported Versions and Platforms ................................................................. 65-1
Transaction Support ................................................................. 65-1
Configuration Parameters ................................................................. 65-2
Defining the IMS/TM Application Adapter ................................................................. 65-3
Defining the IMS/TM Application Adapter Connection ................................................................. 65-3
Configuring the IMS/TM Application Adapter ................................................................. 65-3
Setting Up the IMS/TM Application Metadata ................................................................. 65-4
Importing Attunity Metadata from COBOL ................................................................. 65-5

66 Legacy Plug Application Adapter

Overview ................................................................. 66-1
Configuration Parameters ................................................................. 66-1
Defining the Legacy Plug Application Adapter ................................................................. 66-2
Defining the Legacy Plug Application Adapter Connection ................................................................. 66-2
Configuring the Legacy Plug Application Adapter ................................................................. 66-2
Setting Up Legacy Application Metadata ................................................................. 66-3
Importing Attunity Metadata from PCML Files ................................................................. 66-3
Defining Interactions and Records ................................................................. 66-7
Defining Interaction Properties ................................................................. 66-7
Defining Schema Records ................................................................. 66-9
Configuring a Trigger for the Legacy Plug Adapter ................................................................. 66-11

67 Pathway Application Adapter (HP NonStop Only)

Overview ................................................................. 67-1
Supported Versions and Platforms ................................................................. 67-1
Transaction Support ................................................................. 67-1
Pathway Adapter Configuration Parameters ................................................................. 67-2
Defining the Pathway Application Adapter ................................................................. 67-2
Defining the Pathway Application Adapter Connection ................................................................. 67-3
Configuring the Pathway Application Adapter ................................................................. 67-3
Setting Up Pathway Application Metadata ................................................................. 67-4
Importing Attunity Metadata from COBOL ................................................................. 67-5

68 Tuxedo Application Adapter (UNIX and Windows Only)

Overview of the Tuxedo Application Adapter ................................................................. 68-1
Supported Versions and Platforms ................................................................. 68-1
Feature Highlights ................................................................. 68-1
Configuration Properties ................................................................. 68-2
Metadata ................................................................. 68-2
Transaction Support ................................................................. 68-2
Data Types ................................................................. 68-3
Security ................................................................. 68-3
Checking the Tuxedo Environment Variables ................................................................. 68-3
Defining the Tuxedo Adapters ................................................................. 68-3
Defining the Tuxedo Application Adapter ................................................................. 68-4
Defining the Tuxedo Queue Adapter ................................................................. 68-4
Setting Up Tuxedo Application Adapter Interactions ................................................................. 68-5
Importing Metadata Using a BEA Jolt Bulk Loader File ................................................................. 68-5
Importing Metadata Using FML/VIEW Files ................................................................. 68-10
Setting up Tuxedo Queue Adapter Interactions ................................................................. 68-18
Defining the Tuxedo Queue Unstructured Records ................................................................. 68-22
Testing Tuxedo Application Interactions ................................................................. 68-25

Part XI Non-Application Adapters Reference

69 Database Adapter

Overview ................................................................. 69-1
Supported Versions and Platforms ................................................................. 69-1
Supported Features ................................................................. 69-1
Configuration Properties ................................................................. 69-2
Metadata ................................................................. 69-2
Security ................................................................. 69-2
SQL Interaction Types ................................................................. 69-2
Database Query Interaction ................................................................. 69-2
Database Modification Interaction ................................................................. 69-3
Stored Procedure Call Interaction ................................................................. 69-3
Transaction Support ................................................................. 69-4
Interaction Parameters ................................................................. 69-4
Defining the Database Adapter ................................................................. 69-4
Defining the Database Adapter Connection ................................................................. 69-5
Configuring the Database Adapter ................................................................. 69-5
Configuring Database Adapter Interactions ................................................................. 69-5
Automatically Creating Interactions ................................................................. 69-6
Manually Creating Interactions ................................................................. 69-8
Specifying Parameters ................................................................. 69-20
Testing Database Adapter Interactions ................................................................. 69-21
Creating SQL Queries ................................................................. 69-21

70 Query Adapter

Overview ................................................................. 70-1
Supported Versions and Platforms ................................................................. 70-1
Features Highlights ................................................................. 70-1
Metadata ................................................................. 70-1
Security ................................................................. 70-2
Transaction Support ................................................................. 70-2
Predefined Interactions ................................................................. 70-2
callProcedure Interaction ................................................................. 70-2
Input Record ................................................................. 70-2
Output Record ................................................................. 70-3
ddl Interaction ................................................................. 70-4
Input Record ................................................................. 70-4
Output Record ................................................................. 70-5
getSchema Interaction ................................................................. 70-6
Input Record ................................................................. 70-6
query Interaction ................................................................. 70-8
Input Record ................................................................. 70-8
Output Record ................................................................. 70-9
Output Data Formats ................................................................. 70-9
setErrorAction Interaction ................................................................. 70-13
Input Record ................................................................. 70-13
update Interaction ................................................................. 70-13
Input Record ................................................................. 70-13
Output Record ................................................................. 70-14
Interactions for Internal Use ................................................................. 70-15
Using the Query Adapter ................................................................. 70-16


71 Managing the Execution of Queries over Large Tables

Overview of Query Governing ................................................................. 71-1
Configuring Query Governing ................................................................. 71-1

Part XII CDC Agents Reference

72 Adabas CDC on z/OS Platforms

Overview ................................................................. 72-1
Functionality ................................................................. 72-1
The Tracking File ................................................................. 72-2
Configuration Properties ................................................................. 72-2
Data Source Properties ................................................................. 72-2
CDC Logger Properties ................................................................. 72-2
Change Metadata ................................................................. 72-2
Transaction Support ................................................................. 72-3
Security ................................................................. 72-3
Platform Specific Information ................................................................. 72-3
Data Types ................................................................. 72-4
Configuring the Adabas CDC ................................................................. 72-4
Setting up the ATTSRVR Started Task ................................................................. 72-4
Setting up the Tracking File ................................................................. 72-4
Adding the Tracking File Usage Step to the UE2 Procedure ................................................................. 72-5
Setting up the Adabas Agent in Attunity Studio ................................................................. 72-5
Configuring the Data Source ................................................................. 72-6
Configuring the CDC Service ................................................................. 72-6

73 Adabas CDC on UNIX Platforms

Overview ................................................................. 73-1
Functionality ................................................................. 73-1
Configuration Properties ................................................................. 73-2
Change Metadata ................................................................. 73-2
Transaction Support ................................................................. 73-2
Data Types ................................................................. 73-3
Security ................................................................. 73-3
Platform Specific Information ................................................................. 73-3
Configuring the Adabas CDC ................................................................. 73-3
Identifying the Adabas CDC in the Adabas System ................................................................. 73-3
Defining Adabas CDC in Attunity Studio ................................................................. 73-3
Configuring the Data Source ................................................................. 73-4
Configuring the CDC Service ................................................................. 73-5

74 Adabas CDC for OpenVMS

Overview ................................................................. 74-1
Functionality ................................................................. 74-1
Configuration Properties ................................................................. 74-1
Change Metadata ................................................................. 74-2
Transaction Support ................................................................. 74-2
Data Types ................................................................. 74-2
Security ................................................................. 74-2
Platform Specific Information ................................................................. 74-3
Configuring the Adabas CDC ................................................................. 74-3
Identifying the Adabas CDC in the Adabas System ................................................................. 74-3
Defining Adabas CDC in Attunity Studio ................................................................. 74-3
Configuring the Data Source ................................................................. 74-3
Configuring the CDC Service ................................................................. 74-4

75 DB2 CDC (z/OS)

Overview ................................................................. 75-1
Functionality ................................................................. 75-1
Limitations ................................................................. 75-2
Supported Versions and Platforms ................................................................. 75-2
Configuration Properties ................................................................. 75-2
Change Metadata ................................................................. 75-2
Transaction Support ................................................................. 75-3
Security ................................................................. 75-3
Platform Specific Information ................................................................. 75-3
Data Types ................................................................. 75-3
Configuring the DB2 Tables for CDC ................................................................. 75-3
Configuring the ATTSRVR Started Task ................................................................. 75-3
Setting up the DB2 Agent in Attunity Studio ................................................................. 75-4
Configuring the Data Source ................................................................. 75-4
Configuring the CDC Service ................................................................. 75-5

76 DB2 CDC (OS/400 Platforms)

DB2 CDC Agent Overview ................................................................. 76-1
Functionality ................................................................. 76-1
Limitations ................................................................. 76-1
Configuration Properties ................................................................. 76-1
Change Metadata ................................................................. 76-2
Transaction Support ................................................................. 76-2
Data Types ................................................................. 76-3
Security ................................................................. 76-3
Platform Specific Information ................................................................. 76-3
Setting-up the DB2 Journal on OS/400 ................................................................. 76-3
Setting-up the DB2 for OS/400 Agent in Attunity Studio ................................................................. 76-6
Configuring the Data Source ................................................................. 76-6
Configuring the CDC Service ................................................................. 76-7

77 Enscribe CDC (HP NonStop Platforms)

Overview ................................................................. 77-1
Functionality ................................................................. 77-1
Configuration Properties ................................................................. 77-1
Change Metadata ................................................................. 77-1
Transaction Support ................................................................. 77-2
Data Types ................................................................. 77-2
Security ................................................................. 77-2
Platform-specific Information ................................................................. 77-2
Setting Up Enscribe to use the Attunity Enscribe CDC Agent ................................................................. 77-3
Adding the Enscribe Agent to Attunity Studio ................................................................. 77-3
Configuring the Data Source ................................................................. 77-3
Configuring the CDC Service ................................................................. 77-4

78 IMS/DB CDC on z/OS Platforms

Overview ................................................................. 78-1
Functionality ................................................................. 78-1
Supported Platforms and Versions ................................................................. 78-2
Configuration Properties ................................................................. 78-2
CDC Logger Properties ................................................................. 78-2
CDC$PARM Properties ................................................................. 78-2
Agent Properties ................................................................. 78-2
Change Metadata ................................................................. 78-3
Transaction Support ................................................................. 78-3
Security ................................................................. 78-3
Data Types ................................................................. 78-4
Configuring the DFSFLGX0 Exit ................................................................. 78-4
MVS Logstream Creation ................................................................. 78-4
Managing the MVS Logstream ................................................................. 78-4
Creating and Configuring the CDC$PARM Data Set ................................................................. 78-4
Update the IMS Environment ................................................................. 78-5
Adjust the DBD for the Relevant Databases ................................................................. 78-5
Setting-up the IMS/DB CDC Agent in Attunity Studio ................................................................. 78-5
Configuring the CDC Service ................................................................. 78-5
Setting the envlmsBatch Property ................................................................. 78-7
Working with Metadata ................................................................. 78-8
Troubleshooting ................................................................. 78-9

79 Microsoft SQL Server CDC

Overview ................................................................. 79-1
Microsoft SQL Server CDC Solution ................................................................. 79-2
MS SQL Server ................................................................. 79-3
TLOG Miner ................................................................. 79-3
Transient Storage ................................................................. 79-3
Functionality ................................................................. 79-4
Limitations ................................................................. 79-4
Supported Versions and Platforms ................................................................. 79-5
Configuration Properties ................................................................. 79-5
Change Metadata ................................................................. 79-5
Transaction Support ................................................................. 79-6
Data Types ................................................................. 79-6
User Defined Data Types (UDT) ................................................................. 79-7
Security ................................................................. 79-8
Platform Specific Information ................................................................. 79-8
Setting up the SQL Server CDC in Attunity Studio ................................................................. 79-8
Enabling MS SQL Replication ................................................................. 79-11
MS SQL Server 2000 Replication ................................................................. 79-11
MS SQL Server 2005 Replication ................................................................. 79-11
Configuring Security Properties ................................................................. 79-12
Setting up Log On Information ................................................................. 79-12
Setting up the Database ................................................................. 79-13
MS SQL Server 2000 Settings ................................................................. 79-13
MS SQL Server 2005 Settings ................................................................. 79-14
Setting Up the TLOG Miner (LGR) ................................................................. 79-14
Call the LGR Service Interface ................................................................. 79-14
Configuring the Template Input File ................................................................. 79-15
Registering the TLOG Miner (LGR) Service ................................................................. 79-18
Setting the Recovery Policy ................................................................. 79-18
Testing Attunity's Microsoft SQL Server CDC Solution ................................................................. 79-19
Handling Metadata Changes ................................................................. 79-19
Environment Verification ................................................................. 79-20
Verify the MS SQL Server Version ................................................................. 79-20
Ensure that the Service is Registered ................................................................. 79-21
Verify that the LGR Service is Running ................................................................. 79-22
Viewing the Service Greetings ................................................................. 79-23
Check the Output Files ................................................................. 79-23

80 Oracle CDC (on UNIX and Windows Platforms)

Overview ................................................................. 80-1
Functionality ................................................................. 80-1
Supported Versions and Platforms ................................................................. 80-2
Configuration Properties ................................................................. 80-2
Change Metadata ................................................................. 80-2
Transaction Support ................................................................. 80-3
Data Types ................................................................. 80-3
Security ................................................................. 80-4
Setting-up the Oracle REDO Log ................................................................. 80-4
Testing the Database Logging Settings ................................................................. 80-5
Changing the Operation Mode for Metadata Changes ................................................................. 80-5
Setting up the Oracle CDC in Attunity Studio ................................................................. 80-5
Configuring the Data Source ................................................................. 80-6
Configuring the CDC Service ................................................................. 80-7
Troubleshooting ................................................................. 80-8

81 Query-Based CDC Agent

Overview ................................................................. 81-1
Setting-up Query-based CDC Agent ................................................................. 81-1
Changing a Query-based CDC Agent Definition ................................................................. 81-6

82 SQL/MP CDC on HP NonStop

Overview ................................................................. 82-1
Functionality ................................................................. 82-1
Supported Versions and Platforms ................................................................. 82-1
Configuration Properties ................................................................. 82-1
Change Metadata ................................................................. 82-2
Transaction Support ................................................................. 82-2
Data Types ................................................................. 82-2
Security ................................................................. 82-2
Defining the SQL/MP Agent ................................................................. 82-2
Setting up the SQL/MP Agent in Attunity Studio ................................................................. 82-3
Configuring the Data Source ................................................................. 82-3
Configuring the CDC Service ................................................................. 82-4

83 VSAM Under CICS CDC (on z/OS)

Overview ................................................................. 83-1
Functionality ................................................................. 83-1
Configuration Properties ................................................................. 83-1
Data Source Properties ................................................................. 83-1
CDC Service Properties ................................................................. 83-2
Change Metadata ................................................................. 83-2
Transaction Support ................................................................. 83-3
Security ................................................................. 83-3
Data Types ................................................................. 83-4
Managing the CICS User Journal ................................................................. 83-4
Setting up the CICS User Journal for VSAM ................................................................. 83-4
Print out the CICS User Journal Content ................................................................. 83-5
Setting up the VSAM CICS Agent in Attunity Studio ................................................................. 83-6
Configuring the Data Source ................................................................. 83-6
Configuring the CDC Service ................................................................. 83-7

84 VSAM Batch CDC (z/OS Platforms)

Overview ................................................................. 84-1
Functionality ................................................................. 84-1
Configuration Properties ................................................................. 84-2
CDC Service Properties ................................................................. 84-2
CDC$PARM Properties ................................................................. 84-2
Change Metadata ................................................................. 84-2
Transaction Support ................................................................. 84-3
Single Program Transaction Manager ................................................................. 84-3
Logical Transaction Manager ................................................................. 84-4
Data Types ................................................................. 84-4
Security ................................................................. 84-4
Platform Specific Information ................................................................. 84-5
Configuring the Logger ................................................................. 84-5
Creating the Logstream ................................................................. 84-5
Managing the MVS Logstream ................................................................. 84-5
Creating the CDC$PARM Data Set ................................................................. 84-6
Updating Jobs and Scripts ................................................................. 84-6
Updating Jobs for Activating CDC JRNAD ................................................................. 84-6
Updating Jobs for Using the Logical Transaction Manager ................................................................. 84-6
Update the REXX Scripts ................................................................. 84-7
Setting up the VSAM Batch Agent in Attunity Studio ................................................................. 84-7
Configuring the Data Source ................................................................. 84-7
Configuring the CDC Service ................................................................. 84-7

Part XIII Interface Reference

85 C and COBOL 3GL Client Interfaces

Overview of the C and COBOL 3GL APIs to Applications ................................................................. 85-1
Using the 3GL API to Invoke Application Adapters ................................................................. 85-1
Using the API with C Programs ................................................................. 85-1
Using the API with COBOL Programs ................................................................. 85-2
Supported Interfaces ................................................................. 85-2
APIs and Functions ................................................................. 85-2
Connection APIs ................................................................. 85-2
The Connect Function ................................................................. 85-3
Identifying the Adapter Schema ................................................................. 85-5
The Clean Connection Function ................................................................. 85-6
The Disconnect Function ................................................................. 85-6
The Retry Connection Function ................................................................. 85-6
Transaction APIs ................................................................. 85-7
Set Autocommit Function ................................................................. 85-7
Transaction Commit Function ................................................................. 85-8
Transaction Rollback Function ................................................................. 85-9
Execution APIs ................................................................. 85-9
Execute Function ................................................................. 85-10
Execute Batch Function ................................................................. 85-11
Setting Environment Parameters ................................................................. 85-11
Get Adapter Schema Function ................................................................. 85-12
Get Event Function ................................................................. 85-12
Ping Function ................................................................. 85-14
Get Error Function ................................................................. 85-15
Using APIs to Invoke Application Adapters - Examples ................................................................. 85-16
C Program Example ................................................................. 85-16
COBOL Program Example ................................................................. 85-19
CICS as a Client Invoking an Application Adapter (z/OS Only) ................................................................. 85-21
Configuring the IBM z/OS Machine ................................................................. 85-21
Using a CICS Transaction to Invoke an Application Adapter ................................................................. 85-22
COBOL Data Buffer ................................................................. 85-24
Calling the Transaction ................................................................. 85-25
Transaction Output ................................................................. 85-25
CICS Connection Pooling under CICS ................................................................. 85-26
Using Connection Pooling under CICS ................................................................. 85-26
CICS Connection Pool Flow ................................................................. 85-26
Control Operations Flow ................................................................. 85-27
3GL Operations Flow ................................................................. 85-28
ATTCALL Program Interface ................................................................. 85-28
COMMAREA ................................................................. 85-28
Control Protocol ................................................................. 85-29
3GL Protocol ................................................................. 85-30
ATTCNTRL Program ................................................................. 85-31
ATTCNTRL Parameters ................................................................. 85-32
Setting up 3GL under CICS ................................................................. 85-33
Create a Log File ................................................................. 85-33
CICS Definitions ................................................................. 85-33
IMS/TM as a Client Invoking an Application Adapter (z/OS Only) ................................................................. 85-33
Setting Up the IBM z/OS Machine ................................................................. 85-34
Setting Up a Call to the Transaction ................................................................. 85-34
Calling the Transaction ................................................................. 85-35
C Call ................................................................. 85-36
COBOL Call ................................................................. 85-36
The Transaction Output ................................................................. 85-37

86 JCA Client Interface
Overview
Outbound Connections
Creating a Connection
Managed Connection Factory settings
JCA Client Interface
Attunity JCA Enhancements
Attunity Metadata
Attunity Record
Samples
JCA Logging Mechanism
JCA Sample Program

87 JDBC Client Interface
Overview
Connection
Creating a Connection
Connection String
Accessing Data Sources Directly
Data Types
JDBC and Java Type Mapping
Conversions Between Java Object Types and Target SQL Types
Using the getXXX Methods to Retrieve Data Types
JDBC API Conformance
Supported Interfaces
Supported Classes
DataSource Properties
ConnectionPool Data Source and XADatasource Interface Properties
Connection Pooling Properties
JDBC Client Interface
JDBC Sample Program

88 ODBC Client Interface
Connection
Creating an ODBC Connection
Defining a DSN
The Opening Page
Local Authentication Page
Local Binding Information Page
Remote Server Authentication Page
Remote Server Binding Page
Advanced Settings Page
Final Page
Defining a File DSN
Connection String Parameters
ODBC Client Interface
Supported Interfaces
ODBC Schema Rowsets
ODBC Data Types
Supported Options
ODBC API Conformance
Minimum Requirements of an ODBC Provider
Asynchronous Execution
General Information
Conformance Information
SQL Syntax Information
Platform Specific Information
Support for Non-C Applications on Platforms Other than Windows
ODBC Client Interface Under CICS (z/OS Only)
Sample Programs
Environment Variables

89 OLE DB (ADO) Client Interface
Overview
Methods and Properties
ADO Connect String
Connect String Parameters
Optimizing ADO
ADO Schema Recordsets
OLE DB Data Types
Mapping SQL Data Types to OLE DB Data Types
ADO Conformance Level
OLE DB Interfaces and Methods
OLE DB Properties
Initialization Properties
Data Source Properties
Data Source Information Properties
Session Properties
Rowset Properties
Specific Properties

90 XML Client Interface
Overview of the XML Client Interface
ACX Verbs
ACX Request and Response Documents
Request Document
Response Document
Connection Verbs
The Connect Verb
The setConnection Verb
The disconnect Verb
The reauthenticate Verb
The cleanConnection Verb
Transaction Verbs
The setAutoCommit Verb
The transactionStart Verb
The transactionPrepare Verb
The transactionCommit Verb
The transactionRollback Verb
The transactionRecover Verb
The transactionForget Verb
The transactionEnd Verb
The Execute Verb
Metadata Verbs
The getMetadataItem Verb
The getMetadataList Verb
The Ping Verb
The Exception Verb
The Exception Element
Setting XML Transports for AIS
Passing XML Documents via TCP/IP
Passing XML Documents via HTTP (Using the NSAPI Extension)

Part XIV

Appendixes


NAVDEMO - Attunity Demo Data


NAVDEMO Overview
NAVDEMO Database
NAVDEMO Tables
TPART Table
SUPPLIER Table
PARTSUPP Table
CUSTOMER Table
TORDER Table
LINEITEM Table
NATION Table
REGION Table

Attunity SQL Syntax


Syntax Diagrams Describing SQL
SELECT Statement
Keywords and Options
FROM Clause
Keywords and Options
WHERE Clause
Keywords and Options
GROUP BY and HAVING Clause
Keywords and Options
ORDER BY Clause
Keywords and Options
Additional Information
Set Operators on SELECT Statements
Keywords and Options
SELECT XML Statement
Keywords and Options
Batch Update Statements
INSERT Statement
Keywords and Options
UPDATE Statement
Keywords and Options
Additional Information
Updateability Rules
DELETE Statement
Keywords and Options
Additional Information
Updateability Rules
TABLE, INDEX CREATE and DROP Statements
CREATE TABLE Statement
Keywords and Options
DROP TABLE Statement
Keywords and Options
CREATE INDEX Statement
Keywords and Options
VIEW Statements
CREATE VIEW Statement
Keywords and Options
DROP VIEW Statement
Keywords and Options
Stored Procedure Statements
CREATE PROCEDURE Statement
Keywords and Options
DROP PROCEDURE Statement
Keywords and Options
CALL Statement
Keywords and Options
Synonym Statements
CREATE SYNONYM Statement
Keywords and Options
DROP SYNONYM Statement
Keywords and Options
GRANT Statement
Keywords and Options
Transaction Statements
BEGIN Statement
COMMIT Statement
ROLLBACK Statement
Constant Formats
Expressions
Operator Precedence
Single Quotation Marks in String Expressions
Functions
Aggregate Functions
Additional Information
Conditional Functions
Data Type Conversion Functions
Date and Time Functions
Date Format
Time Format
Timestamp Format
Date Comparison Semantics
Numeric Functions and Arithmetic Operators
String Functions
Parameters
Search Conditions and Comparison Operators
Keywords and Options
Passthru Query Statements (bypassing Query Processing)
Keywords and Options
Reserved Keywords

National Language Support (NLS)


Codepage Terminology
Basic NLS Settings
Globally Setting Language at the System Level
Working with Multiple Languages (UTF Codepage)
NLS and XML and Java Encoding
NLS Support at the Field Level
Special Daemon Language Considerations
Support for 7-Bit Codepages
SQL Functions For Use With Graphic Strings

D COBOL Data Types to Attunity Data Types

E Editing XML Files in Attunity Studio
Preparing to Edit XML Files in Attunity Studio
Making Changes to the XML File
Remove Objects
Add DTD Information
Edit Namespaces
Add Elements and Attributes
Replace an Element

Index


Send Us Your Comments


AIS User Guide and Reference, Version 5.1
AIS5100

Attunity welcomes your comments and suggestions on the quality and usefulness of this publication. Your input is an important part of the information used for revision.

Did you find any errors?
Is the information clearly presented?
Do you need more information? If so, where?
Are the examples correct? Do you need more examples?
What features did you like most about this manual?

If you find any errors or have any other suggestions for improvement, please indicate the title and part number of the documentation and the chapter, section, and page number (if available). You can send comments to us in the following ways:

Electronic mail: support@attunity.com
FAX: (781) 213-5240. Attn: Documentation and Training Manager
Postal service:

Attunity Incorporated
Documentation and Training Manager
70 Blanchard Road
Burlington, MA 01803 USA

If you would like a reply, please give your name, address, telephone number, and electronic mail address (optional). If you have problems with the software, please contact your local Attunity Support Services.


Preface
This guide is the primary source of user and reference information on AIS (Attunity Integration Suite), which enables integration of data across platforms and formats. This document applies to the IBM z/OS Series, OS/400, OpenVMS, UNIX, and Windows platforms. This preface covers the following topics:

Audience
Organization
Related Documentation
Conventions

Audience
This manual is intended for Attunity integration administrators who perform the following tasks:

Installing and configuring the Attunity Integration Suite
Diagnosing errors
Using AIS to access data
Note:

You should understand the fundamentals of database use and the Microsoft Windows operating system before using this guide to install or administer the Attunity Integration Suite (AIS).

liii

Organization
This document contains:
Part I, "Getting Started with AIS"
Part II, "Attunity Connect"
Part III, "Attunity Stream"
Part IV, "Attunity Federate"
Part V, "Attunity Studio"
Part VI, "Operation and Maintenance"
Part VII, "Utilities"
Part VIII, "Data Source Reference"
Part IX, "Procedure Data Source Reference"
Part X, "Adapters Reference"
Part XI, "Non-Application Adapters Reference"
Part XII, "CDC Agents Reference"
Part XIII, "Interface Reference"
Part XIV, "Appendixes"

Related Documentation
Printed documentation is available at the Attunity Web site:
http://www.attunity.com/

You can download release notes, installation documentation, white papers, or other types of documentation. You must register online before downloading any documents.

Conventions
This section describes the conventions used in the text and code examples of this documentation set. It describes:

Conventions in Text
Conventions in Code Examples
Conventions for Windows Operating Systems

liv

Conventions in Text
We use various conventions in text to help you more quickly identify special terms. The following table describes those conventions and provides examples of their use.
Convention: Bold
Meaning: Bold typeface indicates terms that are defined in the text or terms that appear in a glossary, or both.
Example: When you specify this clause, you create an index-organized table.

Convention: Italics
Meaning: Italic typeface indicates book titles or emphasis.
Examples: Oracle Database Concepts. Ensure that the recovery catalog and target database do not reside on the same disk.

Convention: UPPERCASE monospace (fixed-width) font
Meaning: Uppercase monospace typeface indicates elements supplied by the system. Such elements include parameters, privileges, datatypes, RMAN keywords, SQL keywords, SQL*Plus or utility commands, packages and methods, as well as system-supplied column names, database objects and structures, usernames, and roles.
Examples: You can specify this clause only for a NUMBER column. You can back up the database by using the BACKUP command. Query the TABLE_NAME column in the USER_TABLES data dictionary view. Use the DBMS_STATS.GENERATE_STATS procedure.

Convention: lowercase monospace (fixed-width) font
Meaning: Lowercase monospace typeface indicates executables, filenames, directory names, and sample user-supplied elements. Such elements include computer and database names, net service names, and connect identifiers, as well as user-supplied database objects and structures, column names, packages and classes, usernames and roles, program units, and parameter values. Note: Some programmatic elements use a mixture of UPPERCASE and lowercase. Enter these elements as shown.
Examples: Enter sqlplus to open SQL*Plus. The password is specified in the orapwd file. The department_id, department_name, and location_id columns are in the hr.departments table. Set the QUERY_REWRITE_ENABLED initialization parameter to true. Connect as oe user. The JRepUtil class implements these methods.

Convention: lowercase italic monospace (fixed-width) font
Meaning: Lowercase italic monospace font represents placeholders or variables.
Examples: You can specify the parallel_clause. Run Uold_release.SQL where old_release refers to the release you installed prior to upgrading.

Conventions in Code Examples


Code examples illustrate SQL, PL/SQL, SQL*Plus, or other command-line statements. They are displayed in a monospace (fixed-width) font and separated from normal text as shown in this example:
SELECT username FROM dba_users WHERE username = 'MIGRATE';

The following table describes typographic conventions used in code examples and provides examples of their use.
Convention: [ ]
Meaning: Brackets enclose one or more optional items. Do not enter the brackets.
Example: DECIMAL (digits [ , precision ])

Convention: { }
Meaning: Braces enclose two or more items, one of which is required. Do not enter the braces.
Example: {ENABLE | DISABLE}

Convention: |
Meaning: A vertical bar represents a choice of two or more options within brackets or braces. Enter one of the options. Do not enter the vertical bar.
Examples: {ENABLE | DISABLE}, [COMPRESS | NOCOMPRESS]

Convention: ...
Meaning: Horizontal ellipsis points indicate either that we have omitted parts of the code that are not directly related to the example, or that you can repeat a portion of the code.
Examples: CREATE TABLE ... AS subquery; SELECT col1, col2, ... , coln FROM employees;

Convention: . . . (vertical)
Meaning: Vertical ellipsis points indicate that we have omitted several lines of code not directly related to the example.
Example:
SQL> SELECT NAME FROM V$DATAFILE;
NAME
------------------------------------
/fsl/dbs/tbs_01.dbf
/fs1/dbs/tbs_02.dbf
. . .
/fsl/dbs/tbs_09.dbf
9 rows selected.

Convention: Other notation
Meaning: You must enter symbols other than brackets, braces, vertical bars, and ellipsis points as shown.
Examples: acctbal NUMBER(11,2); acct CONSTANT NUMBER(4) := 3;

Convention: Italics
Meaning: Italicized text indicates placeholders or variables for which you must supply particular values.
Examples: CONNECT SYSTEM/system_password; DB_NAME = database_name

Convention: UPPERCASE
Meaning: Uppercase typeface indicates elements supplied by the system. We show these terms in uppercase in order to distinguish them from terms you define. Unless terms appear in brackets, enter them in the order and with the spelling shown. However, because these terms are not case sensitive, you can enter them in lowercase.
Examples: SELECT last_name, employee_id FROM employees; SELECT * FROM USER_TABLES; DROP TABLE hr.employees;

Convention: lowercase
Meaning: Lowercase typeface indicates programmatic elements that you supply. For example, lowercase indicates names of tables, columns, or files. Note: Some programmatic elements use a mixture of UPPERCASE and lowercase. Enter these elements as shown.
Examples: SELECT last_name, employee_id FROM employees; sqlplus hr/hr; CREATE USER mjones IDENTIFIED BY ty3MU9;

Conventions for Windows Operating Systems


The following table describes conventions for Windows operating systems and provides examples of their use.


Convention: File and directory names
Meaning: File and directory names are not case sensitive. The following special characters are not allowed: left angle bracket (<), right angle bracket (>), colon (:), double quotation marks ("), slash (/), pipe (|), and dash (-). The special character backslash (\) is treated as an element separator, even when it appears in quotes. If the file name begins with \\, then Windows assumes it uses the Universal Naming Convention.
Example: c:\winnt"\"system32 is the same as C:\WINNT\SYSTEM32

Convention: C:\>
Meaning: Represents the Windows command prompt of the current hard disk drive. The escape character in a command prompt is the caret (^). Your prompt reflects the subdirectory in which you are working. Referred to as the command prompt in this manual.
Example: C:\attunity\NACV_UTIL>

Convention: Special characters
Meaning: The backslash (\) special character is sometimes required as an escape character for the double quotation mark (") special character at the Windows command prompt. Parentheses and the single quotation mark (') do not require an escape character. Refer to your Windows operating system documentation for more information on escape and special characters.
Examples: C:\>exp scott/tiger TABLES=emp QUERY=\"WHERE job='SALESMAN' and sal<1600\" and C:\>imp SYSTEM/password FROMUSER=scott TABLES=(emp, dept)


What's New
AIS, the Attunity Integration Suite, Version 5.1 introduces significant enhancements, support commitments and bug fixes, and continued improvements for ease of use.

Continuous CDC
Beginning with version 5.1, Attunity Stream enables ETL tools and programs to use standard SQL to query for data changes, continuously feeding change records for processing so that they effectively work in real time. This is an alternative to traditional ETL processing, which executes jobs periodically (for example, every 15 minutes) and therefore adds latency between passes, and it provides a solution for data integration scenarios that require very low latency (near real-time). This capability is invoked with a simple SQL statement. See Reading Change Tables Continuously.
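For illustration only, the following sketch shows the kind of standard SQL an ETL tool or program might issue against a change table exposed by Attunity Stream. The table name ORDERS_CT and all column names are hypothetical; the actual change-table columns and the syntax used to request continuous reading are described in Reading Change Tables Continuously.

-- Hypothetical change table and columns, shown only to illustrate
-- consuming change records through standard SQL.
SELECT context, operation, change_time, order_id, order_status
FROM ORDERS_CT
WHERE context > ?    -- resume from the last position already processed
ORDER BY context;

The parameter marker stands for the last processed position, so that each pass (or a continuous read) picks up only changes that have not yet been handled.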

Bulk Data Performance Improvements


Beginning with version 5.1, AIS has improved bulk data access from Oracle databases, supporting ETL and reporting applications that require access to large volumes of data. Similar enhancements are also available when using third-party ODBC drivers (for example, Teradata). Additional improvements were made for OLEDB users, such as those using SQL Server Integration Services (SSIS).

64-Bit Support for AIS Server on Windows


Beginning with version 5.1, AIS provides native 64-bit support on Windows. AIS runs natively on Windows XP x64 operating systems and on Windows Server 2003 x64 and IA64 (Itanium) systems. The new kits support the following Attunity Connect features:

Generic Attunity Connect features, including the Procedure Driver, LegacyPlug, Database Adapter, and Query Adapter
SQL Server 2005 Driver
Oracle Driver
Oracle CDC Agent

Note: The installation of the new 64-bit Attunity Server kits for Windows includes the 32-bit Server components. This addresses the various options on the Microsoft platform, where some applications still require 32-bit functionality although running on 64-bit systems. See the installation guide for the operating system you are working with for more information.

64-Bit Thin ODBC Clients


Beginning with version 5.1, AIS has native 64-bit support for Thin ODBC Clients on the following platforms:

Windows x64
Windows IA64
HP UX (Itanium)
AIX
Solaris
Linux

Note: Windows 64-bit clients have an installation utility that also installs a new ODBC setup wizard. All other thin clients are distributed as Zip files.

New ODBC DSN Setup Wizard


Beginning with version 5.1, a new ODBC DSN setup wizard is available. This wizard allows you to create new ODBC user, system, and file data sources. It supports 32-bit and 64-bit Windows platforms. The setup wizard lets you create both local and remote connections.

Enhancements for the ADO.NET Client


Beginning with version 5.1, AIS introduces significant enhancements to the ADO.NET client. These enhancements include:

Support for ADO.NET 2.0 APIs, including the extended API set, providing richer capabilities to developers and supporting applications that use these APIs.
The ADO.NET client is now merged with NETACX capabilities. This allows the ADO.NET client to interact with Application Adapters and exchange hierarchical XML documents with back-end systems.
The ADO.NET client now supports design-time integration with Visual Studio 2005. Users can work with the Visual Studio Server Explorer component and use the standard Data Connections option to add a connection to an AIS data source using an integrated Add Connection dialog box. In this way, users can define datasets in the integrated Visual Studio environment and define the Attunity Connect metadata definitions.

Update Index Statistics Improvements


Query performance, especially in distributed queries, can be optimized using accurate information about the statistics of table indexes. This information needs to be updated periodically as table content changes (for example, when rows are added).


Version 5.1 improves the accuracy and performance of the Attunity Connect Update Statistics utility. Using a new algorithm, the Update Statistics utility can now estimate statistics on very large tables more accurately, and much faster.

Better Support for Oracle Stored Procedures


Beginning in version 5.1, Attunity Connect can now return one or more result sets when calling Oracle stored procedures. See Oracle Stored Procedures.
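As a sketch only, a call might look like the following. The data source name oradb and the procedure name are hypothetical; the CALL statement itself is documented in the Attunity SQL Syntax appendix, and any result sets opened by the procedure are returned to the calling application.

-- Hypothetical data source (oradb) and procedure name; the result sets
-- opened by the Oracle stored procedure are returned to the caller.
CALL oradb:get_open_orders('EUROPE');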

Support for Exact Arithmetic


The AIS query processor uses double-precision floating point numbers for SUM() aggregations, additions, and subtractions. Because of imprecision with floating point numbers, arithmetic operations over large sets of numbers may result in visible errors in the least significant digits. Beginning with version 5.1, new parameters in the binding let users configure a fixed scale for precise arithmetic operations with double-precision floating point numbers, providing a high-precision solution.
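For example, a query of the following kind is affected; the table and column names are illustrative only. Without a fixed scale, accumulating many values in double-precision floating point can show small errors in the last digits, which the new binding parameters are intended to eliminate.

-- Illustrative aggregation: summing a large set of numeric values in
-- double-precision floating point may produce rounding noise in the
-- least significant digits unless a fixed scale is configured.
SELECT SUM(line_amount) FROM lineitem;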

Improvements to Attunity Studio


Beginning in version 5.1, Attunity Studio has a new and easier interface for the key editors and wizards, such as those used for bindings and daemons. The editors include a new look and feel that makes using Attunity Studio more intuitive and easier to use.
Beginning in version 5.1, Attunity Studio has a new, robust XML editor. This editor makes advanced configuration of XML metadata and configuration files easier by replacing the text editor available in previous versions with an XML-sensitive editor. The editor can be launched directly from the Design perspective Configuration view.
Beginning in version 5.1, Attunity Studio has an easier-to-use import/export dialog box. The new Import wizard lets users review and edit the XML definitions before finalizing the import process, and the import process has improved validation of the configuration structure.


Part I
Getting Started with AIS
This part contains the following topics:

Introducing the Attunity Integration Suite
Setting up Attunity Connect, Stream, and Federate
Binding Configuration
Setting up Daemons
Managing Metadata
Working with Metadata in Attunity Studio
Handling Arrays
Using SQL
Working with Web Services

1
Introducing the Attunity Integration Suite
This section describes the Attunity Integration Suite (AIS) and its benefits. It contains the following topics:

AIS Overview
AIS Use

AIS Overview
The Attunity Integration Suite (AIS) is a comprehensive integration platform for on-demand access and integration of enterprise data sources and legacy applications. AIS runs on many platforms, such as Windows, UNIX, OS/400, and mainframes, and provides many integration possibilities. The suite's integration services are provided by the following products:

Attunity Connect: Universal, standard data access to enterprise data sources.
Attunity Federate: Virtual data federation (EII), integrating data on the fly from heterogeneous sources.
Attunity Stream: Change data capture, allowing efficient and real-time data movement and processing.

The following diagram provides an overview of AIS and shows how its components can be used to solve many integration needs on each enterprise platform. The diagram also includes Attunity Studio, a GUI-based tool that lets you configure the Attunity servers in your system.
Figure 1-1 AIS Overview


The Attunity Integration Suite (AIS) provides a modular solution that allows organizations to address different tactical requirements quickly, while relying on a comprehensive platform that allows reusability and addresses many needs. The following section provides some examples of the uses for AIS and its benefits.

AIS Use
The Attunity Integration Suite provides many integration possibilities. This section describes some common uses, the related applications or projects, and how AIS simplifies access and integration. The following are some of the common uses for AIS:

SQL Connectivity
Application Connectivity
Adapters
Change Data Capture

SQL Connectivity
SQL is a well-known skill set for application developers and a common interface that applications use to retrieve data. While relational databases provide SQL connectivity out of the box, legacy data sources and file systems do not. Attunity Connect helps in this area by making older non-relational data sources appear relational and providing access to them using standard SQL. Typical applications that require SQL connectivity include:

Reporting tools: for designing and providing reports to business users
J2EE or .NET applications
ETL tools: for bulk loading of source data

Some typical scenarios that use Attunity Connect for SQL Connectivity include connecting to Adabas, VSAM, IMS/DB, RMS, Enscribe, and ISAM data sources.
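As a sketch only: once a legacy file is described by Attunity metadata (ADD), applications can query it with ordinary SQL. In the hypothetical example below, vsamdb is a VSAM data source defined in the binding and ORDERS is a table defined for it in ADD; all names are assumptions for illustration.

-- Hypothetical data source and table names; the VSAM file is queried
-- exactly like a relational table once its ADD metadata is defined.
SELECT order_id, customer_id, order_total
FROM vsamdb:ORDERS
WHERE order_status = 'OPEN';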

Application Connectivity
XML is now a standard interface for applications, accessed through standard APIs or Web services. Many newer applications offer open interfaces; legacy applications, however, do not, and interfacing with their embedded business logic is difficult. Attunity Connect defines virtual services on top of these legacy applications that provide seamless interoperability. Applications that require service/XML-based application connectivity include:

EAI tools: for invoking business logic as part of an automated process
J2EE or .NET applications: that need to reuse existing business logic
Legacy applications: that need to be extended and call off-platform services

Some typical scenarios that use Attunity Connect for application connectivity include CICS, IMS/TM, Tuxedo, COBOL, and RPG.


Adapters
Many enterprise application integration (EAI), enterprise service bus (ESB), and business process management (BPM) tools need to integrate with existing applications and data sources. Attunity Connect removes the barrier to integrating with legacy applications and data sources by providing standard adapter interfaces and plug-ins to leading adapter frameworks. Adapters include inbound and outbound capabilities that allow you to send messages to an adapter or receive messages from it. Typical applications that require adapters include:

Integration Brokers: such as BizTalk Server, Oracle BPEL, BEA WLI, etc.
ESB and BPM platforms

Typical usage scenarios employing Attunity Connect include application connectivity to CICS, IMS/TM, Tuxedo, COBOL, and RPG, as well as to enterprise data sources.

Change Data Capture


Data integration projects, especially data warehousing, data synchronization, and data propagation projects, must take both latency and efficiency into consideration. Users need fresher data with lower latency (how old the data is), and at the same time must deal with the overhead of moving and processing large amounts of data. Latency, data volumes, and shrinking batch windows are all barriers to data integration. Attunity Stream removes these barriers by providing an efficient way to process only the changes to enterprise data sources. Typical applications that require change data capture include:

ETL - for Data Warehousing and complex data movement
Data Replication - for rehosting data (for example, to enable reporting)
Data Synchronization - for maintaining integrity between systems

Typical tools that are used include IBM WebSphere DataStage, Microsoft SQL Server Integration Services (SSIS), Oracle Warehouse Builder, Business Objects Data Integrator, and Sunopsis.

Attunity Integration Suite Supported Systems and Resources


This section describes AIS support for the following:

Operating Systems
Data Sources and Adapters
Interfaces

Operating Systems
OS - Supported versions
Windows x86 (32-bit): Windows 2000, XP, Vista, and Windows Server 2003
Windows x64 (64-bit): Windows XP and Windows Server 2003
Windows IA64 (Itanium): Windows XP, Windows Server 2003
Linux RedHat: AS 3.0 through 5.0
AIX: Versions 5.2, 5.3
Solaris: Versions 2.8-2.10
HP UX: 11.11 (11i v1) and above only. Itanium systems are now supported by most Attunity data sources; for HP UX on Itanium, supported versions are 11.23 and 11.31
OpenVMS (Alpha): Versions 6.2-8.3
OpenVMS (Itanium): Versions 8.2-1 to 8.3
NonStop (Himalaya): G06.08 to G06.26
NonStop (Itanium): H-Series
OS/400: Versions 5.1 to 5.3
z/OS: Versions 1.1 through 1.8. Beginning with version 5.1.

Data Sources and Adapters


Data source - Supported versions
Data Sources:
Adabas/MVS: Versions 6.22 through 7.4 and version 8.1 (in single-user mode only)
Adabas/UNIX: Versions 3.3, 4.1, 5.1 and 6.1
Adabas/Windows: Versions 3.3, 4.1, 5.1 and 6.1
DB2/MVS: Versions 6.1, 7.1, 8.1, 9.1
DB2/UDB: Versions 8.0, 8.1, 9.1
DBMS: Version 4.2, 7
IMS/DB: Versions 6.1, 7.1, 8.1, 9.1, 10
ODBC: ODBC 2.5
Informix: Versions 10 and 11
Ingres II: Versions 2-2.56
Oracle: Versions 9iR2, 10g, and 11gR1
Oracle RDB: Versions 7.1-7.4
SQL Server: SQL Server 2000 (32-bit only), SQL Server 2005 (32-bit and 64-bit)
Sybase: Versions 12.5 and 15
Adapters:
IMS/TM: Versions 7.x through 9.x
Tuxedo: Versions 8.x through 9.x

Interfaces
Interface - Supported versions
ADO.NET: .NET 2.0
JDBC: Version 2.0
ODBC: Version 2.5
OLE/DB and ADO: Version 2.5
JCA: Versions 1.0, 1.5


2
Setting up Attunity Connect, Stream, and Federate
This section contains the following topics:

Overview
Setting up Machines
Administration Authorization
License Management
Importing and Exporting XML in Attunity Studio

Overview
Attunity Studio is used to configure and manage access to applications, data, and events on all machines running AIS. You make configurations in the Design Perspective. This perspective has tabs for configuration and metadata. These tabs enable the following configuration:

Setting up access to machines running AIS.
Configuring the daemon, which manages communication between AIS machines.
Setting up the access to the application, data or event on these machines.
Configuring metadata:
  To manage Attunity Metadata (ADD) for data sources that do not have metadata (such as DISAM), or have metadata that cannot be used by AIS.
  To view relational metadata.
  To extend metadata in ADD for relational data sources that require additional information not supplied by the native metadata (such as statistics for various relational data sources).
  To manage a snapshot of relational metadata converted to ADD. This is also called local copy metadata.
  To manage Application Adapter definitions (adapter metadata).
  To manage event definitions (event metadata).

You manage Attunity products in the Attunity Studio Runtime Manager perspective. Management includes the following:

Changing configuration settings when required.


Managing daemons and workspaces on any machine during runtime

A perspective consists of views and an editor area. Views are used to navigate and manage resources. The editor area is used to carry out the main tasks.
Figure 2-1 Studio Main Screen

Note:

You can use the properties that are displayed in the editor to find a machine where a data source or adapter is located, especially when several data sources or adapters on different machines have similar names. Identifying the location is useful when working on the Metadata tab of the Design Perspective.

To switch between the Design time and Runtime Manager perspectives, click Perspective (at the top right of the workbench) and select the required perspective from the list. For a list of buttons, see Workbench Icons.

Opening Attunity Studio


To start work with the Attunity Integration Suite (AIS), open Attunity Studio. As described in the Overview, you can make the configurations that you need with this tool.
To open Attunity Studio
Do one of the following:
From the Start menu, select Programs, then Attunity Integration Suite, and then select Attunity Studio.
Double-click the Attunity Studio shortcut on the desktop.


Special Note:

To work with solutions in Attunity Studio when using Turkish, add the following switch to the Target path in the Attunity Studio shortcut properties: -nl en

For example: "C:\Program Files\Attunity\Studio1\studio.exe -nl en"
When you open Attunity Studio for the first time, the Welcome Screen is displayed.

Setting up Machines
You use the Design perspective Configuration view to configure AIS machines, and the applications, data, and events on those machines.
To add a machine
1. Open Attunity Studio.
2. In the Design perspective Configuration view, right-click the Machines folder and select Add Machines. The Add machine screen opens.

Figure 2-2 The Add Machine Screen

Enter the following information in each field:

Host name/IP address: Enter the name of the machine on the network or click Browse to browse all the machines running a daemon listener on the specified port currently accessible over the network.
Port: Enter the number of the port where the daemon is running. The default port is 2551.
Display name: Enter an alias used to identify the machine if it is different from the host name (optional).


User name: Enter the user name of the machine's administrator.


Note:

You indicate the machine's administrator when the machine is installed or by using the ADD_ADMIN operation in the NAV_UTIL utility.

Password: Enter the password of the machine's administrator. This is the password for the user entered in the User name field. If no password is necessary to access this machine, leave this field empty.
Connect via NAT with fixed IP address: Select this if the machine uses the NAT (Network Address Translation) firewall protocol, with a fixed configuration, mapping each external IP to one internal IP, regardless of the port specified. For more information, see Firewall Support.

To edit a machine
1. In Attunity Studio, in the Configuration view of the Design perspective, expand the Machine folder.
2. Right-click the machine you want to edit and select Open. The Machine editor with the information for the existing binding opens in the editor area. You can make the following changes:

Connect via NAT with fixed IP address: Select this if the machine uses the NAT (Network Address Translation) firewall protocol, with a fixed configuration, mapping each external IP to one internal IP, regardless of the port specified.
Anonymous login: Select this if users can access the machine without password authentication. When this is selected, the User name and Password fields are not available.
User name: Enter the machine administrator's user name.
Password: Enter the machine administrator's password. This is the password for the user entered in the User name field.

After you add a machine, it is displayed in the Configuration view under the Machine folder. You can edit the machine's login information, or configure bindings, daemons, and users for each machine. For more information, see:

Setting up Data Sources and Events with Attunity Studio
Setting up Daemons
User Profiles and Managing a User Profile in Attunity Studio

Using an Offline Design Machine to Create Attunity Definitions


The offline design mode enables you to define the resources to AIS in Attunity Studio without having to connect to the actual machine where the definitions are implemented. For example, you can set up a machine even when the actual server machine is down, or set up a number of definitions for different machines on the same design machine. After the resources are defined on the design machine, you can drag and drop each definition to a server machine to implement them.


To define an offline machine
1. In the Configuration view of the Design perspective, right-click the Machines folder and select Add Offline Design Machine. The Add offline design machine screen opens.
2. Enter a name for the design machine.
3. Click Finish.

You can define all available resources on this machine. You can also set up metadata using a metadata import utility. Every resource is available on the design machine, no matter on which platform the resource needs to exist. For example, both HP NonStop data sources and z/OS data sources are available, even though on completion, you can only drag and drop the NonStop definitions (such as an Enscribe data source) to an HP NonStop machine.

Administration Authorization
You can provide access to Attunity Studio for:

Administrators: People granted access as administrators can add and edit resources in all areas of AIS.
Designers: People granted access as designers can add adapters, data sources, and CDC agents and create and edit definitions for them.
Users: People granted access as users have read-only access to all resources in AIS.

To grant administrative authorization in Attunity Studio
1. In Attunity Studio, in the Configuration view of the Design perspective, expand the Machines folder.
2. Right-click the machine you want to grant privileges to and select Administrative Authorization. The Administrative Authorization editor opens in the editor area. The name of the machine that you are granting access privileges to is shown on the tab at the top of the editor.


Figure 2-3 Administration Authorization Editor

3. Click the Everyone check box at the top of the Administrator, Designer, or User sections to allow all people who use AIS access as the selected type of user.
4. Clear the check box at the top of one or more of the sections to grant specific people access to that area. For more information, see Granting Access to Specific Users and Groups.

Granting Access to Specific Users and Groups
To grant a user or group access rights, add them to the list for the type of rights you want to grant. For example, you can grant user1 administrator rights by adding user1 to the list in the Administrator section. To grant rights to all users and groups, select the Everyone check box for any of the three sections.

To add users or groups
1. From the Administrators, Designers, or Users sections in the Administration Authorization editor, click Add user and enter the name of a valid user in the Add user screen. Make sure that the name entered matches a valid user account. To add groups to the list, click Add group and enter the name of a valid group in the Add group screen. Make sure that the name entered matches a valid group account.
2. Click OK to close the screen. The name of the user or group is added to the field.


To rename a user or group
1. From the Administrators, Designers, or Users sections in the Administration Authorization editor, select the user or group you want to rename and click Rename.
2. Change the name entered in the Rename user or Rename group screen to the name you want to use.
3. Click OK to close the screen. The changes are entered in the field.

To remove a user or group
1. From the Administrators, Designers, or Users sections in the Administration Authorization editor, select the user or group that you want to remove.
2. Click Remove. The user or group is removed from the field.

License Management
Before you can work with any product in AIS, you must register the product. You can register the product with a valid license file. The following sections describe how you can use license management.

Registering a Product
Viewing License Information

Registering a Product
You need to register the software before you can access data sources on a machine. Your Attunity vendor should provide you with a text file called license.pak. The PAK file contains details such as the product expiration date (if any), the maximum number of concurrent sessions allowed, which drivers you are authorized to use, and other information. After you make sure that the PAK file is installed on your machine, you must register it before you can use the product.
Notes:

Make sure you are connected to the Internet before carrying out the following procedure.
When you register a product, the new license will overwrite the old license. If you want to register a new product and continue using any previously registered products, then request a single license for all of the products you are using.

To register a product
1. In Attunity Studio, in the Configuration view of the Design perspective, expand the Machine folder.


2. Right-click the machine with the license you want to register, point to License management, and select Register product. The Register Product screen opens.

Figure 2-4 Register Product

3. Click Browse and browse to find the license (PAK) file. It is usually located in the directory where AIS is installed. The XML content of the license file is displayed in the screen.
4. Click Register. Attunity Studio contacts the Attunity registration server. A message is displayed stating whether the registration was successful. Contact your Attunity vendor if there is a problem with the registration or if you do not have a license file.

Viewing License Information


You can use Attunity Studio to view the contents of a PAK file. A PAK file contains details such as the product expiration date (if any), the maximum number of concurrent sessions allowed, and which drivers you are authorized to use.

To view license information
1. In Attunity Studio, in the Configuration view of the Design perspective, expand the Machine folder.

2. Right-click the machine with the license you want to view, point to License management, and select View license information. The View License Registration screen opens.

Figure 2-5 View License Information

3. Click Save as to create a new license file. You can also read the license information in this screen.

Importing and Exporting XML in Attunity Studio


All AIS definitions, such as Bindings, Daemons, adapters, and Data Sources, are saved as XML in DISAM files. You can back up the definitions by exporting the XML data to a separate file. You can also reload the definitions by importing the XML data back into AIS.
Note:

In some cases exporting XML data using Attunity Studio does not export all of the data. If this happens, you can use the IMPORT and EXPORT operations in the NAV_UTIL utility.

To import XML
1. In Attunity Studio, in the Configuration view of the Design perspective, right-click one of the following:


Machines (folder)
Any specific machine
Data Sources (folder)
Bindings (folder)
Any specific binding
Adapters (folder)
Any specific adapter
Daemons (folder)

2. Select Import XML definitions. The Import XML Definitions window opens.
3. Click Browse to open the Import XML Definitions dialog box and browse to the file with the XML data you want to import.
4. Click OK. The data opens at the bottom of the Import XML Definitions screen.
5. Click Finish to close the screen and save the data.

To export XML definitions
1. In Attunity Studio, in the Configuration view of the Design perspective, right-click one of the following:

Machines (folder)
Any specific machine
Data Sources (folder)
Any specific data source
Bindings (folder)
Any specific binding
Adapters (folder)
Any specific adapter
Daemons (folder)
Any specific daemon

2. Select Export XML definitions. The Export XML Definitions window opens.
3. Click Browse to browse to the location where you want to save the XML data.
4. Click OK to save the data.

When you import or export definitions, the XML data for the level you are using is transferred, including the data for all the sublevels. For example, if you export data for a daemon, the data for the binding and all of its data sources, adapters, and events is transferred. However, if you export the data for an adapter, only the data for that adapter is transferred.


3
Binding Configuration
This section contains the following topics:

Binding Configuration Overview
Setting up Bindings in Attunity Studio
Binding Syntax
Sample Binding
Environment Properties

Binding Configuration Overview


The information that Attunity needs to access Applications, data sources, and events is defined in a Binding configuration. A binding configuration always exists on a Server Machine, where data sources and applications to be accessed using Attunity reside. Additionally, a binding configuration can be defined on a Client Machine to point to data sources on a server machine. When you access data using ADO, a binding configuration must exist on the client machine. When you access data using JDBC, ODBC or .NET, or applications using JCA, XML, COM or .NET, a binding configuration is not required on the client machine. In these cases, you must install the Attunity thin client kit.

Server Binding
A Binding configuration on a server includes the following:

Definitions for data sources that are accessed using Attunity Integration Suite (AIS).
Shortcuts to data sources on other server machines that can be accessed from the current Machine.
Application Adapter definitions for applications that can be accessed using AIS, including application-specific properties.
Event queue definitions for event queues that are managed using AIS, including event-specific properties.
Environment properties that apply to all the data sources, adapters, and machines listed in the binding configuration. For more information, see Environment Properties.


Client Binding
A Binding configuration on a client includes the following:

Shortcuts to data sources on other Server Machines that can be accessed from the current machine.
Environment properties that apply to all the data sources, adapters, and Machines listed in the binding configuration. For details, see Environment Properties.

You use Attunity Studio for Adding Bindings and binding configurations or for Editing Bindings that are already in use. NAV is the default binding configuration. You can use this configuration to define all the data sources and adapters you want to access via AIS.

Setting up Bindings in Attunity Studio


The information that Attunity needs to access applications, data sources, and events is defined in a binding configuration. Bindings are configured on a server machine with the data sources and applications that you are working with. A binding configuration can also be defined on a client machine to point to data sources on a server machine.
Note:

The configuration supplied with the product installation includes the NAV binding. This configuration is used if a specific binding is not defined to access an application, data source, or event queue.

Use Attunity Studio to configure one or more bindings, each with a set of application adapters, data sources, and events. Each binding configuration has its own environment that defines the binding (such as cache sizes for storing information in memory during a session). The following sections describe the required tasks to define a binding:

Adding Bindings
Editing Bindings

Adding Bindings
You can set up a number of different bindings. Each binding may be for a different set of applications, data sources or events. You can also create different binding configurations for the same application adapters, data sources or events, with each having a different set of requirements. For example, you can set up separate configurations that allow different users access to specific resources.

To add a new binding
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder.
3. Expand the machine where you want to add the binding.


Note:

You can add a new binding configuration in a design machine in the offline design mode and later drag and drop the binding to this machine. For more information, see Using an Offline Design Machine to Create Attunity Definitions.

4. Right-click the Bindings folder and select New Binding.
5. Enter a name for the binding in the New Binding window.
Note: Attunity Studio does not support renaming a binding. Changing the binding name can cause problems with the data sources, adapters, and events for that binding.

In the event that you want to change the binding name, you must create a new binding and copy the data sources, adapters, and events to the new binding.
6. Click Finish. The new binding editor opens. See Editing Bindings for information on entering information in the binding editor.

Editing Bindings
The binding editor is used to set the binding environment and to define remote machines for a binding. The following sections describe how to edit the binding:

Setting the Binding Environment in Attunity Studio
Defining Remote Machines in a Binding

To open the binding editor
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Bindings folder.
3. Right-click the binding you want to edit and select Open. The binding editor opens in the editor area with the information for the existing binding. Make any changes you want to the binding properties.

Setting the Binding Environment in Attunity Studio


You set the binding environment from the binding editor's Environment tab. The environment is a set of properties that govern how the parts of the binding work. The properties are shared by all adapters, data sources, and events in the binding. Follow these steps to set the environment properties in a binding.

To set an environment for the binding configuration
1. Right-click the binding you want to edit and select Open. The binding editor opens in the editor area.


Figure 3-1 Environment Properties

2. On the Environment tab, you can do any of the following to work with the binding environment:

Select the Use NAV environment check box. When this is selected, the NAV environment is used for all of the Environment Properties. The property values are set to the NAV values at runtime. The editing controls for each property are disabled. If you want to change any of the Environment Properties, clear the check box.
Click Copy NAV environment to use the default settings, which are the original settings for the NAV binding. This automatically sets all of the environment properties to the NAV values and allows you to edit individual properties as needed. The property values are created as NAV values during design time.
Click Restore default values to automatically restore the environment settings that were defined in the development environment. This means that the values revert to what they were when the AIS solution was first deployed. All values that were changed in the current session or any previous session (including values changed with NAV_UTIL or Attunity Studio) return to their original values. In addition, each section in the Environment Properties editor has its own Restore default values button. Click this button to restore the default values for the properties in that section only.

3. Edit any of the environment properties displayed on the Environment tab. For an explanation of the properties, see Environment Properties.

Defining Remote Machines in a Binding


Every remote machine accessed by AIS must be defined in the binding configuration. The machines to be accessed are listed on the Machines tab of the binding editor.


To set up access to a remote machine
1. Right-click the binding you want to edit and select Open. The binding editor opens in the editor area.
2. Click the Machines tab.

Figure 3-2 Machine Tab

3. Click Add to open the Add Remote Machines dialog box.

Figure 3-3 Add Remote Machines Screen


4. Enter the following information:

Host name/IP address: Enter the name of the machine on the network, or click Browse to browse all the machines running a daemon listener on the specified port that are currently accessible over the network.
Port: Enter the port number where the daemon is running. The default port is 2551.
Display name: Enter an alias used to identify the machine if it is different from the host name (optional).
User name: Enter the machine administrator's user name.
Note:

Define the machine administrator when the machine is installed using Attunity Studio or by using the ADD_ADMIN operation in NAV_UTIL.

Password: Enter the machine administrator's password. This is the password for the user entered in the User name field. If no password is necessary to access this machine, leave this field blank.
Connect via NAT with fixed IP address: Select this if the machine uses the NAT (Network Address Translation) firewall protocol, with a fixed configuration, mapping each external IP to one internal IP, regardless of the port specified. For more information, see Firewall Support.

5. Click OK. If the users with access to the remote machines are known, you can set the user profile for the machine by clicking Security on the Machines tab. If the users are not known, you can define them later from the Users folder in the Configuration view. For more information, see Managing a User Profile in Attunity Studio.

Binding Syntax
The Binding settings in XML format include the following statements:

A <remoteMachines> Statement, specifying the remote machines that can be accessed from the current machine by using <remoteMachine> statements.
A <datasources> Statement, specifying the data sources that can be accessed by using <datasource> statements. This statement specifies the following:
   A name to identify the data source
   The data source type
   General information
   <config> statements, specifying specific properties for the data source driver

An <environment> statement, specifying the properties for the specific binding. For details, see Environment Properties.
An <adapters> Statement, specifying the Application Adapters that can be accessed using <adapter> statements. This statement specifies the following:
   <adapter> Statement, specifying a name to identify the Application and the application type.
   <config> statements, specifying specific properties for an application adapter.

See Sample Binding for an XML example of a binding.

<remoteMachines> Statement
The <remoteMachines> statement lists the names of the accessible servers, using <remoteMachine> statements. These statements are only necessary when you connect to data sources through a shortcut on the client machine. In other cases (such as accessing an application, or a data source using JDBC), the location is specified as part of the Connect String.

<remoteMachine> Statement
The <remoteMachine> statement lists the names and IP addresses of the remote machines with data sources that are accessed using data source shortcuts on the current machine. The names are used as aliases for the IP addresses in the <datasource> statements. This enables you to redefine the location of a group of data sources (on a given machine) by changing the IP address associated with this alias. The format is:
<remoteMachine name="alias" address="address" port="port_number" workspace="workspace" encryptionProtocol="RC4|DES3" firewallProtocol="none|nat|fixednat"/>

Where:

name: The name of the remote machine that is recognized by AIS. The name's maximum length is 32 characters, and it must start with a letter. This name cannot be the name of a data source specified in a <datasources> statement.
Note:

The name does not need to relate to the name of the machine on the network.

address: The IP address of the remote machine.
port: The port on the remote machine where the AIS Daemon is running. If you do not specify a port number, the system allocates the default server port, 2551.
workspace: The specific working configuration specified for this binding by the daemon. A Workspace must be defined in the daemon configuration on the remote machine.
encryptionProtocol: The protocol used to encrypt network communications. AIS currently supports the RC4 and DES3 protocols.
firewallProtocol: The firewall protocol used. Valid values are none, nat, or fixednat. The default is none. NAT (Network Address Translation) is a firewall protocol where internal IP addresses are hidden. It enables a network to use one set of IP addresses for internal traffic and a second set of addresses for external traffic. NAT translates all necessary IP addresses. However, using NAT requires every access by every client to go through the daemon port, even after a specific server process has been assigned to handle the client. Specifying fixednat for this parameter sets AIS to access this remote machine through a firewall using NAT with a fixed IP address. When the server address is returned to the client and the client sees that the IP is not the IP of the daemon, it ignores the IP and uses the daemon's IP instead. It is recommended to use fixednat to access data via a firewall. For more information, see Firewall Support.
Example 3-1 <remoteMachines> statement

<remoteMachines>
   <remoteMachine name="ALPHA_ACME_COM" address="alpha.acme.com"/>
   <remoteMachine name="SUN_ACME_COM" address="sun.acme.com" port="8888" workspace="PROD"/>
</remoteMachines>
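The following sketch builds on Example 3-1 to show the optional attributes described above. The host name and workspace reuse the illustrative values from that example; the port, encryption, and firewall settings shown here are examples only, not recommendations for a specific installation:

<remoteMachines>
   <remoteMachine name="SUN_ACME_COM" address="sun.acme.com" port="2551"
      workspace="PROD" encryptionProtocol="RC4" firewallProtocol="fixednat"/>
</remoteMachines>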

<adapters> Statement
This statement lists the accessible application adapters using the <adapter> statement.

<adapter> Statement
An <adapter> statement specifies the name and properties of an AIS application adapter. The basic format is as follows:
<adapter name="name" type="type" definition="definition_name"> <config .../> </adapter>

Where:

name: The name of the adapter. The maximum length is 32 characters.
type: The type of the application adapter to be accessed. This value is different for each application adapter. Refer to a specific application adapter for the value of this parameter.
definition: The name of the adapter metadata used to describe the adapter. If the value here is the same as the adapter name, it can be omitted.
Note:

Some adapters have an internal definition, and a value here must be omitted.

<config> Statement
A <config> statement specifies the configuration properties of an application adapter. The configuration information is specific to each adapter type. The basic format is as follows:
<adapter name="name" type="type"> <config attribute="value" attribute="value" ... /> </adapter>

Where:

attribute: The name of the configuration property. Attributes are adapter-dependent. For example, the preloaded attribute is set with an event router adapter or event to initiate the adapter as soon as the Daemon is started.
Note:

The preloaded attribute can also be set when the adapter definition is large, so that the time needed to allocate a server when the user opens a connection is shortened, since the definition has already been loaded.

value: The value of the configuration property.
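For illustration only, the following sketch shows an <adapter> statement whose <config> statement sets the preloaded attribute described above. The adapter name and type used here are placeholders rather than values documented in this guide; substitute the type listed for your specific application adapter:

<adapters name="NAV">
   <!-- placeholder adapter name and type; preloaded is the configuration property described above -->
   <adapter name="MyEventRouter" type="router">
      <config preloaded="true"/>
   </adapter>
</adapters>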

<datasources> Statement
This statement lists the accessible data sources using the <datasource> statement.

<datasource> Statement
A <datasource> statement specifies the name and type of the data source and the information required to connect to the data source. The basic format is as follows:
<datasource name="name" type="type" attribute="value"> <config .../> </datasource>

Where:

name: The name of the data source that is recognized by AIS. The maximum length is 32 characters. The name cannot include hyphens (-). It can include underscores (_). This name cannot be the name of a machine specified in a <remoteMachines> statement.
type: The type of the data source to be accessed. This value is different for each data source driver. Refer to a specific data source driver for the value of this parameter. The value of this field when you define a data source shortcut (where the data source resides on another machine) is REMOTE.
attribute: General data source attributes, such as read only. These attributes are set in Attunity Studio on the Advanced tab for the data source.
Note:

The localCopy and noExtendedMetadata attributes are set automatically on the Metadata tab of the Design perspective when changes are made to native metadata. For more details, see Native Metadata Caching and Extended Native Data Source Metadata.

The following additional attributes are supported:


Table 3-1 Data Source Supported Attributes

In each entry, the first line is the field name as it appears on the data source Advanced tab in AIS Studio, followed by the binding attribute and its description.

Transaction type
Attribute: transactionType="value"
Description: The transaction level (0PC, 1PC, or 2PC) that is applied to this data source, no matter what level the data source supports. The default is the data source's default level.

Syntax name
Attribute: syntaxName="value"
Description: A section name in the NAV.SYN file that describes SQL syntax variations. For further details about this field, see Using the Attunity Connect Syntax File (NAV.SYN). The default syntax file contains the following predefined sections:
   OLESQL driver and the SQL Server 7 OLE DB provider (SQLOLEDB): syntaxName="OLESQL_SQLOLEDB"
   OLESQL driver and JOLT: syntaxName="OLESQL_JOLT"
   Rdb driver and Rdb version: syntaxName="RDBS_SYNTAX"
   ODBC driver and EXCEL data: syntaxName="excel_data"
   ODBC driver and SQL/MX data: syntaxName="SQLMX_SYNTAX"
   ODBC driver and SYBASE SQL AnyWhere data: syntaxName="SQLANYS_SYNTAX"
   Oracle driver and Oracle case sensitive data: syntaxName="ORACLE8_SYNTAX" or syntaxName="ORACLE_SYNTAX". For case sensitive table and column names in Oracle, use quotes (") to delimit the names. Specify the case sensitivity precisely.

Default table owner
Attribute: owner="value"
Description: The name of the table owner that is used if an owner is not indicated in the SQL.

Read/Write Information
Attribute: readOnly="true|false"
Description: true: The data source is in read-only mode. All update and data definition language (DDL) operations are blocked. The default value is false.

Repository directory
Attribute: objectStoreDir="value"
Description: Indicates the location of a repository for a data source.

Repository name
Attribute: objectStoreName="value"
Description: The name of a repository for a specific data source. The name is defined as a data source in the binding configuration with a type of Virtual and is used to store AIS views and stored procedures specific to the data source, when this is wanted in preference to the default SYS data.

Example 3-2 <datasources> statement

<datasources name="NAV">
   <datasource name="ADABAS" type="ADABAS">
      <config dbNumber="3" predictFileNumber="7"/>
   </datasource>
   <datasource name="DB2" type="DB2">
      <config dbname="person2"/>
   </datasource>
   <datasource name="DEMO" type="ADD-DISAM">
      <config newFileLocation="/users/nav/dis"/>
   </datasource>
   <datasource name="DISAM" type="ADD-DISAM">
      <config newFileLocation="/users/nav/dis"/>
   </datasource>
   <datasource name="SYBASE" type="SYBASE">
      <config server="SYB11_HP" dbName="personnel"/>
   </datasource>
</datasources>

Note:

On the HP NonStop platform, the Repository Information objectStoreDir and objectStoreName attributes do not affect Alternate Key Files for the following data sources:
Enscribe
SQL/MP (if a local copy or extended metadata is used)

These are always created in the NAVROOT subvolume with uniquely generated filenames.
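For comparison with the local data sources in Example 3-2, the following sketch shows a data source shortcut to a remote machine. The type is remote and the connect attribute names a remote machine alias defined in the <remoteMachines> statement (here, the sun_acme_com alias from Example 3-1); the same shortcut also appears in the Sample Binding later in this chapter:

<datasources name="NAV">
   <datasource name="ORA" type="remote" connect="sun_acme_com"/>
</datasources>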

<config> Statement
This statement specifies configuration properties of a data source. The configuration information is specific to each adapter type. The basic format is as follows:
<datasource name="name" type="type"> <config attribute="value" attribute="value" ... /> </datasource>

Where:

attribute: The name of the configuration property.
value: The value of the configuration property.

Example 3-3 <config> statement

<datasources>
   <datasource name="DEMO" type="ADD-DISAM">
      <config newFileLocation="/users/nav/dis"/>
   </datasource>
</datasources>

Sample Binding
This section shows a sample binding in XML format. You can also view the binding.bnd XML file in the XML editor. To open this file in the editor:

Right-click on the binding you want to view and select Open as XML.

This displays a graphical interface where you can define the various aspects of a solution. This interface lets you make changes easily without having to manually edit the XML file. For more information, see Editing XML Files in Attunity Studio. If you want to view the XML in its original format, click the Source tab after you open the binding in XML.


The following Binding configuration provides information for the NAVDEMO sample data source and for a local data source (an ORA_EXT Oracle database), one remote data source, and one adapter ("MathLegacy", which uses the AIS Legacy Plug adapter):
<?xml version="1.0" encoding="ISO-8859-1"?>
<navobj version="...">
   <bindings>
      <binding name="NAV">
         <remoteMachines>
            <remoteMachine name="SUN_ACME_COM" address="sun.acme.com" workspace="PROD"/>
         </remoteMachines>
         <environment name="NAV">
            <debug generalTrace="true"/>
            <misc/>
            <queryProcessor/>
            <optimizer goal="none" preferredSite="server"/>
            <transactions/>
            <odbc/>
            <oledb/>
            <tuning/>
         </environment>
         <datasources name="NAV">
            <datasource name="NAVDEMO" type="ADD-DISAM">
               <config newFileLocation="$NAVDEMO"/>
            </datasource>
            <datasource name="ORA_EXT" type="ORACLE8" connect="@ora8_ntdb"/>
            <datasource name="ORA" type="remote" connect="sun_acme_com"/>
         </datasources>
         <adapters name="NAV">
            <adapter name="MathLegacy" type="LegacyPlug">
               <config dllName="c:\legacy\prc_samples.dll"/>
            </adapter>
         </adapters>
      </binding>
   </bindings>
</navobj>

Environment Properties
Each Binding configuration includes its own environment, specified in the environment properties.
Note:

When using an ADO front-end application, the environment used is the environment of the first binding configuration used in the program, even if the binding configuration used is changed during the program.

To display environment properties for the binding configuration in Attunity Studio, right-click the binding configuration and select Open. The environment properties are listed in the Environment tab. The following sections describe each category in the Environment Properties editor.


Debug
General
Language
Modeling
ODBC
OLE DB
Optimizer
Parallel Processing
Query Processor
Temp Features
Transaction
Tuning
XML

Debug
The following list shows the debug properties. The debug properties control what information is reported for debugging purposes.

ACX trace: Select this to write the input XML sent to the adapter and the output XML returned by the adapter to the log file.
GDB trace: Select this to write the driver transactions created using the AIS SDK to the log. For details, refer to the Attunity Developer SDK.
General trace: Select this to write the general trace information used by AIS to the log. The default writes only error messages to the log. Note: Changing the default setting can degrade AIS performance.

Query warnings: Select this to generate a log file of Query Processor warnings.
Add timestamp to traced events: Select this to add a timestamp on each event row in the log.
Trigger trace: Select this to log trigger information each time that a database executes a trigger.
Adapter trace:
Control trace:
Query processor trace:
Performance trace:

Binary XML log level: Select the binary XML log level from the list. This parameter is used for troubleshooting. The following logging levels are available:

None
API
Info
Debug


Log file: Enter the full path and filename of the log file for messages. The default log file (NAV.LOG) is located in the TMP directory under the directory where AIS Server is installed. To send log messages to the console instead of a file, set logFile to a minus ("-") character. The following message types are written to the log:

Error messages.
Trace information about the query optimization strategy (when General trace is selected).

For HP NonStop, the default AIS log file is called NAVLOG and it is located in the subvolume where AIS Server is installed. If the log file location is described by a UNIX-type path (such as /G/d0117/ac3300/navlog), then the log file can be viewed from other processes while it is open. Otherwise, the log is not readable while it is open. To view the file, use the following:

FUP COPY filename,, SHARE

For z/OS, the default AIS log file is NAVROOT.DEF.NAVLOG, where NAVROOT is the high-level qualifier specified when AIS Server is installed.

Trace directory: Enter the directory where AIS writes the log generated by the optimizer files (with a PLN extension). The optimizer files include details of the optimization strategy used by AIS. By default, these files are written to the same directory as the log file (see Log file).
Transaction log file: Enter the full path and filename of the log file that logs transaction activity. This log file is used during recovery operations. On Windows platforms, the default log file (TRLOG.TLF) is written to the same directory as the NAV.LOG file (which is defined by the debug logFile parameter). It is recommended to use the default log file and perform recovery from a PC.
Transaction trace: Select this to write 2PC and XA transaction-related events to the log.
Optimizer trace: Select this to write trace information and information about the Query Optimizer strategy to the log file. If this property is selected, the following properties are also enabled:
   Full trace: Select this to enable all optimizer traces.
   Trace groups: If using Full trace, you can select this to enable generated groups optimizer traces. Trace groups is unavailable if Full trace is not selected.
Transaction extended logging: Select this for the transaction manager to write additional information about transactions to the log.
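In the binding XML, the debug properties appear as attributes of the <debug> element in the <environment> statement. The following minimal sketch uses only attributes referenced elsewhere in this chapter (generalTrace and the logFile parameter); the log file path is illustrative only:

<environment name="NAV">
   <debug generalTrace="true" logFile="/users/nav/tmp/nav.log"/>
</environment>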

General
The following list shows the general properties. The general properties control general configuration properties for the binding.

Compress object store: Select this to allow compressing objects in the repository if they use more than 2K of storage. This property is automatically selected for bindings created for a CDC Agent with a Staging Area.


Read V3 definition: Select this if you are upgrading AIS from version 3.xx. This property is selected by default. If you do not need to use this behavior, clear the check box.
Temporary directory: Enter the path to the directory where temporary files are written, including the temporary files created for use by hash joins and for sorting files. The default is the current directory. The following describes how to determine where your temporary directory should reside:
1. Select a directory that contains temporary files only. You can then easily remove these files if necessary (for example, if the process stopped in the middle).
2. Select a directory on a disk that has a significant amount of free disk space.

Year 2000 policy: This property defines how two-digit years are converted into four-digit years. Enter a numeric value in this field. Two policies can be used:
Fixed Base Year: If this property is set to a value greater than or equal to 1900, the Fixed Base Year policy is used. In this case, the property value is the first four-digit year after 1900 that can be represented by two digits. For example, if the value is set to 1905, the years 2000-2004 are represented by 00-04, and all other two-digit values map to 19xx.
Sliding Base Year: If this property is set to a positive value less than 100, the Sliding Base Year policy is used. In this case, the property value is the number of years ahead of the current year that can be represented by a two-digit number. With each passing year, the earliest year that can be represented by a two-digit number moves a year later. For example, with a value of 5 in the year 2008, the two-digit years 00 through 13 map to 2000 through 2013, and all other two-digit years map to the 1900s.
When the parameter is not set, or when it is set to a value outside the range of values defined for the above policies, the default value of 5 and the Sliding Base Year policy are used.

NAV_UTIL editor: Enter the text editor to use when you use NAV_UTIL EDIT. The default is the native text editor for the operating system.
Journal file name: Enter the full path to the journal file (including the file name) for use with CDC on DISAM. The default journal file is located in the DEF directory of the AIS installation.
Cache buffer size: Enter the number of bytes to be used for a memory buffer on a client machine, which is used by the AIS client/server to store read-ahead data. The default is 200000.

Language
The Language section lets you set the default language for applications in the binding.

To set the default language for the binding
From the Language list, select the National Language Support (NLS) supported language to use in this binding. Valid values are:

ARA (Arabic)
ENG (English)
FR (French)
GER (German)
GREEK (Greek)
HEB (Hebrew)
JPN (Japanese)
KOR (Korean)
SCHI (Simplified Chinese)
SPA (Spanish)
TCHI (Traditional Chinese)
TUR (Turkish)

From the Codepage list, select the codepage that you want to use with this language. The code pages available are determined by the Language that is selected. If you have additional code pages available, you can manually enter them in this field. Note: If you change the language, the code page will also change. Check to be sure that you want to use the selected code page with the language you selected. If no codepage is selected, the default codepage for the selected language is used.

From the NLS string list, select the NLS string for this language and code page. The NLS strings available are determined by the code page that is selected. If you have additional NLS strings available, you can manually enter them in this field. The codepage is used by a field with a data type defined as nlsString. This parameter is used for a field with a codepage that is different from the machine's codepage. This property includes the following values:

The name of the codepage.
Whether the character set reads from right to left (as in Middle Eastern character sets).

For example, the following specifies a Japanese EUC 16-bit codepage:


<misc nlsString="JA16EUC,false"/>

For more information, see NLS Support at the Field Level.

Modeling
The modeling section lets you define how to handle nonrelational data and arrays. For more information, see Handling Arrays.

Array metadata model: Select the virtual array flattening model for the binding. You can select one of the following models:
virtualarrayTables: In this model, a virtual table is generated for every array in the parent record, with specially generated virtual fields that connect the parent and the virtual table. This is the default for all cases except CDC.
virtualarrayViews: In this model, the parent field is replaced with a unique key. Virtual views use the same metadata as virtual tables. This is the default model for CDC.

Reduce sequential flattening: Select this for sequentially flattened tables to not return a row that lists only the parent record, without the values of the child array columns. This option is available only if you select virtualarrayViews as your Array metadata model.


Reduce virtual views: Select this for virtual views to not return a row that lists only the parent record, without the values of the child array columns. By default the property is selected. Clear the check box to return the row. This option is available only if you select virtualarrayViews as your Array metadata model.

Generate unique index names: Select this to generate a unique name for every index on a table that is defined in a non-relational system. When this is selected, the names of all indexes on non-relational tables are exposed in the following format: table_name_KEYkey_number. For example, if a table is called X, the name of the first index would be X_KEY0.

ODBC
The following list shows the properties for ODBC. The ODBC properties set the parameters used when using ODBC to work with AIS.

Maximum active connections: Enter the maximum number of connections that an ODBC or OLE DB application can make through AIS. The default is 0. The greater the number of connections possible, the faster the application can run. However, other applications will run slower and each connection is counted as a license, restricting the total number of Users who can access data through AIS concurrently. For example, this is true when using MS Access as a front-end, because MS Access allocates more than one connection whenever possible.

Maximum active statements: Enter the value returned for the InfoType of the ODBC SQLGetInfo API. The default (0) means that there is no limit on the number of active statements.
Force qualify tables: Select this to report the catalog and table name together as a single string (in this format: DS:table_name).

OLE DB
The following list shows the properties for OLE DB. The OLE DB properties set the parameters used when using OLE DB to work with AIS.

Trace: Select this to write the trace information used when working with OLE DB providers to the log. If not selected, only error messages are written to the log. Note: Changing the default setting can degrade AIS performance.

Suppress chapters: Select this to suppress chapters in ODBC. In this case, chapters are exposed as regular columns. Use this property to prevent an SQL Server error for a bad datatype.
Maximum row handles: Enter the maximum number of hrows (row handles) that can reside in memory at one time under OLE DB. This property is related to the ADO CacheSize property. Set both of these properties to the same value. If the two properties are different, the smaller value is used and the other is ignored.
OLE threads: Enter the number of open threads allowed when working with OLE transactions. These threads are used for operations received from the MSDTC. The minimum value is 5. The optimum value is 15. The maximum value is 25.


Optimizer
The following list shows the optimizer properties. The optimizer properties control how the query optimizer works in the binding.

Avoid scan: Select this to force the optimizer not to choose the scan strategy, if a different strategy can be used.
Disable multi-index: Select this to disable the multi-index storage. For Adabas only.
Disable cache without index: Select this to disable non-index caches.
Disable flattener: Select this to instruct the query optimizer not to flatten queries including nested queries.
Disable hash join: Select this to disable hash join optimization. When hash joins are enabled, a significant amount of disk space is required (see Hash maximum disk space). If the system does not have available disk space, use this option to disable hash join optimization.
Disable index cache: Select this to disable index caching.
Disable lookup cache: Select this to disable the lookup cache.
Encourage lookup cache: Select this to define the optimizer to ignore the cache buffer size restriction (hashBufferSize) on a group consisting of a single table.

Disable pass thru: Select this to force the optimizer to execute a full optimization for the query even if it can be delegated to the relational backend database as is.
Disable Tdp union: Select this to define the optimizer to handle separate data sources as if they are on a different backend, even if they are part of the same database or remote machine.
Disable subquery cache: Select this to disable the cache for subqueries.
Analyzer query plan: Select this to write the Query Optimizer plan to a plan file for analysis using the AIS Query Analyzer.
Optimization goal: Select the optimization policy to use from the list. The following policies are available:
none: All row optimization is used. This is the default value.
first: First row optimization is performed based on the assumption that the results produced by the query are used as the rows are retrieved. The query optimizer uses a strategy that retrieves the first rows as fast as possible, which might result in a slower overall time to retrieve all the rows.
all: Optimization is performed based on the assumption that the results produced by the query are used after all the rows have been retrieved. The query optimizer uses a strategy that retrieves all the rows as fast as possible, which might result in a slower time to retrieve the first few rows.
Note: Aggregate queries automatically use all row optimization, regardless of the value of this parameter.
Hash maximum disk space: Enter the maximum amount of disk space (in MB) that a query can use for hash joins. The default is -1 (which indicates unlimited, that is, all the free space on the allocated disk). If a query requires more space than allocated via this parameter, the query execution will stop. The minimum value for this parameter is 20 MB.


Note: Temporary files are written per query. Therefore, if several users can execute queries at the same time, adjust the amount of space available so that the total that can be allocated at any one time does not exceed the available space.
HP NonStop: If AIS files reduced to disk are larger than 500 MB, use this parameter to enlarge the default size of the file that AIS opens. The default for this parameter on HP NonStop machines is 478.3 MB, which consists of a primary extent size of 20K, a secondary extent size of 1000, and a maximum of 500 extents.

Preferred site: Select the machine where you want to process the query. Normally the query is processed as close to the data source as possible (either using the query processing of the data source, or if this is not available, the Query Processor on the same machine as the data source). If a situation arises in which it is more efficient to process the query on the Client Machine (for example, when the remote machine is heavily overloaded), you can tune AIS to process all or part of the query locally. The extent that performance is improved by processing all or some of the query locally can be determined only on a trial and error basis. Consider the following points when processing the query locally:

Increased communication costs.
Decreased server workload.

Before adjusting this parameter, check the log to see if other tuning is more appropriate. The options are:

server (the default): The query is processed on the server.
nearServer: The query is processed mostly on the server, with parts of the query processed on the client (determined by the specific query).
nearClient: The query is processed mostly on the client, with parts of the query processed on the server (determined by the specific query).
client: The query is processed on the client.

Maximum tables in group:
Default row cardinality: Enter the default row cardinality. If this is set to a value other than 0, the value is used by the Optimizer as the default number of rows for pass-through queries and for tables that have no statistical information.
Maximum groups for reorder:
No LOJ delegation: Select this if you do not want to delegate LEFT OUTER JOIN queries to the relational backend database. AIS will attempt to suppress LOJ queries. If not selected, every query can be delegated as is.
Use recursive LOJ allocation: Select this to allow recursive optimization of queries that include LEFT OUTER JOINs. By default, this is selected. If you want to disable recursive LOJ delegation, clear the check box.
LOJ recursive optimization limit: Enter a value only if Use recursive LOJ allocation is selected.

Disable semi-join: Select this to disable semi-join optimization.
Semi-join in values factor: Enter the number of parameters that a semi-join strategy sends to an RDBMS.

Disable order-by-index strategy: Select this to disable the order-by-index strategy. In this strategy, the order of the results is achieved by accessing the table through the index that contains its segments in the ORDER BY clause, so that the query processor does not sort the results. If this property is selected, the strategy is not used and sorting is done by the query processor. Note that this strategy is rarely selected, and using this property affects only queries with an ORDER BY and with no WHERE clause on indexes. All other cases use the same strategies as they do when this property is not selected.
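In the binding XML, the optimizer properties appear on the <optimizer> element of the <environment> statement; the Sample Binding earlier in this chapter shows <optimizer goal="none" preferredSite="server"/>. The sketch below simply sets those two documented attributes to other documented values (first row optimization, processed mostly on the client):

<environment name="NAV">
   <optimizer goal="first" preferredSite="nearClient"/>
</environment>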

Parallel Processing
The following list shows the parallel processing properties. The parallel processing properties control how parallel processes are handled in the binding.

Disable threads: Select this to disable multi-threading. If this is selected, the following properties are disabled.
Disable threaded read ahead (QP): Select this to disable read-ahead functionality.
Disable query read ahead (QP): Select this to disable read-ahead functionality for components using Query Processor services.
ODBC async execution: Select this to enable ODBC asynchronous execution.
Disable QP parallel execution: Select this to disable parallel processing for query execution. This option is available only if both Disable threaded read ahead (QP) and Disable query read ahead (QP) are not selected.
Hash parallelism: Select this to read both sides of hash joins at the same time. By default, this property is selected. If you do not want this behavior, clear the check box.

Query Processor
The following list shows the query processor properties. The query processor properties control how the query processor processes requests in the binding.

Disable command reuse: Select this to disable Query Processor caching of the executed state of a query for reuse.
Disable DS property cache: Select this to disable caching of data source properties.
Disable insert parameterization: Select this to disable parameterization of constants in INSERT statements.
Disable metadata caching: Select this to disable caching of object metadata. If this is selected, the object metadata is taken from the original data source instead of the cache.
Disable query parameterization: Select this to not convert constants into parameters when accessing data sources.
Disable row mark field fetch: Select this for OLE DB getRows errors to be marked and reshown on every getRows, if the rowset is active.
Compile after load: Select this to always compile an AIS procedure or view after it is read.
Ignore segments bind failure: This property determines how AIS responds when the execution of one of the segments of a segmented data source fails:
Select this to log a message and continue execution. This is the default setting.
Clear the check box to log a message and stop execution.
By default, this property is selected. If you want to stop execution after sending a message, clear this check box.


Prompt database-user password: Select this to configure AIS to prompt the user for security information when accessing a data source.
Use alternate qualifier: Select this to use the @ symbol instead of a colon (:) when connecting to multiple data sources. Note: You should use this value when building an application using PowerBuilder from Sybase Inc. or Genio from Hummingbird Ltd.

Use table filter expression: Select this to enable the use of tables that have filter expressions specified in their metadata. For details of filters in ADD, see The <table> Statement for information on the filter property, or see the Metadata General Tab for information on using the Filter expression in Attunity Studio.
Write empty string as null: Select this to replace empty strings in a SET clause of an UPDATE statement or in a VALUES list of an INSERT statement with null values.
Optimistic for update: Select this to use optimistic locking as the default locking behavior on queries with a FOR UPDATE clause.
Disable compilation cache: Select this to disable saving successfully compiled statements in the cache.
Maximum SQL cache: Enter the maximum number of SQL queries that can be stored in cache memory. This property's value is ignored if Disable compilation cache is selected. The default is 3.
First tree extensions: Enter the maximum size allowed for an SQL query after compilation. The default is 150.
Maximum columns in parsing: Enter the maximum number of columns that a query can reference. The default is 500.
Maximum segmented database threads: Enter the maximum number of open threads allowed when working with segmented databases.
Minimum number of parameters allocated: Enter the minimum number of parameters that can be used in a query.
Continuous query retry interval: Enter the number of seconds that the query processor waits before executing a query again, when no records are returned. The default is 2.
Continuous query timeout: Enter the number of seconds that the query processor continues to issue queries, when no records are returned. The default is 3600 (one hour), which indicates that after an hour without new messages the continuous query ends. Enter 0 to indicate that there is no timeout and the continuous query does not end automatically.
Continuous query prefix: Enter a prefix to replace the $$ prefix that is used to identify the continuous query special columns. For example, if you enter ##, then the continuous query alias is ##StreamPosition and the control command alias is ##ControlCommand.
Arithmetic fixed precision: Enter an integer to determine the precision scale factor for floating decimal positions. The default is 0, which indicates that the exact arithmetic function is not used. When the value is set to a small positive integer, special precise floating point arithmetic is used in the query processor. The value determines the precision scale factor (for example, a value of 2 indicates two-digit decimal precision). This parameter can be set at the workspace level and affects all queries running at that workspace with no change to the query or to the underlying data source. The query processor ADD(), SUBTRACT(), and SUM() functions, which currently use double arithmetic for both floating and decimal types, will use this logic. When the value is set to the default, 0, the exact arithmetic function is not used. This property is used to set the Exact Arithmetic function. The qpArithmeticFixedPrecision property is an integer value that determines the fixed precision the AIS query processor uses for precise floating point arithmetic. It is used to create an accurate result when using the SUM function. Because floating point datatypes are not accurate, their results over time do not correspond to the expected arithmetic sum. In other words, in the floating point representation, values such as 0.7 cannot be represented precisely. If there are eight precision digits, there is usually imprecision in the least significant digit, so the number is actually approximately 0.699999995. The qpArithmeticFixedPrecision property corrects this imprecision by using an exact floating point.

Parser depth: Enter the maximum depth of the expression tree. The default is 500.
Token size: Enter the maximum length of a string in an SQL query. The minimum value is 64. The default is 350.
Insert from select commit rate: Enter the commit rate to use when executing an INSERT-FROM-SELECT operation. If a value greater than 0 is entered, a commit is performed automatically after inserting the indicated number of rows. For example, if the value is 5, a commit is performed every time 5 rows are inserted.
Disable SQS cache: Select this to always read compiled AIS procedures and views from disk. In this case, they are not saved in the cache.
Procedures cache size: Enter the number of AIS stored queries created with a CREATE PROCEDURE statement that can be kept in cache memory. This property's value is ignored if Disable SQS cache is selected.
Expose XML fields: Select this to display data returned for a query as XML, representing the true structure of the result. This is useful when querying a data source table that contains arrays or variants. For additional information, see the SELECT XML Statement.
XML field name: Enter the name used in a query to indicate that the data is returned as XML, instead of the keyword XML. This is available only if Expose XML fields is selected.
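The query processor properties are stored on the <queryProcessor> element of the <environment> statement (shown empty in the Sample Binding). The attribute name in the sketch below is an assumption derived from the qpArithmeticFixedPrecision property described above; verify the exact attribute name in the binding XML generated by Attunity Studio (for example, by opening the binding with Open as XML) before relying on it:

<environment name="NAV">
   <!-- assumed attribute name; confirm against the generated binding XML -->
   <queryProcessor arithmeticFixedPrecision="2"/>
</environment>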

Temp Features
The temp features section lets you add temporary properties to the binding. These properties may be defined in the AIS documentation or release notes, or you can define any additional property. The temporary property is added to the binding.bnd file, which defines the binding environment.
To set a temporary feature:
1. Enter an ID for the feature. This is the name given to the Environmental property that configures the feature.
2. Enter the Value to use for the feature.


Transaction
The following list shows the transaction properties. The transaction properties control how transactions are handled in the binding.

Commit on destroy: Select this to commit all single-phase commit transactions opened for a data source, if a connection closes while the transaction is still open.
Disable 2PC: Select this to disable two-phase commit capabilities, even in drivers that support two-phase commit.
User commit confirm table: Select this to use the commit-confirm table for data sources that support single-phase commit.
Recovery delay: Enter the number of minutes from the start of a transaction before any recovery operation on that transaction can be attempted. The default is 15.
Time limit: Enter the time to wait for a transaction to complete before an error is returned. This parameter is also used when executing a RECOVERY, and it then indicates the number of minutes from the last transaction activity to wait before a forced activity can be executed.

Conversions: You can select one of the following:
No conversion: Select this if you want all transactions to remain as sent. This is selected by default.
Convert all to distributed: Select this to convert all simple transactions into distributed transactions.
Convert all to simple: Select this to convert all distributed transactions into simple transactions.
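As a rough illustration, these settings are stored on the transactions element that appears (empty) in the sample binding at the end of this chapter. The attribute names in the sketch below are assumptions made for illustration only and are not confirmed by the documentation.

<environment name="NAV">
  <transactions commitOnDestroy="false"
                disable2PC="false"
                recoveryDelay="15" />
</environment>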

Tuning
The following list shows the tuning properties. The tuning properties are set to increase system efficiency in the binding.

Dsm maximum buffer size: Enter the maximum size of the cache memory. This cache is used when memory is required on a temporary basis (as when AIS sorts data for a query output, for a subquery, or for aggregate queries). This cache size is not used for hash joins and lookup joins (see also Hash buffer size). The default is 1000000.
Dsm maximum hash file size:
Dsm maximum sort buffer size: Enter the maximum size of the sort buffers. Use this parameter instead of Dsm maximum buffer size for sorts only. The default is 1000000.
Dsm middle buffer size: Enter the maximum number of bytes for the index cache. This cache is not used for hash joins and lookup joins. The default is 1000000.
File pool size: Enter the maximum number of files that can be opened in the file pool. The default is 10.
File pool size per file: Enter the size of the file in the pool. The default is 3.
File close on transaction: Select this if you want the file pool to close when a transaction is committed.
Use global file pool: Select this to use a global file pool. When the workspace server mode parameter is set to multiClient or reusable, this parameter also indicates whether the file pool closes upon the client disconnection. See Server Mode.

Hash buffer size: Enter the number of bytes of cache memory that is available for each hash join or lookup join. The default is 1000000.
Hash max open files: Enter the maximum number of files that a query can open at one time for use when performing hash joins. The number assigned to this parameter must not exceed the system maximum. The default is 90. Note: The hash join optimization strategy results in a number of files being opened to perform the join. The larger the table size, the more files are opened. By adjusting this parameter you can disable hash joins on very large tables, while allowing hash joins for small tables. (See Disable hash join for information on disabling hash optimization for all table joins.)

Hash primary extent size: Enter the primary extent size (MVS and Tandem only).
Hash secondary extent size: Enter the secondary extent size (MVS and Tandem only).
Hash max extents: Enter the maximum extent size (Tandem only).
Hash enable RO: Select this for the QP to store the first hash bucket in memory instead of a sequential file.
Hash refresh EoF: Select this to refresh EoF requests (Tandem only).
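The tuning properties appear in the binding XML under the tuning element. The attribute names in the following sketch (dsmMaxBufferSize, dsmMidBufferSize, hashBufferSize, hashMaxOpenFiles, hashMaxDiskSpace) are taken from the sample binding at the end of this chapter; the values follow the defaults listed above, with hashMaxDiskSpace copied as-is from that sample.

<environment name="NAV">
  <tuning dsmMaxBufferSize="1000000"
          dsmMidBufferSize="1000000"
          hashBufferSize="1000000"
          hashMaxOpenFiles="90"
          hashMaxDiskSpace="-1" />
</environment>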

XML
The following list shows the XML properties. The XML properties control how XML files are handled in the binding.

COM maximum XML in memory: The maximum size, in bytes, for an XML document held in memory. The default is 524288.
COM maximum XML size: The maximum size of an XML document passed to another machine. The default is 524288. Note: When you increase the value of this property, you may need to increase the value of the maxXmlSize property in the daemon. For more information on daemons, see Setting up Daemons.

COM XML transport buffer size: Enter the maximum size of the internal communications buffer. The default value (-1) indicates there is no size limit.
XML date format: Enter the date format to use for XML. The options are:
ISO (the default): The date format is YY-MM-DDThh:mm:ss[.ss..]
ODBC: The date format is YYYY-MM-DD HH:MM:SS[.NNN...]

Replace invalid XML characters: Select this to replace invalid XML characters with a ?. It is used for diagnostic and troubleshooting purposes.
XML trim character columns: Select this to enable padded spaces to be trimmed from XML string columns when the record format is fixed. By default this is selected, and padded spaces are trimmed for fixed-size character columns. If you do not want this behavior, clear this check box.
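A schematic sketch of how these XML-handling properties could be expressed in the binding environment is shown below. The element and attribute names are illustrative assumptions only (they are not confirmed by the sample binding in this chapter); the values shown are the documented defaults.

<environment name="NAV">
  <xml comMaxXmlInMemory="524288"
       comMaxXmlSize="524288"
       comXmlTransportBufferSize="-1"
       xmlDateFormat="ISO" />
</environment>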


Table 3-2 Environment Properties - Category: debug (Parameter Name / Default Value / Description)

adminTrace
basedDate: This property is used to customize the based_date data type, which has the form basedDate="yyyymmdd[/dttype[/dtlen[/multiplier]]]", where:
yyyymmdd: Start date
dttype: The name of the data type (int4 is the default)
dtlen: The number of digits in the data type (if not atomic)
multiplier: The number of increments per day
Example: 19700101/int4//24 indicates the number of hours since January 1, 1970.
basedDateNullability (default: true): If true, when the based_date value is 0 it is considered Null. If false, the type is not nullable.
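For example, a based_date customization could be recorded in the debug category of the binding environment. The sketch below reuses the documented example value (hours since January 1, 1970); placing basedDate and basedDateNullability as attributes of the debug element is an assumption based on the debug element shown in the sample binding at the end of this chapter.

<environment name="NAV">
  <debug basedDate="19700101/int4//24" basedDateNullability="true" />
</environment>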

Languages
This section provides additional, detailed information regarding each language supported. The languages supported are:

ARA (Arabic)
ENG (English)
FR (French)
GER (German)
GREEK (Greek)
HEB (Hebrew)
JPN (Japanese)
KOR (Korean)
SCHI (Simple Chinese)
SPA (Spanish)
TCHI (Traditional Chinese)
TUR (Turkish)

ARA (Arabic)
If the codepage parameter is blank, the default codepage on all supported platforms is AR8ISO8859P6, with the exception of HP NonStop platforms, where the default codepage is ARCII. The Windows codepage is 1256.


ENG (English)
The default. If the codepage parameter is blank, the default codepage on all supported platforms is ASCII, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is EBCDIC. The Windows codepage is 1252.

FR (French)
If the codepage parameter is blank, the default codepage on all supported platforms is WE8ISO8859P1, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is F8EBCDIC297. The Windows codepage is 1252.

GER (German)
If the codepage parameter is blank, the default codepage on all supported platforms is WE8ISO8859P1, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is F8EBCDIC297. The Windows codepage is 1252.

GREEK (Greek)
If the codepage parameter is blank, the default codepage on all supported platforms is WEISO8859P7, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is F8EBCDIC875. The Windows codepage is 1253.

HEB (Hebrew)
If the codepage parameter is blank, the default codepage on all supported platforms is IW8ISO8859P8, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is IW8EBCDIC424. The Windows codepage is 1255.

JPN (Japanese)
If the codepage parameter is blank, the following are the default codepages on the supported platforms:
Platform / Default Japanese Codepage:
HP NonStop: JA16SJIS
IBM z/OS: JA16DBCS
IBM OS/400: JA16DBCS
OpenVMS: JA16VMS
UNIX, excluding Sun Solaris: JA16SJIS
UNIX Sun Solaris: JA16EUC
Windows: JA16SJIS
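The language and codepage are set in the misc section of the binding environment (the language and codepage attributes appear in the sample binding at the end of this chapter). For example, a Windows machine working with Japanese data might use a configuration similar to the following sketch; the codepage shown is the documented Windows default, and the value is only needed if you do not want to rely on the blank (default) setting.

<environment name="NAV">
  <misc tempDir="" language="JPN" codepage="JA16SJIS" nlsString="" />
</environment>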

KOR (Korean)
If the codepage parameter is blank, the default codepage on all supported platforms is KO16OSC5601, with the exception of Windows platforms, where the default codepage is KO16MS949 (949), and IBM OS/400 and z/OS platforms, where the default codepage is KO16DBCS.

SCHI (Simple Chinese)


If the codepage parameter is blank, the default codepage on all supported platforms is ZHS16CGBK231280, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is ZHS16DBCS. For EBCDIC, the codepage is IBM 1388.

SPA (Spanish)
If the codepage parameter is blank, the default codepage on all supported platforms is WE8ISO8859P1 (or the alias ASCII), with the exception of IBM OS/400 and z/OS platforms, where the default codepage is WE8EBCDICLATIN. The Windows codepage is 1252.

TCHI (Traditional Chinese)


If the codepage parameter is blank, the default codepage on all supported platforms is ZHT16BIG5, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is ZHT16DBCS.

TUR (Turkish)
If the codepage parameter is blank, the default codepage on all supported platforms is WE8ISO8859P9 (or the alias ASCII), with the exception of IBM OS/400 and z/OS platforms, where the default codepage is WE8EBCDIC1026. The Windows codepage is 1254.

Sample Environment Properties


The following sample shows how different environment properties are represented in XML for the NAV binding configuration:
<environment name="NAV">
  <comm comCacheBufferSize="200000" />
  <debug logFile="" traceDir="" />
  <misc tempDir="" language="" codepage="" nlsString="" />
  <odbc maxActiveConnections="0" />
  <oledb maxHRows="100" />
  <optimizer preferredSite="server" />
  <queryProcessor proceduresCacheSize="3" firstTreeExtensions="150"
                  maxColumnsInParsing="500" />
  <transactions/>
  <tuning dsmMaxBufferSize="1000000" dsmMidBufferSize="100000"
          hashBufferSize="1000000" hashMaxDiskSpace="-1"
          hashMaxOpenFiles="90" />
</environment>

Note:

The XML representation of the environment properties is displayed in Attunity Studio in the XML editor. To view the XML, right-click the binding you are working with and select Open as XML.



4
Setting up Daemons
This section contains the following topics:

Daemons
Defining Daemons at Design Time
Reloading Daemon Configurations at Runtime
Checking the Daemon Status
Starting and Stopping Daemons
Adding and Editing Workspaces

Daemons
Daemons manage communication between machines running AIS. The daemon is responsible for allocating Attunity server processes to clients. A daemon runs on every machine running AIS. The daemon authenticates clients, authorizes requests for a server process within a certain server workspace, and provides the clients with the required servers. When a client requests a connection, the daemon allocates a server process to handle this connection, and refers the client to the allocated process.
Note:

The configuration supplied with the product installation includes the default IRPCD daemon. This configuration is used when no other daemon is configured to access the machine that is requested by a client machine.

For more information, see AIS Runtime Tasks from the Command Line.

Defining Daemons at Design Time


This section describes how to configure and edit a daemon in the Attunity Studio Design perspective. You can also configure Daemons and workspaces in the Runtime Manager perspective. For more information on working with daemons and workspaces in the Runtime Manager perspective see Runtime Management with Attunity Studio.


Adding a Daemon
The following describes how to add a new daemon to your system using Attunity Studio. When you add a new daemon in the Design perspective, you use standard configuration information or copy the configuration information from another daemon. When you edit the daemon, you can make custom changes to its configuration.
To add a new daemon:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine where you want to add the daemon.
3. Right-click the Daemons folder and select New Daemon. The New Daemon dialog box opens.

Figure 41 New Daemon

4. Enter a name for the new daemon.
5. Select one of the following:
Create empty daemon with default values
Copy properties from another daemon
If you choose to copy the properties of an existing daemon, click Browse and select the daemon whose properties you want to copy.
6. Click Finish. The Daemon editor opens on the right of the workbench. This editor contains three tabs, which are described in the Editing a Daemon section below.


Notes:

A machine can have more than one daemon running at the same time, each on its own port.
You can add a new daemon configuration in offline design mode, in a design machine, and later drag-and-drop the daemon configuration to this machine. For more information, see Using an Offline Design Machine to Create Attunity Definitions.
The daemon editor may contain four additional tabs for workspace information. To display both the daemon and workspace configuration, right-click a workspace under the daemon and select Edit Workspace. For a description of these tabs, see Adding and Editing Workspaces.

Editing a Daemon
You can edit the information in the following Daemon editor tabs:

Control: In this tab you enter general details about the server timeout parameters and monitoring rules.
Logging: In this tab you enter the logging details, such as the log file format and location, and the parameters to log and trace.
Security: In this tab you enter the daemon's administrative privileges and access privileges.

To open the daemon editor:
1. In the Design perspective Configuration view, expand the Machines folder and then expand the machine with the daemon you want to edit.
2. Expand the Daemons folder.
3. Right-click the daemon you want to edit and select Open. The Daemon editor opens on the right of the workbench. Click each tab to edit the information. The tab fields are described below.
Note:

Changes made to the daemon configuration are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager perspective. See Runtime Explorer Tasks.

Control
The Control tab for the daemon lets you define general daemon control properties. The following figure shows the Daemon control tab:


Figure 42 Daemon Control Tab

The following table describes the fields in this tab.


Table 4-1 Daemon Control Tab

General

Daemon IP address: Enter the IP address of the machine(s) where the daemon is listening. If no IP address is entered, the daemon will listen on all available IP addresses.
Daemon port: Enter the port where the daemon is listening. If no port is entered, the daemon listens on all available ports.
Port range for servers: This determines the range of ports available for this daemon when starting server processes. Enter the port range in the following fields:
From: Enter the highest numbered port in the range
To: Enter the lowest numbered port in the range
Automatically recover from failure: The daemon restarts automatically if it fails for any reason (any error that causes the daemon process to terminate, such as network process lost or the CPU running the daemon crashes and the backup daemon is defined on another CPU). All available and unconnected servers are terminated, and any connected servers are marked and terminated on release. Also, the backup starts a backup for itself. The backup appends a new log file to the log of the original daemon, adding a line indicating that a backup daemon was started.
Default language: The language that the daemon supports. This setting is used when working with a client with a codepage different from the server codepage. See Basic NLS Settings.


Maximum XML request size: The maximum number of bytes that the daemon handles for an XML document.
Maximum XML in memory: The maximum amount of space reserved for the XML in memory.

Timeout Parameters

Call timeout: The timeout period for short calls for all daemons. A short call is a call that should be completed in a few seconds. For example, most calls to a database, such as DESCRIBE, should be completed in a few seconds, as opposed to a call like GETROWS, which can take a long time. In heavily loaded or otherwise slow systems, even short calls, such as calls to open a file, may take a significant amount of time. If a short call takes more than the specified time to complete, then the connection is stopped. The default value for this parameter is 60 seconds. Values of less than 60 seconds are considered to be 60 seconds. Note: Specifying the timeout in a workspace overrides the value set in this field for that workspace.
Connect timeout: The time the client waits for a daemon server to start. If the daemon server does not start within this period, then the client is notified that the server did not respond. The value specified for this parameter serves as the default timeout for all the workspaces listed in the daemon configuration. The default value for this parameter is 60 seconds. Notes: Entering the timeout in a workspace overrides the value set in this field for that workspace. Even if the XML source does not list this parameter in the workspace section, the workspace gets it using the default value. If you want to prevent a workspace from using the default value, you must enter a value of zero for this parameter in the workspace section.
Client idle timeout: The maximum amount of time any daemon client may be idle before the connection with the server is closed. Note: Entering the timeout in a workspace overrides this setting for that workspace.

Logging
You can set up daemon logging for the following:

Daemon log files Workspace server process log files

The daemon log records daemon operations, such as RPC calls and error messages. You can select the type of information you want included in the log file.
Note:

This section does not apply to z/OS platforms, where logging information is written directly to the process.

The following figure shows the Logging tab:


Figure 43 The Daemon Logging Tab

In the Daemon Logging tab, define the daemon log file settings, the log file structure, and the location where the log is saved. You can also define the data that is logged and traced in the file.
Note:

Changes made to the daemon configuration are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager perspective. See Runtime Explorer Tasks.

The following table describes the fields in the Daemon Logging tab.
Table 4-2 Daemon Logging Tab

Logging options

Daemon log file location: Enter how the daemon produces its log data. The full path must be specified. You can use wildcards as part of this file to indicate specific information.


Server log filename format: Defines the name and location of the server log file. The field must specify the full path name. If no directory information is provided for the log file, then it will be located in the login directory of the account running an AIS workstation. You can enter the following wildcards in this field to generate the following information:
%A: workspace name
%D: date (yymmdd)
%I: instance number of the given workspace server
%L: server account's login directory
%P: server's process ID
%T: time (hhmmss)
%U: server's account name (username)

Daemon operations: Select this if you want to log the daemon operations.

Trace and debug options

Daemon RPC function calls: Select this if you want to log all daemon RPC function calls.
Log ACX: Select this if you want to log requests and processes.
Extended RPC trace: Generates a verbose message in the server log file for each low-level RPC function called. This is useful for troubleshooting the server.
System trace: Generates system-specific tracing of various operations.
Timing: Generates a timestamp for every entry to the server log file.
Sockets: Generates a message in the server log file for each socket operation.
Trace information: Select this if you want to log low-level RPC operations.
No timeout: Disables the standard RPC timeouts, setting them to a long duration (approximately an hour) to facilitate debugging.
Call trace: Generates a message in the server log file for each RPC function called. This is useful for troubleshooting the server.
RPC trace: Enables debugging messages on the server.
Binary XML log level: Sets the binary XML log level. The options are: debug, none (the default), api, info.
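For example, a hypothetical server log filename format of %L/%A_%D_%T.log would, for a server of the Navigator workspace whose account login directory is /users/nav, produce a log file name such as /users/nav/Navigator_080315_142501.log. The directory, date, and time shown here are illustrative only; the wildcards expand as described above.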


Notes:

AIS supports a subset of UNIX commands that enables file manipulation using standard UNIX commands for the AS/400. Log files are stored under the UNIX file system section of the AS/400 machine. The following types of files are stored under the native OS/400 file system:
Executables
Service programs
When manipulating configuration files or log files, wrap the path/filename in single quotes ('') to ensure that the slash (/) used in the UNIX file system syntax is handled correctly. (The slash is an OS/400 special character.)

Security
The Security tab for daemons is used to:

Grant administration rights for the daemon. Determine access to the computer.

The following figure shows the Daemon Security tab:


Figure 44 The Daemon Security Tab


Note:

Changes made to the daemon configuration are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager perspective. See Runtime Explorer Tasks.

The following table describes the fields in the Daemon Security tab:
Table 4-3 Daemon Security Tab

Administrators privileges: Identifies the users (accounts) allowed to perform administrative tasks (tasks that require administrative login).
All users: Enables all users to access the daemon and change the settings.
Selected users only: Identifies the names of users (accounts) and groups that can be administrators (see Note 1). When this is selected, add the names of the users (accounts) and groups that can be administrators. See Administering Selected User Only Lists for information on adding users and groups to the field. If no user is in the list, any user who has logged on to the daemon can administer it. If a user is not specified, the account from which the daemon was started is the administrator account. The daemon does not require the user to log in to the account on the system, but to log in to the daemon using the account name and password.
Machine access: Manages access to the computer.
Allow anonymous login: Indicates whether workspaces allow anonymous logins (without user name/password entries). For the optimal level of security, do not select this option and define a username for the Daemon Administrators parameter. If unchecked, then no workspace can have an anonymous client. If checked, then a particular workspace can allow anonymous clients.
Cached passwords for performance: Enables login passwords to be cached. This enhances performance by reducing login times for future connections from the same client in a session.
Encryption methods: Indicates the encryption method used to send information across the network. The default is an asterisk (*), meaning that all methods are acceptable. If an encryption method is specified, it must be used. Currently, AIS supports the RC4 and DES3 protocols.
Domain name for authentication: Enter the authentication domain name.

Note 1: The name is prefixed with @, to utilize the operating system GROUP feature.
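In the daemon configuration XML, these settings correspond to the daemon's security element. The sketch below uses only the anonymousClientAllowed and administrator attributes that appear in the sample daemon configuration later in this chapter; the values are illustrative.

<daemon name="IRPCD">
  <security anonymousClientAllowed="false" administrator="sysadmin" />
</daemon>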

Administering Selected User Only Lists


You can give specific users or groups rights to access or administer workspaces. Add the users in the Daemon Security tab or the WS Security tab. In the WS Security tab, you can add users in both the Workspace access and Administration sections.
To add users or groups:
1. In the Daemon Security tab, or in the WS Security tab Administration or Workspace access section, select Selected users only.
2. Click Add user and enter the name of a valid user in the Add user dialog box. Make sure that the name entered matches a valid user account. To add groups to the list, click Add group and enter the name of a valid group in the Add group dialog box. Make sure that the name entered matches a valid group account.
3. Click OK. The name of the user or group is added to the field.
To rename a user or group:
1. Select the user or group you want to rename and click Rename.
2. Change the name entered in the Rename user or Rename group dialog box to the name you want to use.
3. Click OK. The changes are entered in the field.
To remove a user or group:
1. Select the user or group that you want to remove.
2. Click Remove. The user or group is removed from the field.

Reloading Daemon Configurations at Runtime


For changes to daemon and workspace definitions to take effect, you must reload the configuration and stop any existing servers that use the old configuration. You can reload the daemon configurations from the Runtime Explorer view. For more information, see Runtime Explorer Tasks.

Editing Daemon Configurations


You can also edit the daemon configurations at runtime. This is done in the Attunity Studio Runtime Manager perspective. For information on how to select a different perspective, see Selecting a Perspective.
To edit the daemon configuration:
In the Runtime Manager perspective Configuration view, right-click a daemon and select Edit Daemon Configuration. The Daemon editor opens on the right of the workbench. Make changes for each of the tabs as you would if you were editing the daemon from the Design perspective. See Editing a Daemon above.

Checking the Daemon Status


You can check the status of a daemon at any time and receive information on the number of logins, active daemon clients, and other details. The information is presented in a dialog box in Attunity Studio or in an on-screen display produced with the NAV_UTIL utility (see Using NAV_UTIL Utility).

Checking the Daemon Status with Attunity Studio


You can check the daemon status in Attunity Studio's Runtime Manager perspective.
To check a daemon's status:
1. From the perspective toolbar at the top right of the workbench, select Runtime Manager.


2. Right-click the daemon and select Status. A dialog box with the daemon status is displayed. The daemon status contains the following information:
Daemon platform
IRPCD process ID
IRPCD log file
IRPCD configuration
Logging detail
Number of logins
Number of active daemon clients
Number of active client sessions
Max. number of concurrent client sessions

Starting and Stopping Daemons


Daemons start automatically when the system starts. Make sure that the system is configured correctly so that the daemon starts up as expected. You can also start and stop a daemon manually. For more information, see Starting and Stopping Daemons.

Starting a Daemon in Attunity Studio


You cannot start a daemon manually in Attunity Studio. To start a daemon manually, you can use the NAV_UTIL. For more information, see Starting and Stopping Daemons.

Shutting Down a Daemon in Attunity Studio


You can shut down the daemon on any machine defined in Attunity Studio, from the Studio Runtime Manager perspective.
To shut down the daemon using Attunity Studio:
1. From the perspective toolbar at the top right of the workbench, select Runtime Manager.
2. In the Runtime Explorer view, expand the Daemon folder.
3. Right-click the daemon you want to shut down and select Shutdown Daemon.

Sample Daemon Configuration


The following example shows a daemon configuration for a workspace managing orders (ACMEOrdersServer) and a workspace used for reporting (ACMEReportingServer):
<daemons>
  <daemon name="IRPCD">
    <workspaces>
      <workspace name="ACMEOrdersServer"
                 description="ACME orders server"
                 workspaceAccount="orders"
                 startupScript="machine_dependent"
                 serverMode="reusable"
                 serverLogFile="%l.xxx%i"
                 reuseLimit="20"
                 nAvailableServers="10"
                 minNAvailableServers="4"
                 anonymousClientAllowed="false"
                 administrator="*" />
      <workspace name="ACMEReportingServer"
                 workspaceAccount="report"
                 startupScript="machine_dependent"
                 serverMode="singleClient"
                 serverLogFile="%l.xxx%i"
                 nAvailableServers="3"
                 minNAvailableServers="1"
                 anonymousClientAllowed="false"
                 administrator="*" />
    </workspaces>
    <control serverLogFile="/users/nav/%A%U%P.log" />
    <security anonymousClientAllowed="false" administrator="sysadmin" />
    <logging logFile="irpcd.log" logClientDomain="0" detail="errors" />
  </daemon>
</daemons>

Note: The startupScript values in this example are machine dependent. For example, for z/OS the startup script might be startupScript="ATTSRVR.AB", for OpenVMS startupScript="dka0:[user.orders]NAV_SERVER.COM", and for Windows startupScript="nav_util svc".

Note:

The daemon configuration is displayed in Attunity Studio by editing the daemon, in the Source tab.

Adding and Editing Workspaces


A Daemon must have one or more workspaces. Workspaces define the server processes and environment used for the communication between the client and server throughout a client request. A workspace definition is set in the Attunity Studio Design perspective Configuration view.

Adding a Workspace
When you define a new workspace, you can copy the values of an existing workspace on the same daemon or have AIS set its default values.
To add a new workspace:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine with the daemon where you want to add the workspace.
3. Expand the Daemons folder.
4. Right-click the daemon where you want to add the workspace and select New Workspace.



Figure 45 New Daemon Workspace

Note:

You can add a new daemon configuration in offline design mode, in a design machine and later drag-and-drop the daemon configuration to this machine. For more information, see Using an Offline Design Machine to Create Attunity Definitions.

5. In the New Daemon Workspace window, enter the following:
Name: The name used to identify the workspace. The workspace name is made up of letters, digits, underscores (_), or hyphens (-).
Note: On machines running HP NonStop or z/OS, limit the name of a workspace to five characters so that the system environment file, workspaceDEF, does not exceed eight characters. Workspace names longer than five characters are truncated to five characters, and the default workspace, Navigator, will look for a system environment called NavigDEF.
Description: A short description of the workspace.

6. From the Workspace data section, select one of the following:
Create empty workspace with default values
Copy properties from another workspace
If you copy the properties from another workspace, the fields below the selection become active. You must indicate the workspace from which you want to copy the properties, in the form <name of the workspace> in <name of the daemon where the workspace is located> on <name of the machine where the daemon is located>. Alternatively, you can click the browse button and browse to select the workspace you want to use; the above information is then added automatically.
7. Click Next to open the Select Scenario window. Select the type of applications the daemon works with from the following options:
Application server using connection pooling
Stand-alone applications that connect and disconnect frequently
Applications that require long connections, such as reporting programs and bulk extractors
Custom (configure manually). If you select this option, the Workspace editor opens.

8. Click Next to open the next window. Select one of the following. The options available depend on the scenario selected:
The minimum number of server instances available at any time: This is the minimum number of connections that are available at any time. If the number of available connections drops below this number, the system will create new connections. (Available if you select Stand-alone applications that connect and disconnect frequently.)
The maximum number of server instances available at any time: This is the maximum number of connections that are available at any time. If the number of connections used reaches this number, no additional server connections can be made. (Available if you select Stand-alone applications that connect and disconnect frequently.)
The average number of expected concurrent connections: This lets the system know how much the average load will be and helps to distribute the resources correctly. (Available if you select Application server using connection pooling or Stand-alone applications that connect and disconnect frequently.)
The maximum number of connections: This is the most connections that will be available. If the number of requests exceeds this number, an error message is displayed that informs the user to try again when a connection becomes available. (Available if you select Application server using connection pooling or Stand-alone applications that connect and disconnect frequently.)
How many connections you want to run concurrently: This sets the number of connections that will run at the same time. (Available if you select Applications that require long connections, such as reporting programs and bulk extractors.)

9. Click Next and enter the amount of wait time for the following parameters. If your system is not too overloaded, you can leave the default times.
How long to wait for a new connection: Enter the amount of time (in seconds) to wait for a connection to be established before the system times out. For example, if you want a wait time of one minute, enter 60 (the default). If you enter 0, the time is unlimited.
How long to wait for a response that is usually fast: Enter the time (in seconds) to wait for a response from the system before the system times out. For example, if you want to wait for one minute, enter 60. The default is 0, which indicates unlimited wait time.


10. Click Next to open the next window and enter the workspace security information. You can determine which users or groups can access the workspace you are defining. For more information, see Managing Security.
11. Click Next to open the summary window. Review the summary to be sure that all the information entered is correct. If you need to make any changes, click Back to go back to previous steps and change the information.
12. Click Finish to close the wizard and add the new workspace to the Configuration view.

Editing a Workspace
After you add a workspace, you can make changes to the workspace's configuration. You can edit the information in the following Workspace editor tabs:

General: Specifies general information including the server type, the command procedure used to start the workspace, the binding configuration associated with this workspace (which dictates the data sources and applications that can be accessed), the timeout parameters, and logging information.
Server Mode: Contains the workspace server information, including features that control the operation of the servers started up by the workspace and allocated to clients.
Security: Contains administration privileges, user access, ports available for access to the workspace, and workspace account specifications.

To edit a workspace:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine where you want to edit the workspace.
3. Expand the Daemons folder.
4. Expand the daemon with the workspace you want to edit.
5. Right-click the workspace you want to edit and select one of the following:
Workspace Setup Wizard: Opens the wizard that was used to add a new workspace (see Adding a Workspace). Make any required changes to the wizard settings to change the workspace definition.
Open: Opens the editor. The editor includes the information that was entered in the New Workspace wizard. Click the following tabs to edit the information: General, Server Mode, and Security.
Note:

The default daemon configuration supplied with AIS includes the default Navigator Workspace. This workspace is automatically used if no workspace is selected.


General
You enter general information about the Workspace operations in the General tab. This information includes the server type, the command procedure used to start the workspace and the binding configuration associated with this workspace. The following figure shows the General tab:
Figure 46 The General Tab

Notes:

You can also change daemon settings using the Configuration view, by selecting a computer and scrolling the list to the required daemon. Right-click the daemon and select Edit Daemon.
Changes made to the daemon configuration are not implemented immediately. They are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager.
For z/OS logging, the default is to write the log entries to the job only.

The table below shows the General tab fields:


Table 4-4 General Tab

Info

Workspace name: The name used to identify the workspace. Note: The default configuration includes the default Navigator workspace. This workspace is automatically used if a workspace is not specified as part of the connection settings.
Description: A description of the workspace.
Startup script: The full path name of the script that starts the workspace server processes. The script specified here must always activate the nav_login procedure and then run the server program (svc). If you do not specify the directory, the startup procedure is taken from the directory where the daemon resides. AIS includes a default startup script, which is recommended. Enter the script name only, because the server is activated as a started task.
Server type: The workspace server type: IMS, Events, Java, Native, or External.
Workspace binding name: The name of a specific binding configuration on the server machine that you want to use with this workspace. Notes: For HP NonStop, the name of the binding must be five characters or less. For z/OS, the name of the binding must be five characters or less and the name must be surrounded by single quotes. If the high-level qualifier is not specified here, NAVROOT.DEF is assumed, where NAVROOT is the high-level qualifier specified when Attunity Server is installed.
Workspace database name: Enter the name of a virtual database that this workspace accesses, if applicable. A virtual database presents a limited view of the available data because only selected tables from one or more data sources are available, as if from a single data source. For more information, see Using a Virtual Database. If a value is entered in this field, only the virtual database can be accessed using this workspace. Note: Entering a value in this field restricts access from a JDBC Client Interface, ODBC Client Interface, or OLE DB (ADO) Client Interface during runtime.

Timeout parameters: The following properties define the time the client waits for the workspace server to start. If the workspace server does not start within this period, the client is notified that the server did not respond. Entering a timeout value for these properties overrides the default setting entered in the Control tab.
Client idle timeout: The maximum amount of time a workspace client can be idle before the connection with the server is closed.
Connect timeout: The time the client waits for a workspace server to start. If the workspace server does not start within this period, the client is notified that the server did not respond.

Call timeout: The timeout period for short calls for all workspaces. A short call is a call that should be completed in a few seconds. For example, most calls to a database, such as DESCRIBE, should be completed in a few seconds, as opposed to a call like GETROWS, which can take a long time. In heavily loaded or otherwise slow systems, even short calls, such as calls to open a file, may take a significant amount of time. If a short call takes more than the specified time to complete, then the connection is stopped. The default value for this parameter is 60 seconds. Values of less than 60 seconds are considered to be 60 seconds. Note: Specifying the timeout in a workspace overrides the value set in the Call timeout field for the daemon configuration.

Logging and Trace Options

Specific log file format: Defines the name and location of the server log file if you want the data written to a file instead of SYSOUT for the server process. The parameter must specify the name and the high-level qualifier. You can enter the following wildcards in this field to generate the following information:
%A: workspace name
%D: date (yymmdd)
%I: instance number of the given workspace server
%L: server account's login directory
%P: server's process ID
%T: time (hhmmss)
%U: server's account name (username)

Logging: Specifies the type of tracing. The following tracing options are available:
No timeout: Select this to disable the standard RPC timeouts, setting them to a long duration (approximately an hour) to facilitate debugging.
Call trace: Select this to generate a message in the server log file for each RPC function called. This is useful for troubleshooting the server.
RPC trace: Select this to enable debugging messages on the server.
Sockets: Select this to generate a message in the server log file for each socket operation. This is useful for troubleshooting client/server communication by providing a detailed trace of every client/server communication.
Extended RPC trace: Select this to generate a more detailed message in the server log file for each low-level RPC function called. This is useful for troubleshooting the server.
System trace: Select this to generate operating system-specific tracing.
Timing: Select this to generate a timestamp for every entry to the server log file.

Query governing restrictions

Max Number of Rows in a Table That Can Be Read: Select the maximum number of table rows that are read in a query. When the number of rows read from a table exceeds the number stated, the query returns an error.

Max Number of Rows Allowed in a Table Before Scan is Rejected: Select the maximum number of table rows that can be scanned. This parameter has different behavior for query optimization and execution:
For query optimization, the value set is compared to the table cardinality. If the cardinality is greater than the value, the scan strategy is ignored as a possible strategy (unless it is the only available strategy).
For query execution, a scan is limited to the value set. When the number of rows scanned exceeds the number entered, the query returns an error.

Server Mode
You enter the features that control the operation of the servers started up by the workspace and allocated to clients in the Server Mode tab. For example, you can configure the Workspace to use connection pooling and to start up a number of servers for future use, prior to any client request, instead of starting each server when a request is received from a client.
Figure 47 The Server Mode Tab


Notes:

You can also change Daemon settings using the Configuration view, by selecting a computer and scrolling the list to the required daemon. Right-click the daemon and select Edit Daemon. Changes made to the daemon configuration are not implemented immediately. They are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager.

The table below describes the fields in the Server Mode tab:
Table 4-5 Server Mode Tab

Workspace server mode: Specifies the type of new server processes that the daemon starts up. The daemon supports the following server modes:

singleClient: Each client receives a dedicated server process. The account in which a server process runs is determined either by the client login information or by the specific server workspace. This mode enables servers to run under a particular user account and isolates clients from each other, as each receives its own process. However, this server mode incurs a high overhead due to process startup times and can use a lot of server resources as it requires as many server processes as concurrent clients.

multiClient: Clients share a server process and are processed serially. This mode has low overhead because the server processes are already initialized. However, because clients share the same process, they can impact one another, especially if they issue lengthy queries. The number of clients that share a process is determined by the Clients per server limit field. Notes: This mode is not available on HP NonStop machines. Do not use this property when accessing a database that supports two-phase commit through XA.

multiThreaded (Windows only): Clients are allocated a dedicated thread in a shared server process. This mode has low overhead since the servers are already initialized. However, because clients share the same process, they may impact one another, especially if the underlying database is not multi-threaded. The number of multi-threaded clients that share a process is set in the Clients per server limit field (the maximum number of concurrent clients a server process for the current workspace accepts) in the Attunity Studio Design perspective configuration tab. This value is set in the daemon configuration settings maxNClientsPerServer parameter. Notes: Multiple multi-client and multi-threaded servers can be started at the same time for optimal performance. Do not use this property when accessing a database that supports two-phase commit through XA.



reusable: An extension of single-client mode. Once the client processing finishes, the server process does not die and can be used by another client, reducing startup times and application startup overhead. This mode does not have the high overhead of single-client mode because the servers are already initialized. However, this server mode can use a lot of server resources as it requires as many server processes as concurrent clients. Do not use this mode with a database that supports two-phase commit through XA. In this case, define a new workspace for the data source, so that all the other data sources you are accessing use reusable servers. The other modes can be set so that the server processes are reusable. The number of times a process can be reused is controlled by the Reuse limit field value in Attunity Studio (the maximum number of times a server process can be reused, or how many clients it can serve before it finishes). Reuse of servers enhances performance since it eliminates the need to repeat initializations. However, reuse runs a risk of using more memory over time. The default for the Reuse limit field value is 0, which means that there is no limit.

Port range: Select the range for specific firewall ports through which you access the workspace. Determines the range of ports available for this workspace when starting server processes. Use this option when you want to control the port number, so that Attunity Connect can be accessed through a firewall. Enter the port range in the following fields:
From: Enter the highest numbered port in the range
To: Enter the lowest numbered port in the range
Use default port range: Select this to use the port range that is defined in the daemon. This is defined in the Port range for servers field in the daemon Control tab.
Maximum number of server processes: Enter the maximum number of server processes that can run at the same time.
Limit server reuse: Select this if you want to limit the number of servers that can be reused. If this is selected, the Reuse limit parameter is available. If Limit server reuse is selected, in the field next to the check box, enter the maximum number of times a server can be reused. Select the maximum number of clients accepted in a server process. A one-client server can be reused after its (single) client has disconnected. Reuse of servers enhances startup performance because it avoids the need to repeat initialization. This parameter is not available if the Limit server reuse parameter is not selected. This parameter is not available if the server mode value is singleClient.
Limit concurrent clients per server: Select this to limit the number of clients that a server can accept for the current workspace process. If this is not selected, the number of clients is unlimited.


If Limit concurrent clients per server is selected, in the field next to the check box, enter the maximum number of clients that a server process for the current workspace accepts. The default for this field is None, indicating that the number of clients for each server is unlimited. This field is available if the server mode value is multiClient or multiThreaded.
Specify Server Priority: Set the priority for servers. For example, a workspace for applications with online transaction processing can be assigned a higher priority than a workspace that requires only query processing. The lower the number, the higher the priority. For example, workspaces with a priority of 1 are given a higher priority than workspaces with a priority of 2. Note: This is unavailable if Use default server priority is selected.
Use default server priority: Sets the priority to 0. There is no specific priority for this workspace. Clear this check box to set a priority in the Specify Server Priority parameter.
Keep when daemon ends: Select this if you want the servers for the workspace to remain active, even after the daemon has been shut down. If selected, it is the responsibility of the system operator or manager to ensure that the servers are eventually killed; this must be done at the system level. If this is cleared, all servers started by the daemon are killed when the daemon is shut down, even if they are active.

Server Provisioning

Number of prestarted servers in pool (Initial number of servers): The number of server processes that are prestarted for this workspace when the daemon starts up. When the number of available server processes drops lower than the value specified in the Minimum number field, the daemon again starts server processes until this number of available server processes is reached. The default for this field is 0.
Number of spare servers: The minimum number of server processes in the prestarted pool before the daemon resumes creating new server processes (to the value specified in the Initial number of servers field). If this field is set to a value higher than the Initial number of servers field, the daemon uses the value specified in the Initial number of servers field. The default for this field is 0.
Prestarted server pool limit: The maximum number of available server processes. Once this number is reached, no new nonactive server processes are created for the particular workspace. For example, if a number of server processes are released at the same time, so that there are more available server processes than specified by this field, the additional server processes higher than this value are terminated. The default for this field is zero, meaning that there is no maximum.

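For reference, these server mode settings map to attributes of the workspace element in the daemon configuration XML. The serverMode, reuseLimit, nAvailableServers, and minNAvailableServers attribute names below appear in the sample daemon configuration earlier in this chapter, and maxNClientsPerServer is named in the multiThreaded description above; the values shown are illustrative only.

<workspace name="ACMEOrdersServer"
           serverMode="multiClient"
           maxNClientsPerServer="10"
           reuseLimit="20"
           nAvailableServers="10"
           minNAvailableServers="4" />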
Security
Configure the security level for a workspace in the Workspace editor Security tab. This lets you set the security options for the workspace only. To set security on the daemon level, see Security. The Security tab is used:

To grant administration rights for the workspace
To determine access to the workspace by a client

The following figure shows the Security tab:


Figure 48 The Security Tab

The following table describes the fields in this tab:


Table 4-6 Security Tab

Server Account: This section defines the users (accounts) allowed to access the workspace, firewall access ports, workspace account, and anonymous login permissions.
Use specific workspace account: Select this if you want to define the operating system account used for the workspace. If selected, enter the name of the workspace account in the workspace account field. If not selected, the account name that was provided by the client is used.
Allow anonymous clients to use this workspace: Select this if you want to allow this workspace to be invoked without authentication. If selected, enter the name of the workspace account in the Server account to use with anonymous clients field.


Authorized Workspace users: Indicate which users have permission to use the workspace. Select one of the following:

All users: Any user who has logged on to the daemon may use the workspace.
Selected users only: Select this to allow only users (or accounts) with specific permission to use the workspace. When this is selected, add the names of users (or accounts) and groups that can use the workspace in the field below. See Administering Selected User Only Lists for information on adding users and groups to the field. Note: If no user is specified, any user who has logged on to the daemon may use the workspace.

Authorized Administrators: Identifies the users (accounts) with administrator privileges. Select one of the following:

All users: Indicates that anyone can access the workspace and change the settings.
Selected users only: Select this to allow only users (or accounts) with specific permission to be administrators. When this is selected, add the names of users (or accounts) and groups that can be workspace administrators. See Administering Selected User Only Lists for information on adding users and groups to the field. If no user is specified, any user who has logged on to the daemon may administer this workspace.

Note: You can also use the Allow Listing parameter. Select this if you want this workspace to appear in the list of workspaces. To set this parameter you must use the XML view. Right-click the daemon with the workspace you are working with and select Open as XML. Find this parameter in the XML editor to change it. For more information, see Editing XML Files in Attunity Studio.
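In the daemon configuration XML, workspace-level security settings appear as attributes of the workspace element. The sketch below uses the workspaceAccount, anonymousClientAllowed, and administrator attributes shown in the sample daemon configuration earlier in this chapter; the values are illustrative.

<workspace name="ACMEOrdersServer"
           workspaceAccount="orders"
           anonymousClientAllowed="false"
           administrator="*" />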

Selecting a Binding Configuration


After you add a workspace, you can change the binding that is associated with the workspace if you don't want to use the default NAV binding. If you want to use a binding configuration other than the default (NAV) configuration, select the required binding configuration from the Workspace editor General tab Workspace binding name list.
Notes:

You can have more than one workspace, each with different binding configurations. If you want to use a binding other than the default binding, on UNIX and OpenVMS platforms you can select the binding as part of the startup script for the workspace.


Disabling a Workspace
Workspaces can be disabled. When you disable a workspace, server processes are not started and a client requesting the disabled workspace receives an error.
To disable a workspace in Attunity Studio:
1. In the Design perspective Configuration view, right-click the workspace you want to disable.
2. Select Disable.

Setting Workspace Authorization


Once a machine is defined in Attunity Studio, you can authorize which users can view and edit rights for specific workspaces.
To assign authorization rights to a workspace:
1. Right-click the workspace in the Attunity Studio Design perspective Configuration view and select Set Authorization.
2. Enter the User name and Password for the user with authorization rights for this workspace. For more information about setting rights and security privileges, see Security.


5
Managing Metadata
This chapter includes the following sections:

Data Source Metadata Overview
Importing Metadata
Managing Metadata
Using Attunity Metadata with AIS Supported Data Sources
Procedure Metadata Overview
Importing Procedure Metadata Using the Import Wizard
Procedure Metadata Statements
ADD Supported Data Types
ADD Syntax

Data Source Metadata Overview


Metadata defines the structure of the data and where it is located. Attunity Connect relies on the native metadata of the data source when connecting to relational data sources (such as Informix, Oracle, and Sybase) and some file-system data sources (such as Adabas, using Predict). For other data sources, whose metadata is not readable by Attunity Connect or which do not have metadata, Attunity Connect requires its own metadata. Attunity metadata is stored in a proprietary data dictionary called the Attunity Data Dictionary (ADD). You need Attunity metadata for each record (referred to as a table) in the data source. The Attunity metadata definition for a record is viewable and updateable via the Design perspective Metadata tab of Attunity Studio or via an XML file. The Attunity metadata for each data source is stored in a repository for the data source, on the machine where the data resides.

The following require Attunity metadata:

CISAM/DISAM Data Source
DBMS Data Source (OpenVMS Only)
Enscribe Data Source (HP NonStop Only)
Flat File Data Source
IMS/DB Data Sources (z/OS Only)
RMS Data Source (OpenVMS Only)
Text Delimited File Data Source
VSAM Data Source (z/OS) (VSAM Under CICS and VSAM Drivers (z/OS Only))
OLEDB-FS (Flat File System) Data Source

Attunity metadata for an Adabas C Data Source can also be used if the Adabas Predict metadata is not available (or not efficient). Metadata can be imported, saved, and managed in the Attunity Studio Design perspective Metadata tab.
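To give a sense of what ADD metadata looks like, the following is a minimal sketch of a table definition; the table name, file location, and data source name are hypothetical placeholders, and the full syntax is described later in this chapter under ADD Syntax.

<table name="employee" organization="index" filename="d:\demo\employee" datasource="DISAMDEMO">
  <!-- hypothetical example: a DISAM file exposed as a table with two columns -->
  <fields>
    <field name="emp_id" datatype="int4" />
    <field name="emp_name" datatype="string" size="25" />
  </fields>
</table>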

Importing Metadata
You can use the Attunity Studio Import wizards or standalone import utilities to generate metadata. If an import wizard or standalone utility is not available, you can create the metadata manually.

Importing Metadata Using an Attunity Studio Import Wizard


The following data source drivers have wizards in Attunity Studio for importing metadata:

Enscribe Data Source (HP NonStop Only)
IMS/DB Data Sources
RMS Data Source (OpenVMS Only)
VSAM Batch CDC (z/OS Platforms) (VSAM Under CICS and VSAM Drivers (z/OS Only))

For other data source drivers, if a COBOL copybook is available, metadata can be generated from the COBOL in Attunity Studio. Otherwise, the metadata has to be manually defined in the Attunity Studio Design perspective Metadata tab. If COBOL copybooks describing the data source records are available, you can import the metadata by running the metadata import in the Attunity Studio Design perspective Metadata tab. If the metadata is provided in a number of COBOL copybooks with different filter settings (such as whether the first 6 columns are ignored or not), you import the metadata from the copybooks with the same settings first and later import the metadata from the other copybooks. You can save an import procedure and use it again.

Note: Attunity metadata is independent of its origin. Therefore, any changes made to the source metadata (for example, the COBOL copybook) are not made to the Attunity metadata.

Each data source has different import requirements. Some may have a different number of steps than others and require different information. Each wizard guides you through the import. The input files for each import (such as COBOL copybooks) are needed during the import to define the input and output structures used by the application adapter. These files are sent to the machine running Attunity Studio using the FTP protocol, as part of the import procedure. For additional information on importing metadata, refer to the specific data source.


Importing Metadata Using a Standalone Utility


The following third-party standalone utilities are available:

Adabas DDM import (DDM_ADL): This utility produces Attunity metadata from non-Predict metadata (nsd files).
BASIC mapfiles import (BAS_ADL): This utility produces Attunity metadata from BASIC mapfiles.
DBMS Import (DBMS_ADL): This utility produces Attunity metadata from a DBMS database.
HP NonStop Enscribe Import (ADDIMP): This utility produces Attunity metadata for HP NonStop Enscribe data from a DDL subvolume and/or COBOL copybooks. You can also generate the metadata from COBOL copybooks in Attunity Studio.
HP NonStop Enscribe Import (TALIMP): This utility produces Attunity metadata for HP NonStop Enscribe data sources from TAL datafiles and a DDL subvolume.
MS CDD Import (CDD_ADL): This utility extracts the information stored in an RMS CDD directory into ADD metadata. You can also generate the metadata from COBOL copybooks in Attunity Studio.

Managing Metadata
You manage metadata in Attunity Studio. Click the Metadata tab in the Design perspective to view and modify Attunity metadata for data sources.

To view and modify metadata for data sources
1. In the Design perspective Configuration view, right-click the data source for which you want to manage the metadata.
2. Select Edit metadata from the shortcut menu. The Metadata tab opens with the selected data source displayed in the tree.

Note: You can also open the Metadata tab and right-click the Data sources folder in the Metadata view to add the data source that you want to import metadata for to the tree.

3. Right-click the resource (such as the data source table) in the Metadata view and select Edit.

Data source tables are edited using the following tabs, which are at the bottom of the screen:

General Tab: Defines general information about the table, such as the table name, the way the table is organized, and the location of the table.
Columns Tab: Specifies the table columns and their properties, for example, the column data type, size, and scale.
Indexes Tab: Enables you to specify the indexes of a table. The indexes are described by the order of the rows they retrieve, the data source commands used, and the index type.
Statistics Tab: Enables you to specify statistics for the table, including the number of rows and blocks of the table.


Source Tab: The metadata in its XML representation.

Note: Attunity Connect provides a relational model for all data sources defined to it. Thus, relational terminology is used, even when referring to non-relational data sources. For example, the metadata for an RMS record is referred to as the metadata for an RMS table.

Using Attunity Metadata with AIS Supported Data Sources


The native metadata of other data sources used by Attunity Connect (such as relational data sources like Oracle) can also be viewed in the Design perspective Metadata tab of Attunity Studio. However, this metadata cannot be edited. Some native metadata does not include information that Attunity Connect requires to fully optimize query execution (for example, the number of rows and blocks in an Rdb table). In this situation you can extend the native metadata by adding these extensions in Attunity metadata.

Extended Native Data Source Metadata


When native metadata lacks some features provided by Attunity metadata, performance can be improved by extending the native metadata with ADD metadata. For example, you can use the Attunity metadata to specify the number of rows and blocks in an RDB table. When accessing the data, Attunity Connect uses both the native metadata and this extended metadata. Extended metadata is managed in Attunity Studio.

To assign extended metadata for a data source
1. Display the metadata for the data source in the Design perspective Metadata tab of Attunity Studio.
2. Change the relevant values for the table to be extended in the Statistics tab. The table symbol in the tree is marked with an asterisk to show that the metadata has been extended.

Note: The information in the other tabs is for reference only and cannot be edited. The noExtendedMetadata property in the data source definition in the binding configuration is set to false.

You can sometimes improve performance by using Attunity metadata instead of the native metadata. In this case, you can export a snapshot of the native metadata to Attunity Connect and use this local copy of the native metadata when accessing the data source.

Native Metadata Caching


When access to a data source via its native metadata is slow but the metadata is static, performance can be improved by creating a local copy (snapshot) of the data source metadata and then running queries using this metadata instead of the data source metadata.


This is beneficial, for example, when the native metadata is not up to date or when information, such as statistics, is not available. You can see a snapshot of the metadata in Attunity Studio.

To make a copy of data source metadata
1. Display the metadata for the data source in the Design perspective Metadata tab of Attunity Studio.
2. Right-click the data source and select Manage Cached Metadata from the popup menu.
3. Select the tables that you want to use a local copy of and move them to the right pane.
4. Click Finish. The tables are displayed under the data source.

Note: The table symbol changes from the relational data source symbol to a data source symbol that requires Attunity metadata. The localCopy property in the data source definition in the binding configuration is set to true.

Only the tables that have cached metadata are displayed in the tree. To revert to using non-cached metadata, you can either right-click individual tables and choose Delete Cached Table from the popup menu or, for all the tables, right-click the data source and choose Set Metadata followed by Native Metadata from the popup menu.
Note: If the native metadata changes, using a snapshot of the native metadata is not recommended.
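For reference, a data source entry in the binding configuration that uses cached metadata might look roughly like the following sketch. The surrounding element names and the data source name and type are illustrative assumptions; only the localCopy property itself is taken from the description above.

<datasources>
  <!-- "ORADEMO" and the type are hypothetical placeholders -->
  <!-- localCopy="true" is the property set when a snapshot of the native metadata is created -->
  <datasource name="ORADEMO" type="ORACLE" localCopy="true" />
</datasources>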

Procedure Metadata Overview


Procedure Metadata defines the input and output structures that are passed to and returned from the procedure. The metadata can be generated using an import wizard in Attunity Studio for the CICS Procedure Data Source and the Procedure Data Source (Application Connector) based on COBOL copybooks. For the Natural/CICS Procedure Data Source (z/OS), the metadata has to be defined in Attunity Studio Design perspective Metadata view. Attunity metadata is stored in a proprietary data dictionary called the Attunity Data Dictionary (ADD). You need Attunity metadata for each function in the procedure. The Attunity metadata definition for a procedure is viewable and updateable via the Design perspective Metadata tab of Attunity Studio or via an XML file. The Attunity metadata for each procedure is stored in a repository for the procedure, on the machine where the procedure resides.

Importing Procedure Metadata Using the Import Wizard


Import wizards are available for the following:

CICS Procedure Data Source
Procedure Data Source (Application Connector)


For other procedures, the metadata is created manually, as described in Manually Creating Procedure Metadata. If COBOL copybooks describing the procedure input and output structures are available, you can import the metadata by running the metadata import in the Attunity Studio Design perspective Metadata tab. If the metadata is provided in a number of COBOL copybooks with different filter settings (such as whether the first 6 columns are ignored or not), you import the metadata from the copybooks with the same settings first and later import the metadata from the other copybooks. You can also save an import procedure for reuse.

Note: Imported metadata is independent of its origin. Changes made to the source metadata after the import are not made to the Attunity metadata.

The following information is required during the import:

The input files for the import (such as COBOL copybooks) that define the input and output structures used by the application adapter. These files are copied to the machine running Attunity Studio as part of the import procedure.
The names of the applications (such as the IMS/TM transaction or CICS program) to be executed via the procedure.

Procedure Metadata Statements


Metadata can be viewed and modified in the Attunity Studio Design perspective Metadata view. For information on how to manage metadata in Attunity Studio, see Working with Procedure Metadata. You build the metadata with various statements. This section describes the statements used for creating and managing procedure metadata. The procedure syntax is composed of the following statements:

The <procedure> Statement
The <parameters> Statement
The <dbCommand> Statement
The <fields> Statement

The <procedure> Statement


A procedure definition begins with a <procedure> statement. This statement consists of the following components:

An attribute list.
Input parameters for procedures.
A <fields> statement, which includes the fields list.

Syntax
<procedure name="proc_name" attribute="value" ...>
  <dbCommand>...</dbCommand>
  <fields>
    <field name="field_name" attribute="value" .../>
    ...
  </fields>
  <parameters>
    <field name="param" attribute="value" ... />
    ...
  </parameters>
</procedure>

Where proc_name is the procedure name, up to a maximum of 40 characters.


Notes:
The proc_name entry must conform to standard ANSI 92 SQL naming conventions.
You must include a <fields> statement.
Use a <parameters> statement to specify input parameters.

Example 5-1 <procedure> statement

<procedure name="math_simple" filename="prc_samples">
  <dbCommand>LANGUAGE=C</dbCommand>
  <fields>
    <field name="sum1" datatype="int4">
      <dbCommand>order=1</dbCommand>
    </field>
    <field name="subtract" datatype="int4">
      <dbCommand>order=2</dbCommand>
    </field>
    <field name="multiply" datatype="int4">
      <dbCommand>order=3</dbCommand>
    </field>
    <field name="divide" datatype="int4">
      <dbCommand>order=4</dbCommand>
    </field>
  </fields>
  <parameters>
    <field name="oper1" datatype="int4">
      <dbCommand>mechanism=value; order=5</dbCommand>
    </field>
    <field name="oper2" datatype="int4">
      <dbCommand>mechanism=value; order=5</dbCommand>
    </field>
  </parameters>
</procedure>

<procedure> Attributes
The attributes are listed in the following table:
Table 5-1 <procedure> Attributes

alias
Syntax: alias="name"
Description: Replaces the procedure name with a logical procedure name. Names greater than 39 characters are truncated from the left.

description
Syntax: description="optional_user_supplied_description"
Description: Specifies an optional textual description.

filename
Syntax: filename="full_filename"
Description: Specifies the full name and location of the file, where full_filename includes the full path to the file.

name
Syntax: name="name"
Description: Specifies the name of the procedure. This attribute must be specified.

The <parameters> Statement


The <parameters> statement specifies a list of input parameters for a procedure, and each parameter is defined by a <field> statement.

Syntax
<parameters>
  <field name="param" attribute="value" ...>
    <dbCommand>...</dbCommand>
  </field>
  <field name="param" attribute="value" .../>
  ...
</parameters>

Example 5-2 <parameters> statement

<parameters>
  <field name="oper1" datatype="int4">
    <dbCommand>mechanism=value; order=5</dbCommand>
  </field>
  <field name="oper2" datatype="int4">
    <dbCommand>mechanism=value; order=5</dbCommand>
  </field>
</parameters>

The <dbCommand> Statement


The <dbCommand> statement is used to specify procedure-specific commands for the metadata.

Syntax
<dbCommand>text</dbCommand>

Example 5-3 <dbCommand> statement

<dbCommand>mechanism=value; order=5</dbCommand>


The <fields> Statement


The <fields> statement is used to list the field descriptions of fields in a table or procedure. A field description can be one of the following:

The <field> Statement
The <group> Statement
The <variant> Statement

Syntax
<fields>
  <field name="field_name" attribute="value" ...>
    <dbCommand>...</dbCommand>
  </field>
  <group name="field_name" attribute="value" ...>
    <fields>
      <field name="field_name" attribute="value" ... />
    </fields>
  </group>
  <variant name="field_name" attribute="value" ...>
    <case name="field_name" attribute="value" ...>
      <fields>
        <field name="field_name" attribute="value" ... />
      </fields>
    </case>
    ...
  </variant>
  ...
</fields>

For details of the specific syntax requirements for a procedure, see the specific procedure.

The <field> Statement


The <field> statement defines the characteristics of a field that is not made up of other fields.

Syntax
<field name="field_name" attribute="value" ...>
  <dbCommand>...</dbCommand>
</field>

Example 5-4 <field> statement

The following code defines one field (N_NAME) and its two attributes (data type and size):
<field name="n_name" datatype="string" size="25" />


<field> Attributes
The attributes are listed in the following table:
Table 5-2 <field> Attributes

datatype
Syntax: datatype="datatype"
Description: Specifies the data type of a field. For the supported data types, see ADD Supported Data Types.

scale
Syntax: scale="n"
Description: Specifies the number of characters or digits. For example: <field name="SALARY" datatype="numstr_s" size="10" scale="2" />

size
Syntax: size="n"
Description: Specifies the size of the field, where n is the number of characters or digits. The number must be greater than 0.

name
Syntax: name="name"
Description: Specifies the name of the field. This attribute must be specified. For example: <field name="EMP_ID" datatype="int4" />

The <group> Statement


The <group> statement defines the characteristics of a field that is made up of other fields, such as an array in a record.

Syntax
<group name="field_name" attribute="value" ...>
  <dbCommand>...</dbCommand>
  <fields>
    <field name="field_name" attribute="value" ... />
  </fields>
</group>

A <group> statement is handled as an array. Each of the array elements contains all of the subordinate fields defined in the <group> statement. The size of the array is the size of a single array element multiplied by the dimension.
Example 5-5 <group> statement

<procedure name='math_all_structs' filename='prc_samples'>
  <dbCommand>LANGUAGE=C</dbCommand>
  <fields>
    <group name='MATH_STRUCT'>
      <dbCommand>ORDER=1</dbCommand>
      <fields>
        <field name='SUM1' datatype='int4'/>
        <field name='SUBTRACT' datatype='int4'/>
        <field name='MULTIPLY' datatype='int4'/>
        <field name='DIVIDE' datatype='int4'/>
      </fields>
    </group>
  </fields>
  <parameters>
    <group name='MATH_IN_STRUCT'>
      <dbCommand>ORDER=2</dbCommand>
      <fields>
        <field name='OPER1' datatype='int4'/>
        <field name='OPER2' datatype='int4'/>
      </fields>
    </group>
  </parameters>
</procedure>

<group> Attributes
The attribute is listed in the following table:
Table 5-3 <group> Attributes

name
Syntax: name="name"
Description: Specifies the name of the field. This attribute must be specified. For example:

<group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4" counterName="CHILD_COUNTER">
  <fields>...</fields>
</group>

The <variant> Statement


Variants are similar to redefine constructs in COBOL and to union in C. The basic concept is that the same physical area in the buffer is mapped several times. The mappings can be of the following:

Different nuances of the same data.
Different usage of the same physical area in the buffer.

This section describes the common use cases of variants and how they are represented in the variant syntax. There are two types of variants:

Variant without selector
Variant with selector

Variant without selector


Variants without selectors are used to define different cases of the variants and represent different ways of looking at the same data. The use of this type of variant is discouraged. It is recommended to pick the case that is most convenient to work with when consuming the data and remove the other cases.

Example 5-6 COBOL

20 PARTNUM PIC X(10).
20 PARTCD REDEFINES PARTNUM.
  30 DEPTCODE PIC X(2).
  30 SUPPLYCODE PIC X(3).
  30 PARTCODE PIC X(5).


In this example one case includes a PARTNUM field of 10 characters, while the other case, PARTCD, maps the same part number to a 2-character DEPTCODE, a 3-character SUPPLYCODE, and a 5-character PARTCODE. The two variant cases are just different ways of viewing the same item of data. In Attunity Studio, the Import Manipulation screen enables you to replace any variant with the fields of a single case. The metadata generated following a metadata import appears as follows:

<variant name="VAR_0">
  <case name="UNNAMED_CASE_1">
    <fields>
      <field name="PARTNUM" datatype="string" size="10"/>
    </fields>
  </case>
  <case name="PARTCD">
    <fields>
      <field name="DEPTCODE" datatype="string" size="2"/>
      <field name="SUPPLYCODE" datatype="string" size="3"/>
      <field name="PARTCODE" datatype="string" size="5"/>
    </fields>
  </case>
</variant>

Variant with selector


Different cases of the variant represent different ways in which to use the physical area in the buffer. For every record instance there is only one case that is valid; the others are irrelevant. Additional fields in the buffer help determine which variant case is valid for the current record.
Example 5-7 COBOL

10 ORDER.
  20 RECTYPE PIC X.
    88 ORD-HEADER VALUE 'H'.
    88 ORD-DETAILS VALUE 'D'.
  20 ORDER-HEADER.
    30 ORDER-DATE PIC 9(8).
    30 CUST-ID PIC 9(9).
  20 ORDER-DETAILS REDEFINES ORDER-HEADER.
    30 PART-NO PIC 9(9).
    30 QUANTITY PIC 9(9) COMP.

In this example each of the records is either an order header record or an order item record, depending on the value of the RECTYPE field. This construct can be mapped as a variant with a selector, where the RECTYPE field is the selector. During a metadata import from COBOL, all variants are assumed to be variants without selectors. The COBOL syntax does not distinguish between different types of variants or REDEFINEs. In COBOL, only the program logic includes this distinction.


This is true unless a selector is specified in the Import Manipulation screen. Refer to Attunity Studio Guide and Reference for additional information.

ADD Syntax
The following is the ADD syntax to use for setting variants:
<variant name="variant_name">
  <case name="case_name" value="val" ...>
    <fields>
      <field name="field_name" ... />
    </fields>
  </case>
  <case ...>
    ...
  </case>
</variant>

The metadata generated by Attunity Studio following a metadata import appears as follows:
<field name="RECTYPE" datatype="string" size="1"/>
<variant name="VAR_1" selector="RECTYPE">
  <case name="ORDER_HEADER" value="H">
    <fields>
      <field name="ORDER_DATE" datatype="numstr_u" size="8"/>
      <field name="CUST_ID" datatype="numstr_u" size="9"/>
    </fields>
  </case>
  <case name="ORDER_DETAILS" value="D">
    <fields>
      <field name="PART_NO" datatype="numstr_u" size="9"/>
      <field name="QUANTITY" datatype="uint4" size="4"/>
    </fields>
  </case>
</variant>

Usage Notes

From an SQL consumer, none of the <variant> or <case> fields are visible. Only the simple fields are accessible.
For a variant with a selector, all fields are reported as nullable regardless of their backend definition. For every record instance, only the relevant case will show values; the rest of the cases will contain NULLs.
When updating or inserting both types of variants, it is up to the user to ensure that only a single case is given values. Attempting to set fields from two or more cases will result in unpredictable behavior.

Resolving Variants in Attunity Studio


In Attunity Studio, variants are resolved in the data source metadata Import Manipulation screen.

To resolve variants using the Import Manipulation screen
1. In the Validation tab, double-click the variant to resolve. A screen opens, listing the variants in the COBOL copybook.

Note: You can expand the variants to expose the variant cases.

2. Right-click the required variant and select Structures, and then select Mark selector. The Select Selector screen opens.
3. Select the selector for the variant from the list of available selectors in the COBOL copybook, and then click OK.
4. Repeat for all the required variants, and then click OK.

The <case> Statement


The <case> statement specifies an alternative definition that maps to the same storage area. This statement can include the following:

Syntax
<case> Attributes

Syntax
<case name="field_name" attribute="value" ...>
  <fields>
    <field name="field_name" attribute="value" ... />
  </fields>
</case>

Example 5-8 <case> statement

<variant name='VAR_DIVIDE_DATATYPE' selector='DIVIDE_DATATYPE'>
  <dbCommand>ORDER=5</dbCommand>
  <case name='CASE_1_1' value='L'>
    <fields>
      <field name='DIVIDE_LONG' datatype='int4'/>
    </fields>
  </case>
  <case name='CASE_1_2' value='F'>
    <fields>
      <field name='DIVIDE_FLOAT' datatype='single'/>
    </fields>
  </case>
  <case name='CASE_1_3' value='D'>
    <fields>
      <field name='DIVIDE_DOUBLE' datatype='double'/>
    </fields>
  </case>
</variant>

<case> Attributes
The attributes are listed in the following table:

Table 5-4 <case> Attributes

name
Syntax: name="name"
Description: Specifies the name of the case. Note: When a selector attribute is not specified in the <variant> statement, a name attribute must be specified here.

value
Syntax: value="value"
Description: Specifies the value for a variant definition that is used in the current record (row) for the field specified in the <variant> statement via the selector attribute. Note: When a selector attribute is specified in the <variant> statement, a value attribute must be specified here.

ADD Supported Data Types


The following table lists data types supported by Attunity Connect ADD:
Note:

Platform and data source-dependent data types run only on their respective platforms.

Table 5-5 ADD Supported Data Types

Each entry lists the ADD type, followed by its OLE DB and ODBC type mappings and a description.

ada_d_time (OLE DB: DBTYPE_TIMESTAMP; ODBC: SQL_TIMESTAMP): ADABAS date format.
ada_decimal (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): z/OS ADABAS packed decimal.
ada_numstr_s (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): z/OS ADABAS numeric string.
ada_time (OLE DB: DBTYPE_TIMESTAMP; ODBC: SQL_TIMESTAMP): ADABAS timestamp format.
apt_date (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Date packed into a 4-character string. Format: DMYY. Example: 23-July-1998 is represented by four bytes, 19, 98, 7, and 23.
apt_time (OLE DB: DBTYPE_TIMESTAMP; ODBC: SQL_TIMESTAMP): ADD date-time format.
based_date (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): Customize this data type by defining an environment variable (UNIX) or logical (OpenVMS) having the form NVDT_BASEDDATE=yyyymmdd[/dttype[/dtlen[/multiplier]]], where yyyymmdd is the start date, dttype is the name of the data type (int4 is the default), dtlen is the number of digits in the data type (if not atomic), and multiplier is the number of increments per day. Example: 19700101/int4//24 specifies the number of hours since Jan 1 1970.
binary (OLE DB: DBTYPE_BYTES; ODBC: SQL_BINARY): Unknown data type, string type; length must be specified.
bit (OLE DB: DBTYPE_I1; ODBC: SQL_TINYINT): A single bit within a byte. Size: 1 byte. Format: datatype="bit" onBit="n", where n specifies which bit (within a byte) the field uses. If more than one bit is defined, the additional bits may be defined sequentially within the same byte (or bytes, if the number of bits requires this much space).
bits (OLE DB: DBTYPE_I4; ODBC: SQL_TINYINT): A signed number of bits within a byte. Size: 1 bit to 1 byte. Format: <field name="name" datatype="bits" onBit="n" size="m"/>, where n specifies which bit (within a byte) to start from and m is the number of bits. If n is not specified, it defaults to 1 for the first occurrence of the field and is contiguous thereafter. The maximum number of bits you can map is 32.
cstring (OLE DB: DBTYPE_STR; ODBC: SQL_VARCHAR): A null-terminated string of alphanumeric characters; maximum length must be specified. An extra byte is required for the null flag.
cv_datetime (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): A CorVision date-time format.
date (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): ODBC date format.
date6 (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Date in a string having the form YYMMDD.
date8 (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Date in a string having the form YYYYMMDD.
db400_date (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): DB2 UDB date format (OS/400 machine).
db400_datetime (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): DB2 UDB date-time format (OS/400 machine).
db400_time (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): DB2 UDB time format (OS/400 machine).
decimal (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Packed decimal. Maximum number of digits: 31. Maximum fractions: 11. Length: int(number of digits/2) + 1 bytes.
dfloat (OLE DB: DBTYPE_R8; ODBC: SQL_DOUBLE): Double floating point number (D_FLOAT). Size: 8 bytes. Range: 0.29E-38 to 1.7E38. Precision: 16 digits.
double (OLE DB: DBTYPE_R8; ODBC: SQL_DOUBLE): Double floating point number (G_FLOAT). Size: 8 bytes. Range: 0.56E-308 to 0.90E308. Precision: 15 digits.
filler (OLE DB: DBTYPE_BYTES; ODBC: SQL_BINARY): Allocation for future use, string type; length must be specified.
fixed_cstring (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): A fixed null-terminated string of numeric characters; length must be specified. An extra byte is required for the null flag.
ieee_double (OLE DB: DBTYPE_R8; ODBC: SQL_DOUBLE): IEEE double floating point number.
ieee_float (OLE DB: DBTYPE_R4; ODBC: SQL_REAL): IEEE single floating point number.
image (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Binary image (BLOB).
int_date (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Date in a four byte integer. Format: YYMMDD or YYYYMMDD. Example: 23-Jul-1998 has the form 980723 or 19980723.
int1 (OLE DB: DBTYPE_I4; ODBC: SQL_TINYINT): Signed byte integer. Size: 1 byte. Range: -128 to +127.
int2 (OLE DB: DBTYPE_I2; ODBC: SQL_SMALLINT): Signed word integer. Size: 2 bytes. Range: -32768 to +32767.
int3 (OLE DB: DBTYPE_I4; ODBC: SQL_INTEGER): Signed integer. Size: 3 bytes.
int4 (OLE DB: DBTYPE_I4; ODBC: SQL_INTEGER): Signed long integer. Size: 4 bytes. Range: -2147483648 to +2147483647.
int6 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_INTEGER): Signed integer. Size: 6 bytes.
int8 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed quadword. Size: 8 bytes. Range: -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807.
isam_decimal (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): CISAM and DISAM packed decimal.
jdate (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Julian date. Size: 2 bytes. Bits 0-6: (non-century) year. Bits 7-15: day of the year.
logical (OLE DB: DBTYPE_I4; ODBC: SQL_INTEGER): Signed long integer. Values: 1 or true (not case sensitive) are inserted as 1. Any other value is false, and inserted as 0.
magic_pc_date (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): Magic PC date format.
magic_pc_time (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): Magic PC time format.
mvs_date (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): z/OS date format.
mvs_datetime (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): z/OS date-time format.
mvs_time (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): z/OS time format.
nls_string (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): String based on language and driven by table.
numeric_cstring (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): A null-terminated string of numeric characters; maximum length must be specified. An extra byte is required for the null flag.
numstr_bdn (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. Sign is the first character of the string. Maximum number of digits: 31. Maximum fractions: 11. Note: the number of fractions includes the decimal point.
numstr_lse (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): HP NonStop signed numeric string. A left overpunched sign is implemented. Maximum number of digits: 31. Maximum fractions: 11.
numstr_nl (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. Sign is the first character of the string. Maximum number of digits: 31. Maximum fractions: 11.
numstr_nlo (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. A left overpunched sign is implemented. Maximum number of digits: 31. Maximum fractions: 11.
numstr_nr (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. Sign is the last character of the string. Maximum number of digits: 31. Maximum fractions: 11.
numstr_s (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. A right overpunched sign is implemented. Maximum number of digits: 31. Maximum fractions: 11. The number must be right justified (for example, " 1234N" is -12345). The number can be left padded by either spaces or zeros. If a scale is provided, it is a fixed positional scale; no decimal point is provided in the data (for example, a value of "1234E" with scale 2 is interpreted as "123.45").
numstr_tse (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): HP NonStop signed numeric string. A right overpunched sign is implemented. Maximum number of digits: 31. Maximum fractions: 11.
numstr_u (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Unsigned numeric string. Maximum number of digits: 31. Maximum fractions: 11.
numstr_zoned (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. Maximum number of digits: 31. Maximum fractions: 11.
ole_date (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): OLE DB date format.
ole_decimal (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): OLE DB packed decimal.
ole_numeric (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): OLE DB numeric string.
ora_time (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): Oracle time format.
oracle_time (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): Oracle time format.
padded_str_date (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Padded date format. Not null terminated.
padded_str_datetime (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Padded date format. Not null terminated.
padded_str_time (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Padded date format. Not null terminated.
phdate (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Size: 2 bytes. Bits 0-6: (non-century) year. Bits 7-10: number of month. Bits 11-15: day of month.
scaled_int1 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed byte integer. Size: 1 byte. Range: -128 to +127. Maximum: 3.
scaled_int2 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed word integer. Size: 2 bytes. Range: -32768 to +32767. Maximum: 5.
scaled_int3 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed integer. Size: 3 bytes.
scaled_int4 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed long integer. Size: 4 bytes. Range: -2147483648 to +2147483647. Maximum: 10.
scaled_int6 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed integer. Size: 6 bytes.
scaled_int8 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed quadword. Size: 8 bytes. Range: -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807. Maximum: 19.
scaled_uint1 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_DOUBLE): Unsigned byte integer. Size: 1 byte. Range: 0 to 254. Maximum: 3.
scaled_uint2 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC(5)): Unsigned word integer. Size: 2 bytes. Range: 0 to 65534. Maximum: 5.
scaled_uint4 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC(10)): Unsigned long integer. Size: 4 bytes. Range: 0 to 4,294,967,294. Maximum: 10.
single (OLE DB: DBTYPE_R4; ODBC: SQL_REAL): Single floating point number (F_FLOAT). Size: 4 bytes. Range: 0.29E-38 to 1.7E38. Precision: 6 digits.
str_date (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Atomic date string. Size: 10 characters. Format: YYYY-MM-DD.
str_datetime (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Atomic date-time string. Size: 23 characters. Format: YYYY-MM-DD HH:MM:SS.FFF.
str_time (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Atomic time string. Size: 8 characters. Format: HH:MM:SS.
string (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): String of alphanumeric characters; length must be specified.
tandem_date (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Date in a string. Format: YYYY-MM-DD.
tandem_datetime (OLE DB: DBTYPE_DBTIME; ODBC: SQL_TIME): Date and time in a string. Format: YYYY-MM-DD:HH:MM:SS.FFFFFF.
tandem_time (OLE DB: DBTYPE_TIMESTAMP; ODBC: SQL_TIMESTAMP): Date and time in a string. Format: HH:MM:SS.
text (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Text data (BLOB).
time (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): ODBC time format.
timestamp (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): ODBC date-time format.
ubits (OLE DB: DBTYPE_I4; ODBC: SQL_TINYINT): An unsigned number of bits within a byte. Size: 1 bit to 1 byte. Format: <field name="name" datatype="bits" onBit="n" size="m"/>, where n specifies which bit (within a byte) to start from and m is the number of bits. If n is not specified, it defaults to 1 for the first occurrence of the field and is contiguous thereafter. The maximum number of bits you can map is 31.
uint1 (OLE DB: DBTYPE_UI1; ODBC: SQL_TINYINT): Unsigned byte integer. Size: 1 byte. Range: 0 to +254.
uint2 (OLE DB: DBTYPE_I4; ODBC: SQL_INTEGER): Unsigned word integer. Size: 2 bytes. Range: 0 to +65534.
uint4 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC(11)): Unsigned long integer. Size: 4 bytes. Range: 0 to +4,294,967,294.
uint6 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Unsigned integer. Size: 6 bytes.
unicode (OLE DB: DBTYPE_WSTR; ODBC: SQL_VARCHAR): A null-terminated alphanumeric unicode string; maximum length must be specified.
varstring (OLE DB: DBTYPE_STR; ODBC: SQL_VARCHAR): 16-bit count, followed by a string.
varstring4 (OLE DB: DBTYPE_STR; ODBC: SQL_VARCHAR): 32-bit count, followed by a string.
vms_date (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_DATE): OpenVMS date-time format.

ADD Syntax
This section describes the Attunity Connect Data Dictionary. The following statements are described:

The <table> Statement
The <dbCommand> Statement
The <fields> Statement
The <field> Statement
The <group> Statement
The <variant> Statement
The <case> Statement
The <keys> Statement
The <key> Statement
The <segments> Statement
The <segment> Statement
The <foreignKeys> Statement
The <foreignKey> Statement
The <primaryKey> Statement
The <pKeySegments> Statement

The <table> Statement


The <table> statement describes the general record or table attributes.

Syntax
<table name="table_name" attribute="value" ...>
  <fields>
    <field name="field_name" attribute="value" ... >
      <dbCommand>...</dbCommand>
    </field>
    ...
  </fields>
  <keys>
    <key name="param" attribute="value" ...>
      <segments>
        <segment name="param" attribute="value" ... />
        ...
      </segments>
    </key>
    ...
  </keys>
</table>

where table_name is the record/table name, up to a maximum of 40 characters. The <table> statement consists of the following components:

Table Attributes
The <fields> Statement. This statement includes the fields list.
Optionally, The <keys> Statement, which contains a list of keys.


Notes:
The table_name entry must conform to standard ANSI 92 SQL naming conventions.
When you define the structure of a table for a non-relational data source, you must include a <fields> statement.
When both <keys> and <fields> statements are present, the <keys> statement must come after the <fields> statement.

Example 5-9 <table> statement syntax

<table name="nation" organization="index" filename="d:\demo\nation" datasource="DEMO">
  <fields>
    <field name="n_nationkey" datatype="int4" />
    <field name="n_name" datatype="string" size="25" />
    <field name="n_regionkey" datatype="int4" />
    <field name="n_comment" datatype="cstring" size="152" />
  </fields>
</table>

Table Attributes
A table can have the following attributes:

alias: Replaces the table name with a logical name. Names greater than 39 characters are truncated from the left. Syntax:
alias="name"

basedOn: Specifies the table (or virtual table) on which the current table is based. This attribute is generated automatically when an array in a record is generated as a virtual table. Syntax:
basedOn="table_name::array_name"

where:
table_name: The name of the table which contains the array. If the array is nested in another array, this value is the name of the parent array.
array_name: The name of the array in the table.

Example:
<table name="EMP_CHLDRN" organization="index" basedOn="EMPLOYEE::CHILDREN" datasource="DEMO" />

Also refer to counterName and dimension1 in The <group> Statement.

datasource: Specifies the data source name as specified in the binding configuration. The repository for this data source is used to store the ADD information. This attribute must be specified. Syntax:


datasource="datasource_name"

Example:
<table name="nation" filename="d:\demo\nation" organization="index" datasource="DEMO" />

delimited: Specifies the character that delimits fields. In order to get the delimiter character into the data you must have the entire field quoted. See quoteChar further in this section. If you do not specify this attribute, ADD assumes that a comma (,) functions as the delimiting character. Syntax:
delimited="character"

Example:
<table name="nation" filename="d:\demo\nation" organization="index" delimited="/" datasource="DEMO" />

description: Specifies an optional textual description. Syntax:


description="optioanl_user_supplies_description"

filename: The filename attribute specifies the full name and location of the file. Syntax:
filename="full_filename"

where full_filename includes the full path to the file.


Note:

Data source drivers require the file suffix, except for the CISAM and DISAM drivers, where the suffix must not be specified.

Flat files and text delimited files on z/OS platforms: When defining metadata for a flat file or text delimited file, use the following syntax:
filename="high_level_qualifier.filename"
For example: filename="SYS1.AAA.AC.DATATEMP"

Example (RMS):
filename="DISK$2:[DB]NATION.DAT"

Example (DISAM):
filename="d:\demo\nation"


filter: Adds a WHERE clause to every query accessed using this table or procedure. This attribute is useful when more than one logical table is stored in the same physical file. If a query relates to data with a filter attribute defined, then the Attunity Connect Query Processor handles the query including the WHERE clause and will return an error if the query is invalid because of the added WHERE clause. To use the filter attribute, you must set the useTableFilterExpressions environment property to true. You specify this parameter in the queryProcessor node of the environment properties for the relevant binding configuration, in Attunity Studio Design perspective Configuration view. Syntax:
filter="sql_expression"

where sql_expression is a valid SQL expression combining one or more constants, literals and column names connected by operators. Column names must be prefixed with $$$ and the column must exist in the current table. Example:
<table name="nation" filename="d:\demo\nation" organization="index" filter="$$$.RECORD_TYPE = 80" datasource="DEMO" />

nBlocks: Specifies the approximate number of blocks in the table. It is used by Attunity Connect to optimize query execution.
Note: The nRows attribute must be specified if the nBlocks attribute is specified. If neither nRows nor nBlocks is specified for a table, queries over the table might be executed in a non-optimal manner.

Syntax:
nBlocks="numeral"
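For example, the statistics attributes might be specified together on a table definition, as in the following sketch (the table, file, and values are hypothetical):

<!-- hypothetical statistics: nRows and nBlocks help the query optimizer -->
<table name="nation" organization="index" filename="d:\demo\nation" datasource="DEMO" nRows="25000" nBlocks="400">
  <fields>...</fields>
</table>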

organization: Specifies the organization of a file system data provider. The organization can be one of the following: indexed, sequential, relative, or unstructured. The default is sequential. Syntax:
organization="index" | "sequential" | "relative" | "unstructured"

Example:
<table name="nation" filename="d:\demo\nation" organization="index" datasource="DEMO" />

Note:

Attunity's text-delimited and flat file drivers both support a sequential organization.


Use unstructured for unstructured Enscribe files that are not indexed. Note that you must include a filler field of size one when you have an odd record size in an even unstructured Enscribe file. Access to a specific record number of a relative file is performed by using a pseudo column to specify the record position. The hash symbol (#) is used to specify a pseudo column. For example:

SELECT * FROM colleges WHERE # = 6
INSERT INTO colleges (coll_id, coll_name, coll_status, #) VALUES (111, 'New collage', 2, 5)
DELETE FROM colleges WHERE # = 15

quoteChar: Specifies the character that quotes a string field. In order to quote a field the entire field must be quoted (leading and trailing white space before and after the start and end quote characters, respectively, are allowed). In particular you cannot start or end quoting data in the middle of a field since the quote characters will be interpreted as part of the data. In order to have a quote character in a quoted field you must escape it by preceding it with a backslash. Syntax:
quoteChar="character"

Example:
<table name="nation" filename="d:\demo\nation" organization="index" quoteChar="" datasource="DEMO" />

recordFormat: Used only with RMS data, to identify the underlying RMS record format. This attribute is for information purposes only. Within Attunity Connect, all records are treated as fixed length. Syntax:
recordFormat="undefined" | "fixed" | "variable"

Example:
<table name="nation" datasource="RMS" filename="DKA100:[USER.NAV.RMSDEMO]nation.INX" recordFormat="fixed" organization="index" />

size: Specifies the maximum size, in bytes, of a record. This attribute is useful when you only want to use part of the record. This attribute is generated automatically for RMS, Enscribe, DISAM, and CISAM data. For these data sources, do not specify a size attribute. This attribute is not supported by the flat files driver. Syntax:
size="n"

Example:
<table name="nation" filename="d:\demo\nation" organization="index" size="500" datasource="DEMO" />


tableId: The record number within a DBMS user work area. The BASIC_ADL utility (see DBMS Data Source (OpenVMS Only)) generates ADD metadata from DBMS and automatically creates a tableId attribute.
Note:

Do not change this attribute.

Syntax:
tableId="record_number"

The <dbCommand> Statement


The <dbCommand> statement is used to specify data source-specific commands for the metadata.

Syntax
<dbCommand>text</dbCommand>

Examples
<dbCommand>CMD=^PAK(K1,1)</dbCommand>

The BASIC_ADL utility for generating ADD metadata from DBMS metadata creates a dbCommand statement. For example, for a field, the dbCommand has the following form:
<dbCommand>
  field-type/set-name/record-name-of-paired-table/realm-of-paired-table/insertion-mode/retention-mode
</dbCommand>

For a part of the <key> statement:


<key unique="true">
  <dbCommand>{ AC }</dbCommand>
  <segments>
    <segment name="EMPID-2-4" />
    <segment name="EMPNAME-3-8" />
  </segments>
</key>

The <fields> Statement


The <fields> statement is used to list the field descriptions of fields in a table or procedure. A field description can be one of the following:

The <field> Statement
The <group> Statement
The <variant> Statement


Syntax
<fields>
  <field name="field_name" attribute="value" ...>
    <dbCommand>...</dbCommand>
  </field>
  <group name="field_name" attribute="value" ...>
    <fields>
      <field name="field_name" attribute="value" ... />
    </fields>
  </group>
  <variant name="field_name" attribute="value" ...>
    <case name="field_name" attribute="value" ...>
      <fields>
        <field name="field_name" attribute="value" ... />
      </fields>
    </case>
    ...
  </variant>
  ...
</fields>

The <field> Statement


The <field> statement defines the characteristics of a field that is not made up of other fields.

Syntax
<field name="field_name" attribute="value" ...>
  <dbCommand>...</dbCommand>
</field>

Example
The following code defines one field (n_NAME) and its two attributes (datatype and size):
<field name="n_name" datatype="string" size="25" />

Field Attributes
A field can have the following attributes:

autoIncrement: When set to true, specifies that this field is updated automatically by the data source during an INSERT statement and should not be explicitly specified in the INSERT statement. The INSERT statement should include an explicit list of values. This attribute is used for fields such as an order number field whose value is incremented each time a new order is entered into the data source. Syntax:


autoIncrement="true|false"

Example:
<field name="ORDER_NUM" datatype="string" size="6" autoIncrement="true" />

chapterOf: Used for DBMS metadata and specifies that the set member field is a chapter of an owner table. This attribute must be included when accessing a set member as a chapter in an ADO application. The BASIC_ADL utility (see DBMS Data Source (OpenVMS Only)) generates ADD metadata from DBMS and automatically creates this attribute. Syntax:
chapterOf="owner_table"

Example:
<field name="_M_CLASS_PART" datatype="string" size="29" chapterOf="CLASS" nullable="true"/

compressedArray: When set to true, any array that has a counter can be marked with the attribute compressedArray. Compressed arrays can be groups or single fields and they store only the members that have a value. Thus, a compressed array with a maximum of 100 elements that has only 5 elements in a particular record will include only the 5 elements in the physical file. Records are compressed and decompressed for READ and WRITE operations.
Note:

Only arrays with counters can be compressed.

Syntax:
compressedArray="true|false"
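For illustration, the following hypothetical sketch combines compressedArray with the counterName and dimension1 attributes described under Group Attributes; the field and counter names are made up:

<!-- hypothetical: up to 100 phone numbers, only the populated elements are stored -->
<field name="PHONE_COUNTER" datatype="int4" />
<group name="PHONES" dimension1="100" counterName="PHONE_COUNTER" compressedArray="true">
  <fields>
    <field name="PHONE_NUMBER" datatype="string" size="10" />
  </fields>
</group>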

datatype: Specifies the data type of a field. Syntax:


datatype="datatype"

See ADD Supported Data Types.

emptyValue: Specifies the value for the field during an insert operation, when a value is not specified. Syntax:
emptyValue="value"

Example:
<field name="RECORD_TYPE" emptyValue="80" datatype="int4" />

explicitSelect: When set to true, specifies that this field is not returned when you execute a "SELECT * FROM..." statement. To return this field, you must explicitly ask for it (in a query such as "SELECT NATION_ID, SYSKEY FROM NATION" where SYSKEY is a field defined with explicitSelect). Syntax:

explicitSelect="true|false"

Example:
<field name="_M_CLASS_PART" datatype="string" size="29" explicitSelect="true" nullable="true" />

Note:

You can disable this attribute by specifying the disableExplicitSelect attribute for the data source in the binding.

hidden: When set to true, specifies that the field is hidden from users. The field is not displayed when a DESCRIBE statement is executed on the table. Syntax:
hidden="true|false"

Example:
<field name="CURRENT_SALARY" hidden="true" datatype="decimal" size="9" />

name: Specifies the name of the field. This attribute must be specified. Syntax:
name="name"

Example:
<field name="EMP_ID" datatype="int4" />

nonSelectable: When set to true, specifies that the field is never returned when you execute an SQL statement. The field is displayed when a DESCRIBE statement is executed on the table. Syntax:
nonSelectable="true|false"

Example:
<field name="EMP_ID" description="EMPLOYEE ID" datatype="int4" nonSelectable="true" />

nonUpdateable: When set to true, specifies that the field cannot be updated (the default is false). Syntax:
nonUpdateable="true|false"

Example:
<field name="EMP_ID" description="EMPLOYEE ID" datatype="int4" nonupdateable="true" />

nRows: Specifies the approximate count of distinct column values in the table. It is used by Attunity Connect to optimize the query execution.


Syntax:
nRows="numeral"

nullable: When set to true, specifies that the field can contain NULL values. Syntax:
nullable="true|false"

Example:
<field name="_M_CLASS_PART" datatype="string" size="29" explicitSelect="true" nullable="true" />

nullSuppressed: When set to true, causes the query optimizer to ignore strategies that use a key or segment that includes a field defined as null-suppressed (that is, when rows whose value for this field is NULL do not appear in the key). For example, normally the query optimizer would use a key for a query including an ORDER BY attribute on this field. If nullSuppressed is not specified, then the query may return incomplete results when the key is used in the optimization plan. If nullSuppressed is set to true, the key is not used. To retrieve rows in a table for a field with the nullSuppressed attribute specified and that have a NULL value, specify NULL in the WHERE clause and not a value. That is, specify:
WHERE field= NULL

Specifying "WHERE field=0" will return an incorrect result.


Note: This attribute is supported for Adabas databases and OLE DB providers. For Adabas, the attribute is set at run-time according to the value returned by the ADABAS LF command.

Syntax:
nullSuppressed="true|false"

Example:
<field name="AGE" nullSuppressed="true" datatype="int4" />

nullValue: Specifies the null value for the field where the data source does not support null values, thereby providing a means of assigning a "null" value. A select statement returns the value as NULL. Syntax:
nullValue="value"

where value is a string value. Example:


<field name="RETIRED" nullable="false" nullValue="-1" datatype="int4" />


offset: Specifies an absolute offset for the field in a record. When used with a field whose data type is BIT, the offset can be stated for the first BIT data type. All following BIT data types refer to the same offset if possible. When the last mapped bit is the 8th bit, the next bit is mapped to the first bit in the next byte. Syntax:
offset="n"

Example:
<field name="EMP_ID" offset="3" datatype="int1"/> <field name="CHECK_DIGIT1" offset="3" datatype="bit" onBit="1" /> <field name="CHECK_DIGIT2" datatype="bit"/>

onBit: Specifies the position of the bit in a field with data type BIT or the starting position in a field with data type BITS. Syntax:
onBit="n"

where:
For the BIT data type: Specifies which bit the field uses.
For the BITS data type: Specifies which bit (within a byte) to start from. If n is not specified, then n defaults to 1 for the first occurrence of the field and is contiguous thereafter.
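As an illustration, the following hypothetical sketch maps two flags and a small counter onto the same byte using onBit (the field names and the offset value are made up):

<!-- hypothetical: three fields sharing one byte at offset 10 -->
<field name="FLAG_ACTIVE" datatype="bit" offset="10" onBit="1" />
<field name="FLAG_DELETED" datatype="bit" onBit="2" />
<field name="STATUS_BITS" datatype="bits" onBit="3" size="3" />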

scale: Specifies the number of characters or digits. Syntax:


scale="n"

where:
For decimal and numeric data types: The number of digits that are fractions. The number of fractions must not be greater than the number of digits. The default value is 0.
For scaled data types: The number of digits. The number must be negative.

Example:
<field name="SALARY" datatype="numstr_s" size="10" scale="2" />

size: Specifies the size of the field. Syntax:


size="n"

where n is the number of characters or digits. The number must be greater than 0. For the BITS data type, n specifies the number of used bits, starting from the value specified in the onBit attribute.

subfieldOf: The value for this attribute is generated automatically when you generate ADD metadata from ADABAS data that includes a superdescriptor based on a subfield. A field is created to base this index on, and set to the offset specified as the value of the subfieldStart attribute.


If a subfieldStart attribute is not specified, then the subfield is set by default to an offset of 1. Syntax:
subfieldOf="parent_field"

subfieldStart: The offset within the parent field where a subfield starts. If a subfieldStart attribute is not specified, then the subfield is set by default to an offset of 1. Syntax:
subfieldStart="offset_number"
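For illustration, a hypothetical subfield definition might look as follows; the field names, sizes, and offset are made up:

<!-- hypothetical: ORDER_PREFIX maps the first 3 characters of ORDER_KEY -->
<field name="ORDER_KEY" datatype="string" size="12" />
<field name="ORDER_PREFIX" datatype="string" size="3" subfieldOf="ORDER_KEY" subfieldStart="1" />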

The <group> Statement


The <group> statement defines the characteristics of a field that is made up of other fields, such as an array in a record.

Syntax
<group name="field_name" attribute="value" ...>
  <dbCommand>...</dbCommand>
  <fields>
    <field name="field_name" attribute="value" ... />
  </fields>
</group>

A <group> statement is handled as an array. Each of the array elements contains all of the subordinate fields defined in the <group> statement. The size of the array is the size of a single array element multiplied by the dimension.

Example
An array containing information about an employee's children can be defined as follows:

<group name="CHILDREN" dimension1="4">
  <fields>
    <field name="DATE_OF_BIRTH" datatype="vms_date" />
    <field name="CHILD_NAME" datatype="string" size="16" />
  </fields>
</group>

The CHILDREN structure has 4 occurrences, numbered from 0 to 3. Each occurrence consists of two fields, DATE_OF_BIRTH and CHILD_NAME. The size of a single structure occurrence is 20 (4 bytes for DATE_OF_BIRTH and 16 bytes for CHILD_NAME), and the total size of the CHILDREN array is therefore 80.

Group Attributes
A group can have the following attributes:

alias: Used to replace the default virtual table name automatically generated for an array. Virtual table names are generated by appending the array name to the parent name (either the record name or a parent array name). Thus, when an array includes another array, the name of the nested array is made up of the record name, the parent array name, and the nested array name.


When the default generated virtual table name is too long to be usable or over 39 characters, specify an alias to replace the long name. Names greater than 39 characters are truncated from the left. Syntax:
alias="name"

Example:
<group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4"
       counterName="CHILD_COUNTER">
  ...
</group>
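Once the alias is defined, SQL consumers would typically refer to the array's virtual table by the alias rather than by the longer generated name, for example (a sketch only; the rows returned depend on the data source):

SELECT * FROM EMP_CHLDRN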

counterName: Specifies the name of a field that counts the number of the elements stored in an array. Syntax:
counterName="field_name"

where field_name is the name of a field that counts the number of elements stored in the array. The counterName attribute cannot be used with an Attunity Connect procedure.
Note:

For an ADABAS database, you don't need to define a counter field, since one is created automatically with the name C-arrayname, where arrayname is the multiple value field name or periodic group field name in ADABAS.

Example: An array containing information about an employee and the employee's children can be defined as follows:
<table name="EMPLOYEE" organization="index" nRows="4"
       filename="d:\disam\EMPLOYEE" datasource="DISAMDEMO">
  <fields>
    <field name="EMP_ID" description="EMPLOYEE ID" datatype="int4" />
    <field name="CHILD_COUNTER" datatype="int4" />
    <group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4"
           counterName="CHILD_COUNTER">
      <fields>
        <field name="AGE" description="AGE" datatype="int4" />
        ...
      </fields>
    </group>
    ...
  </fields>
</table>

description: Specifies an optional textual description. Syntax:


description="optional_user_supplied_description"

dimension1: Specifies that the field is an array.


Syntax:
dimension1="n"

where n indicates the number of elements in the array. The dimension1 attribute cannot be used with an Attunity Connect procedure.
Note:

This syntax is for a one-dimensional array. For a two-dimensional array, you specify dimension2="n".

Example: An array containing information about an employee and the employee's children can be defined as follows:
<table name="EMPLOYEE" organization="index" nRows="4"
       filename="d:\disam\EMPLOYEE" datasource="DISAMDEMO">
  <fields>
    <field name="EMP_ID" description="EMPLOYEE ID" datatype="int4" />
    <field name="CHILD_COUNTER" datatype="int4" />
    <group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4"
           counterName="CHILD_COUNTER">
      <fields>
        <field name="AGE" description="AGE" datatype="int4" />
        ...
      </fields>
    </group>
    ...
  </fields>
</table>

To create the EMP_CHLDRN child table, import the table metadata to the Attunity Connect repository (by right-clicking the data source in Attunity Studio Design perspective Metadata view and selecting Import XML definitions).
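As noted above, a two-dimensional array adds a dimension2 attribute. The following is a sketch only; the group and field names are assumptions used for illustration:

<group name="MONTHLY_SALES" dimension1="4" dimension2="12">
  <fields>
    <field name="AMOUNT" datatype="int4" />
  </fields>
</group>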

name: Specifies the name of the field. This attribute must be specified. Syntax:
name="name"

Example:
<group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4"
       counterName="CHILD_COUNTER">
  <fields>
    ...
  </fields>
</group>

offset: Specifies an absolute offset for the group. Syntax:


offset="offset"


The <variant> Statement


Variants are similar to REDEFINES constructs in COBOL and to unions in C. The basic concept is that the same physical area in the buffer is mapped several times. The mappings can be of the following:

* Different nuances of the same data.
* Different usage of the same physical area in the buffer.

This section describes the common use cases of variants and how they are represented in the variant syntax. The following variant types are available:

* Variant without selector
* Variant with selector

Variant without selector


Variants without selectors are used to define different cases of the variants and represent different ways of looking at the same data.

COBOL Example:
20 PARTNUM PIC X(10).
20 PARTCD REDEFINES PARTNUM.
   30 DEPTCODE PIC X(2).
   30 SUPPLYCODE PIC X(3).
   30 PARTCODE PIC X(5).

In this example one case includes a PARTNUM field of 10 characters while the other case, PARTCD, maps the same part number to a 2 character DEPTCODE, a 3 character SUPPLYCODE and a 5 character PARTCODE. The two variant cases are just different ways of viewing the same item of data. In Attunity Studio, the Import Manipulation Panel enables replacing any variant with the fields of a single case. The metadata generated by Studio following a Metadata import appears as follows:
<variant name="VAR_0">
  <case name="UNNAMED_CASE_1">
    <fields>
      <field name="PARTNUM" datatype="string" size="10"/>
    </fields>
  </case>
  <case name="PARTCD">
    <fields>
      <field name="DEPTCODE" datatype="string" size="2"/>
      <field name="SUPPLYCODE" datatype="string" size="3"/>
      <field name="PARTCODE" datatype="string" size="5"/>
    </fields>
  </case>
</variant>

Variant with selector


Different cases of the variant represent different ways in which to use the physical area in the buffer. For every record instance there is only one case which is valid, the others are irrelevant. Additional fields in the buffer help determine which variant case is valid for the current record.

COBOL Example:
10 ORDER.
   20 RECTYPE PIC X.
      88 ORD-HEADER VALUE "H".
      88 ORD-DETAILS VALUE "D".
   20 ORDER-HEADER.
      30 ORDER-DATE PIC 9(8).
      30 CUST-ID PIC 9(9).
   20 ORDER-DETAILS REDEFINES ORDER-HEADER.
      30 PART-NO PIC 9(9).
      30 QUANTITY PIC 9(9) COMP.

In this example, each record is either an order header record or an order item record, depending on the value of the RECTYPE field. This construct can be mapped as a variant with a selector, where the RECTYPE field is the selector. During a metadata import from COBOL, all variants are assumed to be variants without selectors, unless a selector is specified in the Import Manipulation panel; the COBOL syntax doesn't distinguish between different types of variants or REDEFINES constructs. In COBOL, only the program logic includes this distinction. See Working with Metadata in Attunity Studio for additional information.

ADD Syntax: The following is the ADD syntax used for setting variants:
<variant name="variant_name">
  <case name="case_name" value="val" ...>
    <fields>
      <field name="field_name" ... />
    </fields>
  </case>
  <case ...>
    ...
  </case>
</variant>

The metadata generated by Attunity Studio following a metadata import appears as follows:
<field name="RECTYPE" datatype="string" size="1"/>
<variant name="VAR_1" selector="RECTYPE">
  <case name="ORDER_HEADER" value="H">
    <fields>
      <field name="ORDER_DATE" datatype="numstr_u" size="8"/>
      <field name="CUST_ID" datatype="numstr_u" size="9"/>
    </fields>
  </case>
  <case name="ORDER_DETAILS" value="D">
    <fields>
      <field name="PART_NO" datatype="numstr_u" size="9"/>
      <field name="QUANTITY" datatype="uint4" size="4"/>
    </fields>
  </case>
</variant>

Usage Notes

* From an SQL consumer, none of the <variant> or <case> fields are visible. Only the simple fields are accessible.
* For a variant with a selector, all fields are reported as nullable regardless of their backend definition. For every record instance, only the relevant case shows values; the rest of the cases contain NULLs.
* When updating or inserting either type of variant, it is up to the user to ensure that only a single case is given values. Attempting to set fields from two or more cases results in unpredictable behavior.
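For example, assuming the ORDER record above is exposed as a table named ORDERS (the table name is an assumption for illustration), a query selects only the simple fields of both cases; for a header record, the detail columns return NULL:

SELECT RECTYPE, ORDER_DATE, CUST_ID, PART_NO, QUANTITY
FROM ORDERS
WHERE RECTYPE = 'H'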

Resolving Variants in Attunity Studio


In Attunity Studio, variants are resolved in the data source metadata Import Manipulation panel.

To resolve variants in the Import Manipulation panel
1. In the Validation tab, double-click the variant to resolve. The variants in the COBOL copybook are displayed (you can expand the variants to expose the variant cases).
2. Right-click the variant and select Structures, and then Mark selector. The Select Selector Field screen opens.
3. Select the selector for the variant from the list of selectors in the COBOL copybook.
4. Click OK.
5. Repeat as needed to set variants with selectors.
6. Click OK.

Variant Attributes
A variant can have the following attributes:

name: Specifies the name of the variant. This attribute must be specified. Syntax:
name="name"

Example:
<variant name="VAR_SEX" selector="SEX">
  <case name="CASE_1_1" value="M">
    <fields>
      <field name="M_SCHOOL" datatype="string" size="20" />
    </fields>
  </case>
  <case name="CASE_1_2" value="F">
    <fields>
      <field name="F_SCHOOL" datatype="string" size="20" />
    </fields>
  </case>
</variant>

selector: Specifies the name of a field whose value determines which of the alternate variant definitions is used in the current record (row). When a selector attribute is specified, a value attribute must be specified in the <case> statement. Syntax:
selector="field_name"

Example:
<field name="SEX" datatype="string" size="1" />
<variant name="VAR_SEX" selector="SEX">
  <case name="CASE_1_1" value="M">
    <fields>
      <field name="M_SCHOOL" datatype="string" size="20" />
    </fields>
  </case>
  <case name="CASE_1_2" value="F">
    <fields>
      <field name="F_SCHOOL" datatype="string" size="20" />
    </fields>
  </case>
</variant>

The <case> Statement


The <case> statement specifies an alternative definition that maps to the same storage area. The <case> statement can include:

* The <field> Statement
* The <group> Statement
* The <variant> Statement

Syntax
<case name="field_name" attribute="value" ...>
  <fields>
    <field name="field_name" attribute="value" ... />
  </fields>
</case>

Case Attributes
A case can have the following attributes:


name: Specifies a name for the case. When a selector attribute is not specified in the <variant> statement, a name attribute must be specified here. Syntax:
name="name"

Example:
<variant name="VAR_SEX" selector="SEX">
  <case name="CASE_1_1" value="M">
    <fields>
      <field name="M_SCHOOL" datatype="string" size="20" />
    </fields>
  </case>
  <case name="CASE_1_2" value="F">
    <fields>
      <field name="F_SCHOOL" datatype="string" size="20" />
    </fields>
  </case>
</variant>

value: Specifies the value for a variant definition that is used in the current record (row) for the field specified in the <variant> statement via the selector attribute. When a selector attribute is specified in the <variant> statement, a value attribute must be specified here. Syntax:
value="value"

Example:
<case name="CASE_1_1" value="M">
  <fields>
    <field name="M_SCHL" datatype="string" size="20" />
  </fields>
</case>
<case name="CASE_1_2" value="F">
  <fields>
    <field name="F_SCHL" datatype="string" size="20" />
  </fields>
</case>

The <keys> Statement


The <keys> section of a table definition describes the keys of the table. A list of key description statements is included in the <keys> statement.

Syntax
<keys>
  <key name="key_name" attribute="value" ...>
    <segments>
      <segment name="segment_name" attribute="value" ... />
      ...
    </segments>
  </key>
  <key name="key_name" ...>
    <segments>
      <segment name="segment_name" ... />
      ...
    </segments>
  </key>
</keys>

Example
<keys>
  <key name="nindex" unique="true">
    <segments>
      <segment name="n_nationkey" />
    </segments>
  </key>
</keys>

The <key> Statement


The <key> statement describes a key of the table. An optional list of key attributes and a list of segment statements are included in the <key> statement.

Syntax
<key name="key_name" attribute="value" ...>
  <dbCommand>...</dbCommand>
  <segments>
    <segment name="segment_name" attribute="value" ... />
    ...
  </segments>
</key>

Key Attributes
A key can have the following attributes:

bestUnique: When set to true, specifies that the query optimizer chooses an optimization strategy that uses this key in preference to any other strategy. Use this attribute on keys containing a field that represents a bookmark of the record (and consequently retrieval is assumed to be faster than with other keys). Fields that represent a bookmark include ROWID in Oracle, DBKEY in DBMS, and ISN in ADABAS. Syntax:
bestUnique="true|false"

Example:
<key unique="true" bestUnique="true" hashed="true">
  <segments>
    <segment name="ISN" />
  </segments>
</key>

clustered: When set to true, indicates that this key reflects the physical organization of the table and is used to determine the query optimization strategy used by Attunity Connect. Syntax:
clustered="true|false"

Example:
<key clustered="true">
  <segments>
    <segment name="DBKEY" />
  </segments>
</key>

descending: When set to true, specifies whether the order of the current key is descending. If this attribute is not specified for the key, it can be specified per segment of the key. The default is ASCENDING. Syntax:
descending="true|false"

Example:
<key unique="true" descending="true" nRows="30">
  <segments>
    <segment name="EMPLOYEE_ID" />
  </segments>
</key>

description: Specifies an optional textual description. Syntax:


description="optional_user_supplied_description"

hashed: When set to true, indicates that this is a hash key and is used to determine the query optimization strategy used by Attunity Connect. Syntax:
hashed="true|false"

Example:
<key unique="true" bestUnique="true" hashed="true">
  <segments>
    <segment name="ISN" />
  </segments>
</key>

hierarchical: When set to true, specifies that the query optimizer chooses an optimization strategy that uses this key in preference to any other strategy. Use this attribute for DBMS databases, on keys containing a DBKEY field that represents a bookmark of the record (and consequently retrieval is assumed to be faster than with other keys). Syntax:


hierarchical="true|false"

Example:
<key unique="true" hierarchical="true" >
  <segments>
    <segment name="DBKEY" />
  </segments>
</key>

indexId: Identifies the physical key for the record. You can use this attribute only with the key and not with a segment. The use of the field is data source dependent. Syntax:
indexId="previously_defined_key"

For an Enscribe alternate key, the indexId attribute is the ASCII value corresponding to the 2 bytes of the key specifier surrounded by quotes. For details, see Enscribe Data Source (HP NonStop Only).

nRows: Specifies the approximate count of distinct key values in the key. It is used by Attunity Connect to optimize query execution. For a unique key, the nRows value must be equal to the nRows value for the record (that is, the number of distinct key values is the same as the number of rows). Syntax:
nRows="numeral"

nullSuppressed: When set to true, causes the query optimizer to ignore strategies that use a key that includes a field defined as null-suppressed (that is, when rows whose value for this field is NULL do not appear in the key). For example, normally the query optimizer would use a key for a query including an ORDER BY attribute on this field. If nullSuppressed is not specified, the query may return incomplete results when the key is used in the optimization plan. If nullSuppressed is specified, the key is not used. To retrieve rows in a table for a field with the nullSuppressed attribute specified and that have a NULL value, specify NULL in the WHERE clause and not a value. That is, specify:
WHERE field=NULL

Specifying "WHERE field=0" will return an incorrect result.


Note: This attribute is supported for Adabas databases and OLE DB providers. For Adabas, the attribute is set at run-time according to the value returned by the Adabas LF command.
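A minimal sketch of a key definition carrying this attribute (the key and segment names are illustrative):

<key name="SUPER_AGE" nullSuppressed="true">
  <segments>
    <segment name="AGE" />
  </segments>
</key>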

unique: When set to true, indicates that each key entry uniquely identifies one row and is used to determine the query optimization strategy used by Attunity Connect. Syntax:
unique="true|false"

Example:

<key unique="true" bestUnique="true">
  <segments>
    <segment name="ISN" />
  </segments>
</key>

The <segments> Statement


The <segments> section of a table definition describes the segments of the key. A list of <segment> statements is included in the <segments> statement.

Syntax
<segments>
  <segment name="segment_name" attribute="value" .../>
  ...
</segments>

The <segment> Statement


The <segment> statement describes a segment of the key. An optional list of segment attributes are included in the <segment> statement.

Syntax
<segment name="segment_name" attribute="value" ...>
  <dbCommand>...</dbCommand>
</segment>

Segment Attributes
Each segment can have the following attributes:

descending: When set to true, specifies that the order of the current segment is descending. The default is ASCENDING. Syntax:
descending="true|false"

Example:
<key unique="true" nRows="30">
  <segments>
    <segment name="EMPLOYEE_ID" descending="true" />
  </segments>
</key>

name: Specifies the name of the segment. This attribute must be specified. Syntax:
name="name"

Example:
<key unique="true" nRows="30">
  <segments>
    <segment name="EMPLOYEE_ID" />
  </segments>
</key>

nRows: Specifies the approximate count of distinct segment values in the key. It is used by Attunity Connect to optimize query execution. Syntax:
nRows="numeral"

nullsLast: When set to true, causes value comparison operations to treat nulls as the greater of the values being compared. Syntax:
nullLast="true|false"

nullSuppressed: When set to true, causes the query optimizer to ignore strategies that use a segment that includes a field defined as null-suppressed (that is, when rows whose value for this field is NULL do not appear in the key). Syntax:
nullSuppressed="true|false"
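A sketch combining the segment-level attributes described above (the key and segment names are illustrative):

<key name="BY_REGION">
  <segments>
    <segment name="REGION" descending="true" nullSuppressed="true" nRows="40" />
  </segments>
</key>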

The <foreignKeys> Statement


The <foreignKeys> statement defines the foreign keys of the table. A list of foreign key description statements is included in the <foreignKeys> statement.
Note:

The <foreignKeys> statement is not available in the Attunity Studio Design perspective Metadata view.

Syntax
<foreignKeys>
  <foreignKey name="key_name" attribute="value" ...>
    <fkeySegments>
      <fkeySegment attribute="value" ... />
    </fkeySegments>
  </foreignKey>
</foreignKeys>

The <foreignKey> Statement


The <foreignKey> statement describes a foreign key of a table. The <foreignKey> statement defines the following:

* The name of the foreign key.
* The external table referenced by the foreign key.
* The primary key of the referencing table.
* The segments of the foreign key.
* The referential integrity rule to be implemented when the primary key field in the referencing table is either updated or deleted.

Note:

The <foreignKey> statement is not available in the Attunity Studio Design perspective Metadata view.

Syntax
<foreignKey name="key_name"
            referencingTable="external_table"
            referencingPKey="external_table_primary_key"
            updateRule="cascade|restrict|setNull"
            deleteRule="cascade|restrict|setNull">
  <fkeySegments>
    <fkeySegment fName="local_table_field" pName="referenced_table_field"/>
    ...
  </fkeySegments>
</foreignKey>

Example
<foreignKeys>
  <foreignKey name="fkey1" referencingTable="table2"
              referencingPKey="table2_pkey" updateRule="cascade"
              deleteRule="setNull">
    <fKeySegments>
      <fKeySegment fName="col2" pName="table2_col3" />
      <fKeySegment fName="col5" pName="table2_col2" />
    </fKeySegments>
  </foreignKey>
  <foreignKey name="fkey2" referencingTable="table3"
              referencingPKey="table3_pkey" updateRule="restrict"
              deleteRule="restrict">
    <fKeySegments>
      <fKeySegment fName="col1" pName="table3_col1" />
    </fKeySegments>
  </foreignKey>
</foreignKeys>

foreignKey Attributes
Each foreignKey can have the following attributes:

name: The name of the foreign key.

referencingTable: The name of the external table referenced by the foreign key.

referencingPKey: The primary key of the external referencing table.

updateRule: The referential integrity rule for the foreign key when the primary key of the referenced table is updated. The specific option is determined by the application accessing the data:
* cascade
* restrict
* setNull

For a general description, see Referential Integrity.



deleteRule: The referential integrity rule for the foreign key when the primary row of the referenced table is deleted. The specific option is determined by the application accessing the data:
* cascade
* restrict
* setNull

fName: The name of the local table field referenced in this foreign key segment.

pName: The name of the referenced table field in this foreign key segment.

The <primaryKey> Statement


The <primaryKey> statement describes the primary key of a table. The <primaryKey> statement uses <pKeySegments> and <pKeySegment> statements to define the segments constituting the primary key.
Note:

The <primaryKey> statement is not available in the Attunity Studio Design perspective Metadata view.

Syntax
<primaryKey name="name">
  <pKeySegments>
    <pKeySegment segment="name"/>
    ...
  </pKeySegments>
</primaryKey>

The <pKeySegments> Statement


The <pKeySegment> statements define the segments that make up the primary key of a table.
Note:

The <pKeySegment> statement is not available in the Attunity Studio Design perspective Metadata view.

Syntax
<pKeySegments>
  <pKeySegment segment="name"/>
  ...
</pKeySegments>

Example
<primaryKey name="pk">
  <pKeySegments>
    <pKeySegment segment="col1"/>
    <pKeySegment segment="col2"/>
  </pKeySegments>
</primaryKey>


pKeySegment Attributes
The pKeySegment statement can have the segment attribute, which specifies the name of a segment constituting the primary key of a table.


6
Working with Metadata in Attunity Studio
This chapter includes the following sections:

Overview Managing Data Source Metadata Importing Data Source Metadata with the Attunity Import Wizard Working with Application Adapter Metadata

Overview
AIS uses metadata to access and read a data source. AIS can use the native metadata for many data sources, such as Relational Data Sources or Adabas. However, some data sources require Attunity's own metadata (ADD). Metadata can be imported, saved, and managed in the Attunity Studio Design perspective, on the Metadata tab. The following data sources require Attunity metadata:

* CISAM/DISAM Data Source
* DBMS Data Source (OpenVMS Only)
* Enscribe Data Source (HP NonStop Only)
* Flat File Data Source
* IMS/DB Data Sources
* RMS Data Source (OpenVMS Only)
* Text Delimited File Data Source
* VSAM Data Source (z/OS)
* OLEDB-FS (Flat File System) Data Source

An Adabas Driver is available if the Adabas Predict metadata is not available.

Managing Data Source Metadata


You manage data source metadata in the Attunity Studio Design perspective, in the Metadata editor. The Metadata editor is used to display the metadata for any data source. Metadata for data sources where AIS uses the native metadata is displayed as read-only in the Metadata editor. The editor lets you view and modify Attunity metadata for data sources that require Attunity metadata.


To view and edit data source metadata
1. In the Design perspective Configuration view, expand the Machines folder and then expand the machine where you want to add the data source.
2. Expand the Bindings folder and then expand the binding with the data source metadata you are working with.
3. Expand the Data Source folder.
4. Right-click the data source for which you want to manage the metadata and select Show Metadata View. The Metadata tab opens with the selected data source displayed in the Configuration view.
5. Right-click the resource (such as the data source table) in the Metadata view and select Edit.

Data source tables are edited using the following tabs, which are at the bottom of the editor screen:

* General Tab: Enter general information about the table, such as the table name, the way the table is organized, and the location of the table.
* Columns Tab: Edit information about the table columns and their properties. For example, the column data type, size, and scale.
* Indexes Tab: Edit information about the indexes of a table. The indexes are described by the order of the rows they retrieve, the data source commands used, and the index type.
* Statistics Tab: Enter the statistics for the table, including the number of rows and blocks in the table.
* Source Tab: View the metadata in its XML representation.
Note:

Attunity Connect provides a relational model for all data sources defined to it. Thus, relational terminology is used, even when referring to non-relational data sources (File-system Data Source). For example, the metadata for an RMS record is referred to as the metadata for an RMS table.

General Tab
The General tab lets you maintain information about the table, such as the table name and the way the table is organized.


Figure 6-1 The Data Source Metadata General Tab

Note:

The fields displayed depend on the type of data source used.

The table below describes the parts on this tab.


Table 6-1 Metadata General Tab

Description: Enter a description of the table.

Table Properties: These fields provide general configuration information for the metadata. The fields displayed depend on the data source and table attributes of the selected table.

Data file location: The name of the file that contains the table. You must enter the full path and include the file extension for the file. For example, D:\COBOL\orders.cob. You can click Browse to find and enter the location of the table file. Note: Do not enter the file extension for DISAM or CISAM files.

Organization: Select how the record is organized:
* Index: Select this if the source data has an index column. All searches are according to the index column.
* Sequential: Select this if the source does not have a key and all requests search the table columns sequentially.
* Relative: Access to a specific record number of a relative file is performed by using a pseudo column to specify the record position. The (#) symbol indicates a pseudo column.


Table 6-1 (Cont.) Metadata General Tab

Maximum record length: The maximum size of the record in bytes. The value for this field is generated automatically for RMS, Enscribe, DISAM, and CISAM data. For these data sources, leave this field empty.

Delimited: Enter a character to act as a delimiter in the metadata. If nothing is entered, a comma (,) is used as the delimiter.

Quote Character: The character that quotes a string field.

Record Format: Select how the record is formatted according to its size:
* Undefined: There is no set definition for the record size.
* Fixed: The record length is fixed.
* Variable: The record length varies.

DB Command: Enter special commands for the data source you are working with.

Filter expression: You can create a filter by entering a WHERE clause. You should use a filter when more than one logical table is stored in the same physical file. To enter a filter, click Set filter expression and enter an expression in the field below. The following is an example of the WHERE clause syntax: "$$$.expression". Note: To use a filter, you must select Use Table Filter Expressions in the Query Processor section of the Environment Properties in Attunity Studio.
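For example, if two logical tables share one physical file and a record-type column distinguishes them, a filter expression along the following lines could be used (the column name and value are illustrative, not taken from the sample data):

$$$.RECTYPE = 'H'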

Columns Tab
The Columns tab lets you specify ADD metadata that describes the table columns. This tab is divided into the following:

* Column Definition Section
* Column Properties


Figure 6-2 Data Source Metadata Columns Tab

Column Definition Section


The top section of this tab lets you define the columns in the source data. You can click in any row (which represents a column in the data base table) to edit the information. The following table describes this section.
Table 6-2 Metadata Column Tab Definitions

Name: The name of the column.

Data type: The data type of the column. Selecting this field displays a drop-down box listing the possible data types. For details about these data types, see ADD Supported Data Types.

Size: The size of the column.

Scale: The information entered in this field depends on the data type: For decimal data types, this is the number of digits to the right of the decimal place. This number must not be greater than the number of digits. The default value is 0. For scaled data types, this is the total number of digits. The number must be negative.


Table 6-2 (Cont.) Metadata Column Tab Definitions

Dimension: The maximum number of occurrences of a group of columns that make up an array. The (+) to the left of a column indicates a group field. This type of field will have a Dimension value. Click (+) to display the group members.

Offset: An absolute offset for the field in a record.

Fixed offset: This column lets you determine whether to calculate the offset. There are two options:
* Calc offset: If you clear this check box, the absolute offset for each of the columns is calculated.
* Fixed offset: When you select this check box, you will have a fixed offset. The offset of a field is usually calculated dynamically by the server at runtime according to the offset and size of the preceding column. Select the check box in this column to override this calculation and specify a fixed offset at design time. This can happen if there is a part of the buffer that you want to skip. By selecting the check box, or by editing the offset value, you pin the offset for that column. The indicated value is used at runtime for the column instead of a calculated value. Note that the offsets of following columns that do not have a fixed offset are calculated from this fixed position.

Primary Key: Select this to indicate that this column is a primary key.

The buttons on the right side of the tab are used to manipulate the data in this section of the tab. The following table describes how you can move around in this section.
Table 6-3 Definition Section Buttons

Insert: Inserts a column into the table. You can insert a new column. If the table has arrays, you can add a new child column.

Up: Moves your selection to the column directly above the currently selected column.

Down: Moves your selection to the column directly below the currently selected column.

Rename: Lets you rename the selected column.

Delete: Deletes the selected column.

Find: Click this button to open a list of all columns in the database. Select a column and click OK to select it in the table.

Column Properties
You can change the property value by clicking in the Value column.

To display the column properties
Select a column from the Column Definition (top) section. The properties for the column are displayed at the bottom of the tab.


The following table shows some of the properties available for selected columns.
Table 6-4 Metadata Properties

Alias: A name used to replace the default virtual table name for an array. Virtual table names are created by adding the array name to the record name. When an array includes another array, the virtual table name of the nested array is the record name followed by the parent array name and the nested array name. When the default generated virtual table name is too long, use an Alias to replace the long name.

Autoincrement: The current field is updated automatically by the data source during an INSERT statement and is not explicitly defined in the INSERT statement. The INSERT statement should include an explicit list of values. This attribute is used for fields such as an order number field whose value is incremented each time a new order is entered to the data source.

Comment: A short note or description about the column.

DB command: Click the button to enter data source-specific commands for the column.

Empty value: The value for the field in an insert operation, when a value is not specified.

Explicit Select: When true, the current field is not returned when you execute a SELECT * FROM... statement. To return this field, you must explicitly ask for it in a query, for example, SELECT NATION_ID, SYSKEY FROM NATION, where SYSKEY is a field defined with explicitSelect. You cannot use an asterisk (*) in a query where you want to retrieve a field defined with the explicitSelect value. You can disable this value by entering the disableExplicitSelect value in the data source bindings Environment Properties.

Hidden: The current field is hidden from users. The field is not displayed when a DESCRIBE statement is executed on the table.

Non Selectable: When true, the current field is never returned when you execute an SQL statement. The field is displayed when a DESCRIBE statement is executed on the table.

Non Updateable: If true, the current field cannot be updated.

Nullable: This value allows the current field to contain NULL values.

Null value: The null value for the field during an insert operation, when a value is not specified.

Chapter of: This property shows that the set member field is a chapter of an owner field. A value for this property must be used when accessing a set member as a chapter in an ADO application. This property is used for DBMS metadata.

OnBit: The position of the bit in a BIT field and the starting bit in a BITS field.

Subfield of: The value is generated automatically when you generate ADD metadata from Adabas data that includes a superdescriptor based on a subfield. A field is created to base this index on, set to the offset specified as the value of the Subfield start field. If no value is entered in the Subfield start field, the subfield is set by default to an offset of 1.


Table 6-4 (Cont.) Metadata Properties

Subfield start: The offset within the parent field where a subfield starts.

Indexes Tab
The Indexes tab lets you indicate ADD metadata describing the table indexes.
Note: This tab contains information only if the Organization field in the General tab is set to Index.
Figure 6-3 Data Source Metadata Indexes Tab

This tab has two sections. The first section lets you define the index keys for the columns in the table. The bottom of the tab lists the properties for each of the columns at the top.

Table Information
The following table describes the fields for the top part of the tab, which defines the indexes used for the table.
Table 6-5 Index Definitions Fields

Name: The name of the column used as an index for the current table.

Order: The order of the rows that the index retrieves.


Table 6-5 (Cont.) Index Definitions Fields

DB Command: Data source-specific commands for the index.

The buttons on the right side of the tab are used to manipulate the data in this section of the tab. The following table describes how you can move around in this section.
Table 6-6 Index Definition Buttons

Insert: Inserts an index to the table.

Rename Index: Lets you rename the selected index.

Delete Index: Deletes the selected index.

Properties
You can set properties for each index column.

To display the index properties
Select a column from the Index Definitions (top) section. The properties for the column are displayed at the bottom of the tab. The properties displayed at the bottom of the tab describe the index or segment. The properties available depend on the data source.

Statistics Tab
The Statistics tab lets you specify metadata statistics for a table. Statistics can be updated with the Update Statistics utility.


Figure 6-4 Data Source Metadata Statistics Tab

This tab is divided into three sections:


* Table
* Columns
* Indexes

Table
Enter the statistical information for the table in this section.

* Rows: Enter or use the arrows to select the approximate number of rows in the table. If the value is -1, the number of rows in the table is unknown (no value was supplied and the update statistics utility was not run to update the value). A value of 0 indicates that this table is empty.
* Blocks: Enter or use the arrows to select the approximate number of blocks in the table.
Note:

If no value is entered for the number of rows or the number of blocks, queries over the table may not be executed effectively.

Columns
Enter the cardinality for each of the columns in the table in this section.

* Column Name: The columns in the table.
* Cardinality: The number of distinct values for the column. If the value is -1, the number of distinct values for the column is unknown (a value was not supplied and the update statistics utility was not run to update the value). A value of 0 indicates that there are no distinct values for the column.

Indexes
Enter the cardinality for the columns in each of the tables indexes in this section.

* Indexes and Segments: The indexes and segments in the table.
* Cardinality: The number of distinct key values in the index. If the value is -1, the number of distinct key values in the index is unknown (no value was supplied and the update statistics utility was not run to update the value). A value of 0 indicates that there are no distinct key values in the index.

Update Button
Update: Opens the Update Statistics window. AIS collects information about tables, indexes, and column cardinalities. These statistics can be used to optimize a query using the Query Optimizer, which finds the most efficient way to perform a query across multiple machines. This is done using the metadata statistics.
Figure 6-5 Metadata Statistic Update Window

where:

* Type: The type of statistic information being added:
  * Estimated: An estimation of the amount of statistical information returned.
  * Estimated with Rows: An estimation of the amount of statistical information returned, including an estimation of the number of rows in the table. Specify the number in the text box. This number is used to shorten the time to produce the statistics, assuming that the value specified here is the correct value, or close to the correct value.


Note:

When the number of rows in the table is not provided, the number of rows used is determined as the maximum value between the value specified in the tuning DsmMaxBufferSize environment property and the value set in the nRows attribute (specified as part of the metadata for the data source)

  * Exact: The exact statistical information returned. Note that this can be a lengthy task and can lead to disk space problems with large tables.

* Resolution: The level of the statistical information returned:
  * Default: Only information about the table and indexes is collected. Information for partial indexes and columns is not collected.
  * All Columns and Indexes: Information about the table, indexes, partial indexes, and columns is collected.
  * Select Columns and Indexes: Lets you select the columns and indexes you want to collect statistics for. In the enabled list of columns and/or indexes, left-click those columns you want included (you can use shift-click and control-click to select a number of columns and/or indexes).

The statistics are updated on the server, enabling work to continue in Attunity Studio. A message is displayed in Attunity Studio when the statistics on the server have been updated.

Modelling Tab
The Modelling tab lets you enter information about the virtual view policy for arrays. These parameters are valid only if you are using virtual array views. You configure virtual array views in the Modeling section of the binding Environment Properties.


The configurations made in this editor are for the selected table, only. The same parameters are configured on the data source level in the data source editor.
Figure 6-6 Data Source Metadata Advanced Tab

Enter the following information in this tab:


* Generate sequential view: Select this to map non-relational files to a single table.
* Generate virtual views: Select this to have individual tables created for each array in the non-relational file.
* Include row number column: Select one of the following:
  * true: Select true to include a column that specifies the row number in the virtual or sequential view. This is true for this table only, even if the data source is not configured to include the row number column.
  * false: Select false to not include a column that specifies the row number in the virtual or sequential view for this table, even if the data source is configured to include the row number column.
  * default: Select default to use the default data source behavior for this parameter.

For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.

* Inherit all parent columns: Select one of the following:
  * true: Select true for virtual views to include all the columns in the parent record. This is true for this table only, even if the data source is not configured to include all of the parent record columns.
  * false: Select false so virtual views do not include the columns in the parent record for this table, even if the data source is configured to include all of the parent record columns.
  * default: Select default to use the default data source behavior for this parameter.

For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.

Importing Data Source Metadata with the Attunity Import Wizard


This section describes how to use the Import wizard for a basic Metadata import. You can use this wizard with a non-relational data source with its own metadata. The metadata is generated from metadata files, such as COBOL copybooks. If no COBOL copybook is available, the metadata has to be manually defined in the Attunity Studio Design perspective Metadata editor. For more information, see Managing Data Source Metadata. A basic COBOL Import wizard can be used with most of the Attunity data sources. The following data source drivers have wizards specific to that data source for importing Metadata. This wizard is similar to the basic COBOL wizard, with special steps necessary for the specific data source.

* Adabas C Data Source
* Enscribe Data Source (HP NonStop Only)
* Flat File Data Source
* IMS/DB Data Sources
* RMS Data Source (OpenVMS Only)
* VSAM Data Source (z/OS)

For Importing Procedure Metadata, an import wizard is available for these procedure data sources:

CICS Procedure Data Source


Note:

Attunity metadata is independent of its origin. Therefore, any changes made to the source metadata (for example, the COBOL copybook) are not made to the Attunity metadata

Starting the Import Process


This section describes the steps required to begin importing metadata from data sources that must use Attunity metadata, such as COBOL copybooks. This metadata is used to generate the Attunity metadata.

To begin the import process
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine with the data source where you are importing the metadata.
3. Expand the Bindings folder and expand the binding with the data source metadata you are working with.
4. Expand the Data Source folder.
5. Right-click the data source that you are working with and select Show Metadata View. The Metadata view opens with the selected data source displayed.
6. Right-click Imports under the data source and select New Import. The New Import dialog box is displayed, as shown in the following figure:

Figure 6-7 The New Import Screen

7. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
8. Select the Import type from the list. The import types in the list depend on the data source you are working with.
9. Click Finish. The Metadata import wizard opens with the Get Input Files screen, as described in the Selecting the Input Files step.

Selecting the Input Files


This section describes the steps required to select the input files that will be used to generate the metadata. It continues the process after the Starting the Import Process step.

To select the input files
1. Click Add in the Import Wizard to add COBOL copybooks. The Add Resource screen is displayed, providing the option of selecting files from the local machine or copying the files from another machine using FTP. The following figure shows the Add Resource screen.


Figure 6-8 Add Resource Screen

2. If the files are on another machine, then right-click My FTP Sites and select Add.
3. Set the FTP data connection by entering the server name where the COBOL copybooks reside and, if not using anonymous access, enter a valid username and password to access the machine.
4. To browse and transfer files required to generate the metadata, access the machine using the username as the high-level qualifier. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.
5. Select the files to import and click Finish to start the transfer. The format of the COBOL copybooks must be the same. For example, you cannot import a COBOL copybook that uses the first six columns together with a COBOL copybook that ignores the first six columns. In this type of case, repeat the import process. You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks. The selected files are displayed in the Get Input Files screen of the wizard, as shown in the following figure.


Figure 6-9 Import Wizard

6. To manipulate table information or the fields in the table, right-click the table and select the option you want. The following options are available:

* Fields manipulation: Access the Fields Manipulation screen to customize the field definitions.
* Rename: Rename a table name. This option is used especially when more than one table is generated from the COBOL with the same name.
* Set data location: Set the physical location of the data file for the table.
* Set table attributes: Set table attributes. The table attributes are described in Table Attributes.
* XSL manipulation location: Specify an XSL transformation or JDOM document that is used to transform the table definition.

7. Click Next to go to the Applying Filters step.

Applying Filters
This section describes the steps required to apply filters on the COBOL copybook files used to generate the metadata. It continues the Selecting the Input Files step.

To apply filters
1. Click Next. The Apply Filters step is displayed in the editor.


Figure 6-10 Apply Filters Screen

2. Apply filters to the copybooks, as needed. The following COBOL filters are available:

* COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-61 to treat COMP-6 as a COMP data type or COMP-62 to treat COMP-6 as a COMP-3 data type.
* Compiler source: The compiler vendor.
* Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
* Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
* Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
* Prefix nested column: Prefix all nested columns with the previous level heading.
* Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
* Case sensitive: Specifies whether to consider case sensitivity or not.
* Find: Searches for the specified value.
* Replace with: Replaces the value specified in the Find field with the value specified here.

3. Click Next to go to the Selecting Tables step.


Selecting Tables
This section describes the steps required to select the tables from the COBOL Copybooks. The following procedure continues the Applying Filters step.
1. From the Select Tables screen, select the tables that you want to access. To select all tables, click Select All. To clear all the selected tables, click Unselect All. The following figure shows the Select Tables screen.

Figure 6-11 Select Tables Screen

The import manager identifies the names of the records in the COBOL copybooks that will be imported as tables.
2. Select the tables that you want to access (that require Attunity metadata) and then click Next to go to the Import Manipulation step.

Import Manipulation
This section describes the operations available for manipulating the imported records (tables). It continues the Selecting Tables procedure. The import manager identifies the names of the records in the DDM Declaration files that will be imported as tables. You can manipulate the general table data in the Import Manipulation Screen.

To manipulate the table metadata
1. From the Import Manipulation screen (see the Import Manipulation Screen figure), right-click the table record marked with a validation error, and select the relevant operation. See the Table Manipulation Options table for the available operations.


2. Repeat step 1 for all table records marked with a validation error. You resolve the issues in the Import Manipulation Screen. Once all the validation error issues have been resolved, the Import Manipulation screen is displayed with no error indicators.
3. Click Next to continue to the Metadata Model Selection.

Import Manipulation Screen


The Import Manipulation screen is shown in the following figure:
Figure 6-12 Import Manipulation Screen

The upper area of the screen lists the DDM Declaration files and their validation status. The metadata source and location are also listed. The Validation tab at the lower area of the screen displays information about what needs to be resolved in order to validate the tables and fields generated from the COBOL. The Log tab displays a log of what has been performed (such as renaming a table or specifying a data location). The following operations are available in the Import Manipulation screen:

* Resolving table names, where tables with the same name are generated from different files during the import.
* Selecting the physical location for the data.
* Selecting table attributes.
* Manipulating the fields generated from the COBOL, as follows:
  * Merging sequential fields into one (for simple fields).
  * Resolving variants by either marking a selector field or specifying that only one case of the variant is relevant.
  * Adding, deleting, hiding, or renaming fields.
  * Changing a data type.
  * Setting the field size and scale.
  * Changing the order of the fields.
  * Setting a field as nullable.
  * Selecting a counter field for fields with dimensions (arrays). You can select the array counter field from a list of potential fields.
  * Setting column-wise normalization for fields with dimensions (arrays). You can create new fields instead of the array field, where the number of generated fields will be determined by the array dimension.
  * Creating arrays and setting the array dimension.

The following table lists and describes the available operations when you right-click a table entry:
Table 6-7 Table Manipulation Options

Fields Manipulation: Customizes the field definitions, using the Field Manipulation screen. You can also access this screen by double-clicking the required table record.

Rename: Renames a table. This option is used especially when more than one table with the same name is generated from the COBOL.

Set data location: Sets the physical location of the data file for the table.

Set table attributes: Sets the table attributes.

XSL manipulation: Specifies an XSL transformation or JDOM document that is used to transform the table definitions.

Remove: Removes the table record.

You can manipulate the data in the table fields in the Field Manipulation Screen. Double-click a line in the Import Manipulation Screen to open the Field Manipulation Screen.

Field Manipulation Screen


The Field Manipulation screen lets you make changes to fields in a selected table. You get to the Field Manipulation screen through the Import Manipulation Screen. The Field Manipulation screen is shown in the following figure.


Figure 6-13 Field Manipulation Screen

You can carry out all of the available tasks in this screen through the menu or toolbar. You can also right click anywhere in the screen and select any of the options available in the main menus from a shortcut menu. The following table describes the tasks that are done in this screen. If a toolbar button is available for a task, it is pictured in the table.
Table 6-8 Field Manipulation Screen Commands

General menu

Undo: Click to undo the last change made in the Field Manipulation screen.

Select fixed offset: The offset of a field is usually calculated dynamically by the server at runtime according to the offset and size of the preceding column. Select this option to override this calculation and specify a fixed offset at design time. This can happen if there is a part of the buffer that you want to skip. When you select a fixed offset, you pin the offset for that column. The indicated value is used at runtime for the column instead of a calculated value. Note that the offsets of following columns that do not have a fixed offset are calculated from this fixed position.


Table 6-8 (Cont.) Field Manipulation Screen Commands

Test import tables: Select this to create an SQL statement to test the import table. You can base the statement on the Full table or Selected columns. When you select this option, a screen opens with an SQL statement based on the table or column entered at the bottom of the screen. Enter the following in this screen:
* Data file name: Enter the name of the file that contains the data you want to query.
* Limit query results: Select this if you want to limit the results to a specified number of rows. Enter the number of rows you want returned in the following field. 100 is the default value.
* Define Where Clause: Click Add to select a column to use in a Where clause. In the table below, you can add the operator, value, and other information. Click on the columns to make the selections. To remove a Where clause, select the row with the Where clause you want to remove and then click Remove.
The resulting SQL statement, with any Where clauses that you added, is displayed at the bottom of the screen. Click OK to send the query and test the table.

Attribute menu

Change data type: Select Change data type from the Attribute menu to activate the Type column, or click on the Type column and select a new data type from the drop-down list.


Table 6-8 (Cont.) Field Manipulation Screen Commands

Create array: This command allows you to add an array dimension to the field. Select this command to open the Create Array screen. Enter a number in the Array Dimension field and click OK to create the array for the column.

Hide/Reveal field: Select a row from the Field Manipulation screen and select Hide field to hide the selected field from that row. If the field is hidden, you can select Reveal field.

Set dimension: Select this to change or set a dimension for a field that has an array. Select Set dimension to open the Set Dimension screen. Edit the entry in the Array Dimension field and click OK to set the dimension for the selected array.

Set field attribute: Select a row to set or edit the attributes for the field in the row. Select Set field attribute to open the Field Attribute screen. Click in the Value column for any of the properties listed and enter a new value or select a value from a drop-down list.

Nullable/Not nullable: Select Nullable to activate the Nullable column in the Field Manipulation screen. You can also click in the column. Select the check box to make the field Nullable. Clear the check box to make the field Not Nullable.

Set scale: Select this to activate the Scale column, or click in the column and enter the number of places to display after the decimal point in a data type.

Set size: Select this to activate the Size column, or click in the column and enter the total number of characters for a data type.


Table 6-8 (Cont.) Field Manipulation Screen Commands

Field menu

Add: Select this command or use the button to add a field to the table. If you select a row with a field (not a child of a field), you can add a child to that field. Select Add Field or Add Child to open a dialog box, enter the name of the field or child, and click OK to add the field or child to the table.

Delete field: Select a row and then select Delete Field or click the Delete Field button to delete the field in the selected row.

Move up or down: Select a row and use the arrows to move it up or down in the list.

Rename field: Select Rename field to make the Name field active. Change the name and then click outside of the field.

Structures menu

Columnwise Normalization: Select Columnwise Normalization to create new fields instead of the array field, where the number of generated fields will be determined by the array dimension.


Table 6-8 (Cont.) Field Manipulation Screen Commands
Combining sequential fields: Select Combining sequential fields to combine two or more sequential fields into one simple field. A dialog box opens where you enter the following information:
First field name: Select the first field in the table to include in the combined field.
End field name: Select the last field to be included in the combined field. Make sure that the fields are sequential.
Enter field name: Enter a name for the new combined field.
Flatten group: Select Flatten Group to flatten a field that is an array. This field must be defined with the Group data type. When you flatten an array field, the entries in the array are spread into a new table, with each entry in its own field. The screen that opens provides the following flattening options:
Recursive operation: Select this to repeat the flattening process on all levels. For example, if there are multiple child fields in this group, the values for each field are placed in the new table when you select this option.
Use parent name as prefix: Select this to use the name of the parent field as a prefix when creating the new fields. For example, if the parent field is called Car Details and the array has a child called Color, the new field created in the flattening operation is called Car Details_Color.


Table 6-8 (Cont.) Field Manipulation Screen Commands
Mark selector: Select Mark selector to select the selector field for a variant. This is available only for variant data types. Select the selector field from the screen that opens.
Replace variant: Select Replace variant to replace a variant's selector field.
Select counter field: Select Counter Field opens a screen where you select a field that is the counter for an array dimension.

Metadata Model Selection


This section lets you generate virtual and sequential views for imported tables that contain arrays, and configure the properties of the generated views. It continues the Import Manipulation procedure and allows you to flatten tables that contain arrays. In the Metadata Model Selection step, you can configure values that apply to all tables in the import or set specific values for each table.
To configure the metadata model
Select one of the following:


Default values for all tables: Select this if you want to configure the same values for all the tables in the import. Make the following selections when using this option:
Generate sequential view: Select this to map non-relational files to a single table.
Generate virtual views: Select this to have individual tables created for each array in the non-relational file.
Include row number column: Select one of the following:
true: Include a column that specifies the row number in the virtual or sequential view. This applies to this table only, even if the data source is not configured to include the row number column.
false: Do not include a column that specifies the row number in the virtual or sequential view for this table, even if the data source is configured to include the row number column.
default: Use the default data source behavior for this parameter. For information on how to configure this parameter for the data source, see Configuring Data Source Advanced Properties.
Inherit all parent columns: Select one of the following:
true: Virtual views include all the columns in the parent record. This applies to this table only, even if the data source is not configured to include all of the parent record columns.
false: Virtual views do not include the columns in the parent record for this table, even if the data source is configured to include all of the parent record columns.
default: Use the default data source behavior for this parameter. For information on how to configure this parameter for the data source, see Configuring Data Source Advanced Properties.

Specific virtual array view settings per table: Select this to set different values for each table in the import. This will override the data source default for that table. Make the selections in the table under this selection. See the item above for an explanation.

The Metadata Model Selection screen is shown in the following figure:


Figure 6-14 The Metadata Model Selection Screen

Import the Metadata


This section describes the steps required to import the metadata to the target computer. It continues the Metadata Model Selection step. You can now import the metadata to the computer where the data source is located, or import it later (in case the target computer is not available).
To transfer the metadata
1. Select Yes to transfer the metadata to the target computer immediately, or No to transfer the metadata later.
2. Click Finish.

The Import Metadata screen is shown in the following figure:


Figure 6-15 The Import Metadata screen

Working with Application Adapter Metadata


Adapter metadata defines the interactions for the application adapter and the structures of any input and output records used by the interactions. The metadata is stored as an application adapter definition in the SYS repository, on the machine where the adapter is defined. The metadata is in XML format. For information on the adapter metadata schema and using adapters in AIS, see Implementing an Application Access Solution. The following explains how to access the adapter metadata in Attunity Studio.
To access adapter metadata
1. Open Attunity Studio.
2. In the Design Perspective Configuration View, expand the Machines folder and then expand the machine where you want to add the data source.
3. Expand the Bindings folder and then expand the binding with the adapter metadata you are working with.
4. Expand the Adapters folder and then right-click the adapter for which you want to manage the metadata.
5. Select Show Metadata View from the shortcut menu.
You can create and edit the following adapter metadata properties:
Adapter Metadata General Properties: Enter and edit information about the adapter, such as the adapter name and the way in which you connect to the adapter. You make these changes in the Design perspective, Metadata view.
Adapter Metadata Schema Records: The input and output record structure for a record in the adapter definition.


Adapter Metadata Interactions: Enter details of an interaction. The interaction Advanced tab is displayed for some adapters only, such as the Database adapter and includes more details about the interaction.

Adapter Metadata General Properties


You can enter and edit information about the adapter, such as the adapter name and the way in which you connect to the adapter. You make these changes in the Design perspective, Metadata view. The following describes how to begin editing general adapter metadata information.
To edit general adapter metadata information
1. In the Attunity Studio Design perspective, Metadata view, expand the Adapters folder.
2. Right-click the interaction that you want to edit, and select Open. The Interaction General Properties editor is displayed.

The following figure shows the Interaction General Properties editor.


Figure 6-16 Interaction General Properties

The following table describes the fields in this tab:


Table 6-9 Adapter Metadata General Tab
Name: The name of the adapter definition. The definition name is usually the same as the name representing the adapter in the binding.
Description: A description of the adapter definition.


Table 6-9 (Cont.) Adapter Metadata General Tab
Version: The schema version.
Header: A C header file with the data structures for the adapter. This header file is used with the C API for applications.
Authentication mechanism: The authentication method. Select from:
kerbv5
none
basic
password
Max Request Size: The maximum size, in bytes, for an XML ACX request or response. Larger messages are rejected with an error.
Max Active connections: The maximum number of machines that can connect to the adapter at a time. By default, this number is unlimited.
Max Idle timeout: The maximum amount of time without activity before the connection is timed out. By default, this is 600 seconds.
Adapter Specifications: The maximum number of simultaneous connections for an adapter (per process).

Adapter Metadata Schema Records


You edit the input and output record structure for a record in the adapter Record Definition Properties editor. The following describes how to create the adapter schema records. For information on how to open the editor to edit existing schema records, see Editing an Existing Schema Definition.
To create adapter schema records
1. Open Attunity Studio.
2. In the Design Perspective Metadata View, expand the Adapters folder.
3. Right-click Schema and select New record.
4. Enter a name for the new record definition.
5. Click OK. The Schema Record editor is displayed. You define field grouping in the Schema Record editor.
6. Click New Field to add new fields for the record, describing the data structure for the adapter.
7. Continue adding fields until the record structure is complete.

The following figure shows the Schema Record editor.


Figure 6-17 Schema Record Editor

The following table describes the fields in this tab.


Table 6-10 Schema Record Tab
Fields list: Defines the single data items within a record. This section has a table with the following three columns:
Name: The name of the field.
Type: The data type of the field. See the Valid Data Types table for a list of the valid data types.
Length: The size of the field, including a null terminator when the data type supports null termination (such as the string data type).
Specifications: Defines specific field properties. To display the properties, select the specific field in the Fields list.

The following table describes the valid data types that can be used when defining these specifications in the Schema Record editor.
Table 6-11 Valid Data Types
Binary, Boolean, Byte, Date, Double, Enum, Float, Int, Long, Numeric[(p[,s])], Short, String, Time, Timestamp


Editing an Existing Schema Definition


This section describes how to open the editor for an existing schema record.
To edit an existing schema record
1. Open Attunity Studio.
2. In the Design Perspective Metadata View, expand the Adapters folder.
3. Expand the Schema.
4. Right-click the record you want to edit and select Open. The Schema Record editor for that record is displayed. You can add fields and make any necessary changes. For information on how to work in this editor, see Adapter Metadata Schema Records.

Adapter Metadata Interactions


You define the details of an interaction, its input and output definitions, and additional information specific to the interaction in the Interaction editor. The following describes how to create interactions. For information on how to open the editor to edit existing interactions, see Editing an Existing Interaction.
To create interactions
1. Open Attunity Studio.
2. In the Design Perspective Metadata View, expand the Adapters folder.
3. Right-click the Interactions folder and select New.
4. Enter a name for the new interaction.
5. Click OK. The Interaction editor is displayed. You define the interaction properties and its input and output properties in this editor.
6. Make any changes in the editor that you need. The table below describes the Interaction editor.

The following figure shows the Interaction editor.


Figure 6-18 Interaction Editor

The following table describes the fields in this tab:


Table 6-12 Adapter Metadata Interaction Tab
Description: A description of the interaction (optional).
Mode: Select the interaction mode from the following:
sync-send-receive: The interaction sends a request and expects to receive a response.
sync-receive: The interaction sends a request and does not expect to receive a response.
async-send: The interaction sends a request that is divorced from the current interaction. This mode is used with events, to identify an event request.
Input/Output record: Identifies the input and output records.
Interaction Specific Parameters: Specific properties for the interaction. When an Interaction Advanced tab is used, this section is not displayed. The parameters shown are for a Legacy Plug Adapter.

Editing an Existing Interaction


This section describes how to open the editor for an existing interaction.
To edit an existing interaction
1. Open Attunity Studio.
2. In the Design Perspective Metadata View, expand the Adapters folder.
3. Expand the Interactions folder.
4. Right-click the interaction you want to edit and select Open.


The Interaction editor for that record is displayed. You can add fields and make any necessary changes. For information on how to work in this editor, see Adapter Metadata Interactions.

Interaction Advanced Tab


In the Interaction editor, click Advanced to open the Interaction Advanced tab. This tab is only available for adapters that support complex interactions, such as the data source adapter. Use this tab to enter advanced details for the interaction. The following figure shows the Interaction Advanced tab.
Figure 6-19 Interaction Advanced Tab

The fields in this tab depend on the specific adapter. In the above screen, the tab is displayed for the Database adapter and enables building or modifying an SQL statement. For information on how this tab relates to the Database adapter, see Adapter Metadata Schema Records.

Working with Procedure Metadata


You use procedure metadata when using Procedure data sources (drivers). See Procedure Data Source Reference for a list of available data sources. Procedure metadata defines the input and output structures that are sent to and returned from a procedure. You can create and import procedure metadata in Attunity Studio. The following sections describe how to work with procedure metadata in Attunity Studio.


Manually Creating Procedure Metadata
Managing Procedure Metadata

Manually Creating Procedure Metadata


The following procedure describes how to create procedure metadata when a COBOL copybook is not available.
To create procedure driver metadata
1. In the Design perspective Configuration view, right-click the procedure for which you want to manage the metadata, and select Show Metadata View. The Metadata tab is displayed with the selected procedure driver displayed.
2. Right-click Procedures, and select New Procedure.
3. Enter a name for the new procedure definition, and click OK. The editor opens with the General properties tabs displayed.
4. Define the metadata as described for the specific procedure driver.

Managing Procedure Metadata


Metadata can be viewed and modified in the Attunity Studio Design perspective Metadata tab.
To edit the metadata
1. Right-click the procedure for which you want to manage the metadata, and select Show Metadata View.
2. Click Procedures to view the defined procedures.
3. Right-click the required procedure (or double-click it) to view or edit its properties. The editor that opens differs depending on which procedure driver you are working with. See the section that describes the driver you are working with for more information:

CICS Procedure Data Source
Procedure Data Source (Application Connector)
Natural/CICS Procedure Data Source (z/OS)

Importing Procedure Metadata


Import wizards are available for the following procedure data sources:

CICS Procedure Data Source
Procedure Data Source (Application Connector)

For other procedures, the metadata is created manually, as described in Manually Creating Procedure Metadata. You start importing procedure metadata in the same way as Importing Data Source Metadata with the Attunity Import Wizard.



7
Handling Arrays
This section describes the methods that Attunity Connect provides for handling arrays. It includes the following topics:

Overview of Handling Arrays
Representing Metadata
Methods of Handling Arrays

Overview of Handling Arrays


Attunity Connect exposes a purely relational front-end through APIs like ODBC, JDBC, and ADO. However, it connects to non-relational Data Sources, which include non-relational data models. As such, Attunity Connect provides a logical mapping that exposes the non-relational constructs in a relational manner. The most prevalent problem in this domain is the issue of arrays, or OCCURS constructs, which is described in this section.

Representing Metadata
Before looking at the different methods of handling arrays, you should understand how metadata is represented in Attunity Studio. The following figure shows an example in COBOL that illustrates arrays and nested arrays. When you import this metadata into Attunity Studio, the import process creates an ADD definition that is equivalent to the original structure, usually mapping the fields one-to-one.


Figure 7-1 Metadata Example in COBOL

The following figure shows how Attunity Studio represents the same data in XML.


Figure 7-2 XML Representation of the COBOL Metadata in Attunity Studio

On the Columns tab, Attunity Studio represents the same metadata as shown in the following figure.


Figure 7-3 Representation of Metadata on the Columns tab in Attunity Studio

Finally, if you run the NavSQL > native STUDENT command from the NavSQL utility, the same metadata is represented as shown in the following figure.


Figure 7-4 Representation of Metadata in the NavSQL Utility

Methods of Handling Arrays


Attunity Studio lets you handle arrays by using one of the following methods:

Columnwise Normalization
Virtual Tables
Virtual Views
Sequential Flattening (Bulk Load of Array Data)
ADO/OLE DB Chapters
XML

Columnwise Normalization
Some small arrays are merely a shorthand notation for a sequence of simple columns. For example, an address field with three lines on the screen may have been represented in a COBOL copybook by an OCCURS clause that is used three times, as a
shorthand notation for writing three columns, Address_1, Address_2, and Address_3.
Figure 7-5 Simple Array in COBOL

This is not a real case of hierarchical dependency of data that needs to be modeled in the relational world. The simplest approach for dealing with this simple class of arrays is to replace them with simple columns during the import process. This process of replacing a small array by a sequence of columns is referred to as columnwise normalization. Columnwise normalization is useful for simple arrays that share any of the following characteristics:

They show a small number of occurrences.
They do not possess a counter.
They contain only one field.

You can choose to normalize an array columnwise in the Import Manipulation panel in Attunity Studio. The following figures show an example.
Figure 7-6 Selecting Columnwise Normalization in the Import Manipulation Panel


Figure 7-7 Flat View Produced by Columnwise Normalization
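As a sketch of the result, using the Address example above with hypothetical data source and column names, the normalized columns can then be queried like any other simple columns:

-- After columnwise normalization, the three OCCURS entries are plain columns
SELECT CUST_NAME, ADDRESS_1, ADDRESS_2, ADDRESS_3
FROM disam:customer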

Virtual Tables
Exposing arrays as virtual tables is a commonly used technique to handle arrays. It generates a virtual table for every array in the parent record, with specially generated virtual fields that connect the parent and the virtual table. For example, an array called course in a table called student is represented by the virtual table STUDENT_COURSE. Attunity Studio generates, updates, and removes virtual tables automatically, according to the status of their parent table, and names them by appending the array name to the parent name. When an array includes another array, the name of the resulting virtual table consists of the parent name, the array name, and the name of the nested array, as follows:
parentName_arrayName_nestedArrayName

The number of nested-array levels is not limited. Virtual tables include the following columns:

The array member fields from the original structure.
A column called _PARENT, which identifies the parent record. This column is a string representation of the parent record's bookmark.
A column called _ROWNUM, which is the ordinal that identifies the array record within the parent table.

The _PARENT and _ROWNUM columns are generated automatically and cannot be updated. Together, they uniquely identify each row in the virtual table. To identify the array field in the parent record when joining parent and child tables, you can use the _PARENT field of the virtual table. You cannot edit the definition of virtual tables; you can only manipulate the parent table. Attunity Studio indicates virtual tables by using a differently colored icon in the Metadata view, as shown in the following figure.


Figure 7-8 Display of Virtual Tables in Attunity Studio

Right-clicking a table in the Metadata view and selecting SQL View opens the SQL view. The following figure shows a sample table that includes the _PARENT and _ROWNUM fields and replaces the entire array ASSIGNMENTS by a varchar(64) field that includes a string representation of the bookmark.
Figure 7-9 SQL View of the Virtual Table STUDENT_COURSE

When you look at an actual record, the same fields (_PARENT, _ROWNUM, and NUMOF_ASSIGNMENTS) are displayed, as shown in the following figure. You can use the ASSIGNMENTS field to join with the STUDENT_COURSE_ASSIGNMENTS virtual table.


Figure 7-10 Actual Record Showing Same Fields as SQL View

The metadata that the ADD stores for the virtual table is only a pointer to the parent table, as indicated by the attribute name, basedOn. It is filled in from the parent table upon first access. You can join the parent and the child by using the courses and _PARENT fields as the join criteria, as shown in the following figure.
Figure 7-11 Joining Parent and Child

Equally, you can join parent, child, and grandchild, as shown in the following figure.
Figure 7-12 Joining Parent, Child, and Grandchild
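Expressed in SQL, the joins shown in the two preceding figures follow the pattern sketched below; it is based on the STUDENT example, and the non-key column names are assumed from the earlier figures:

-- Parent joined to its array virtual table through the bookmark column
SELECT S.FIRST_NAME, C.COURSE_TITLE
FROM STUDENT S, STUDENT_COURSE C
WHERE C._PARENT = S.COURSES

-- Parent, child, and grandchild
SELECT S.FIRST_NAME, C.COURSE_TITLE, A.ASSIGNMENT_TITLE
FROM STUDENT S, STUDENT_COURSE C, STUDENT_COURSE_ASSIGNMENTS A
WHERE C._PARENT = S.COURSES AND A._PARENT = C.ASSIGNMENTS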


When joining a table and an array table, AIS reads every physical record only once, thus maximizing the join's efficiency in terms of I/O. The virtual table method supports all DML operations, with the following limitations:

Arrays without counters (DEPENDING ON clauses) only support UPDATE commands.
INSERT commands ignore the _ROWNUM column and add data at the end.
DELETE commands that remove data from the middle of an array move the following elements up and update the counter automatically. Therefore, if you want to delete all the elements from an array, you cannot perform the following loop:

for (i=1; i<=counter; i++)
    delete from array where _PARENT=x and _ROWNUM=i

This loop does the following:
1. It deletes the first member of the array.
2. It moves up the second member of the array so that it becomes the first member.
3. It deletes the second member of the array, which was originally the third member. It does not delete the original second member because that is now the first member of the array.

Alternatively, you can use either of the following loops:

while (counter>0)
    delete from array where _PARENT=x and _ROWNUM=1

This loop keeps deleting the first member of the array until no member is left to move up. The counter is updated by AIS.

for (i=counter; i>0; i--)
    delete from array where _PARENT=x and _ROWNUM=i

This loop deletes the members of the array from last to first. It is slightly more efficient because it avoids the moving of members.

Note:

Every write operation causes physical I/O on the parent record. If you need to use a lot of DML operations on arrays, consider using the XML approach instead.

Virtual Views
Virtual views are a more recent method of handling arrays in AIS. They eliminate the need for a _PARENT field, replacing it by primary key fields from the parent. This makes the processing of array records easier for most applications. Per binding, you can handle arrays either as virtual tables or as virtual views; you cannot mix the two models in the same binding. However, when you switch between virtual tables and virtual views, you do not need to make any changes to the metadata because they both use the same table definitions in the ADD. During the import procedure, Attunity Studio generates virtual views and names them by appending the array name to the parent name. When an array includes

another array, the name of the resulting virtual table consists of the parent name, the array name, and the name of the nested array, as follows:
parentName_arrayName_nestedArrayName

The number of nested-array levels is not limited. Virtual views include the following:

The array member columns from the original structure.
The fields from the parent's first unique key or all parent fields, depending on the selection during the import process. If all parent fields are included in the virtual view, the parent's indexes are available in the view definition and can be used for efficient optimization strategies.
Note:

Inherited keys lose their uniqueness in the virtual view.

If the view does not include all parent fields, the primary key fields (if the primary key is not the parent's first unique key).
A column called <array>_ROWNUM, which is the ordinal that identifies the array record within the parent table.

The unique key and <array>_ROWNUM columns are generated automatically. Together, they uniquely identify each row in the virtual view and form a unique key. When working with virtual views, consider the following limitations:

Virtual views are read-only.
Virtual views currently do not support arrays within variants that have a selector field.

Including all parent fields in the virtual view greatly reduces the need for performing join operations because this in itself is an implicit join. However, if you do perform a join, it is not as efficient as a join of virtual tables because AIS reads each record twice, not once. This means that you are better advised to use Virtual Tables if you need to perform complex, explicit joins. In general, though, the query processor can devise efficient access strategies because AIS copies all relevant indexes from the parent to the virtual view. Attunity Studio indicates virtual views by using a differently colored icon in the Metadata view, as shown in Display of Virtual Tables in Attunity Studio.
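As an illustration, because the parent key and the <array>_ROWNUM column are part of the view, the array can be read without an explicit join. The sketch below assumes the STUDENT example, with ID as the parent key and COURSE as the array name:

-- The inherited parent key (ID) plus COURSE_ROWNUM uniquely identify each row
SELECT ID, COURSE_ROWNUM, COURSE_TITLE
FROM STUDENT_COURSE
WHERE ID = 1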

Sequential Flattening (Bulk Load of Array Data)


Performing a bulk load of complex data from a non-relational system to a relational database requires a carefully thought-out algorithm that keeps I/O operations at a minimum. In a bulk load scenario, methods such as Columnwise Normalization, Virtual Tables, and Virtual Views require a full scan of the physical file for every single array. An efficient method of performing this task presents a kind of row-wise normalization, called sequential flattening. This method reads all data in the physical file in a single scan. Sequential flattening replaces arrays in a non-relational system by a sequence of rows. It maps all the record fields of the non-relational file to a single table that contains both parent and child records. In this way, sequential flattening enables the reception of a stream of data by using a single SELECT statement.

The sequentially flattened view of a complex table is referred to as a single table. You can choose to create a single table in Attunity Studio by selecting Sequential View during the Metadata Model Selection step.
Figure 7-13 Selecting a Single Table View

The flattened table is called <table>_ST, where <table> is the name of the parent table and ST indicates a single table. For example, if a parent table is called STUDENT, the single table is called STUDENT_ST. The structure of the single table is identical to the original table's structure, except that AIS removes all array dimensions and adds some control fields. When reading a record, AIS performs a tree traversal of the parent and its array hierarchy. Each record in the resulting recordset deals with a specific array member; other arrays are nulled out, and the __LEVEL control field indicates the current array in the tree traversal's hierarchy. The sequentially flattened single table includes the following columns:

The parent fields, that is, the non-repeating fields.
The array fields for all arrays within the parent.
A column called __LEVEL, which indicates the current child level. The value comprises a concatenation of parent and child names.
A column called __SEQUENCE, which indicates the sequence of the single table row in the physical, non-relational table. A value of 1 indicates a parent record.
For each array, an optional column called <array>_ROWNUM, which identifies the row in the array. This column is generated automatically for the array.

The sequentially flattened single table includes the following rows:


A row for each parent record only, without reference to the arrays. All rows with an empty __LEVEL column and a value of 1 in the __SEQUENCE column indicate a parent record only.
A record for each array record.

The following figure represents a non-relational data source with arrays.


Figure 7-14 Non-relational Data Source with Arrays

The following figure shows the metadata that sequential flattening produces for this data source.
Figure 7-15 SQL View of the Single Table's Metadata


The next figure shows the actual single table. It contains a column for each row in the SQL view above.
Figure 7-16 STUDENT_ST with All Parent and Child Records
Details of the first course. The first row shows the course itself, and the next three rows show the course assignments. Once the first course has been retrieved, the next course is retrieved with its assignments. Following all the courses, the next array (BOOK) is retrieved.

These rows belong to a single parent record.

During a bulk load, the rows are read one by one and written to the appropriate target file for each row, based on the __LEVEL field.
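A minimal sketch of such a read, following the STUDENT example above (the column names, and the empty-string test for parent rows, are assumptions; the parent indicator can equally be taken from __SEQUENCE = 1):

-- One sequential scan returns parent rows and array rows together;
-- __LEVEL and __SEQUENCE tell the loader where each row belongs
SELECT __SEQUENCE, __LEVEL, ID, FIRST_NAME, COURSE_ROWNUM, COURSE_TITLE
FROM STUDENT_ST

-- Parent records only
SELECT ID, FIRST_NAME
FROM STUDENT_ST
WHERE __LEVEL = '' AND __SEQUENCE = 1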

ADO/OLE DB Chapters
ADO and OLEDB introduced the concept of an embedded rowset, called a chapter, as part of the API. A chapter is a result set embedded in a single column within the primary resultset that you can drill down to. This model works well with complex legacy data that includes arrays. AIS supports chapters and lets you update them. From a metadata perspective, the array is represented by a chapter column and contains a chapter identifier. As such, the chapter functions as a handle to the data that it links to. Any ADO-based programs, particularly Visual Basic programs, can use chapters to handle arrays, such as the Attunity-provided SQL sample utility, ChapView. The following figure illustrates how ChapView handles chapters.


Figure 7-17 Chapter-Handling in ChapView


Double-clicking this COURSE [CHAPTER] field opens the Chapter - COURSE window. Double-clicking this ASSIGNMENTS [CHAPTER] field opens the Chapter - COURSE/ASSIGNMENTS window.

A look at a Microsoft Visual Basic code snippet using ADO helps in understanding how chapters are embedded.
Figure 7-18 Visual Basic Code Snippet Using ADO
oRST is the main recordset. For the embedded recordset, oRSTChapter is used. However, both are simple recordset objects.

. . .

The embedded recordset is the value of the COURSE column in the main recordset. From here on, everything is simple Visual Basic recordset syntax.


Note:

Chapters are not supported with virtual views and sequential flattening. However, you can use chapters and virtual views in the same Visual Basic program, as needed.

Chapter Handling in Query and Database Adapters


Query and database adapters support chapters. In a way, they open every chapter that they encounter and include the embedded rowset within the resulting XML document. For example, the query select * from student->course produces the output shown in the following figure.
Figure 7-19 Chapters Represented in XML

For more information on chapters, see the OLE DB and ADO documentation.


XML
Forcing non-relational structures that include arrays to fit the relational mould is an endless source of complexity, with many potential efficiency issues. The method of using XML to handle arrays is a way to get around the restrictions of the relational model while maintaining the use of SQL. It makes conventional use of SQL, but instead of specifying a list of columns in the select clause, it transfers the records data to the client application as a BLOB column that contains the XML representation of the data. This method works for both read and write operations. By using XML, you can have a client application write an entire complex structure in a single I/O operation. Approaching the same task with virtual tables would result in multiple I/O operations (typically one I/O operation per array member). To be able to use XML, you need to make sure that the environment property exposeXmlField in the misc section of the binding definition is set to true. By default, this property is set to false. You can use the XML method from any supported client interface, including JDBC, ODBC, and ADO. It is also commonly used in the context of the query or database adapter. If you use a query adapter, the XML documents that result from a select * clause and a select XML clause are almost identical. However, the select XML clause produces a natural XML representation while the select * clause produces a hierarchical rowset representation of the record, as shown in the following figures.


Figure 7-20 Output of a Conventional select * Clause


Figure 7-21 Output of a select xml Clause


Similarly, you can combine DML with XML to insert a complex record with arrays during a single I/O operation. For example, you could run the following query to add another record to the STUDENT table:
<update sql="insert into navdemo:student(xml) values(?)" executionMode="insertUpdateXml">
  <inputParameter type="xml">
    <STUDENT ID="1" FIRST_NAME="JOHN" LAST_NAME="SMITH" DATE_OF_BIRTH="1984-12-01">
      <COURSES COURSE_ID="1" COURSE_TITLE="MATH 1" INSTRUCTOR_ID="1">
        <ASSIGNMENTS ASSIGNMENT_TYPE="QUIZ" ASSIGNMENT_TITLE="DERIVATIVES-1" DUE_DATE="2004-03-14" GRADE="4.0"/>
        <ASSIGNMENTS ASSIGNMENT_TYPE="QUIZ" ASSIGNMENT_TITLE="DERIVATIVES" DUE_DATE="2004-03-14" GRADE="4.5"/>
      </COURSES>
      <BOOKS ISBN="1234" RETURN_DATE="2004-05-02"/>
    </STUDENT>
  </inputParameter>
</update>
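In the read direction, a minimal sketch of the corresponding retrieval, assuming the same xml pseudo-column that the insert statement above writes to (verify the exact form of the select XML clause against the Attunity SQL reference for your version):

-- Each STUDENT record, together with all of its arrays, is returned as one XML document
select xml from navdemo:student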

Making use of XML is recommended when your application environment is Web-based. This method has the lowest overhead and performs better than chapters in Web environments.


8
Using SQL
This section contains the following topics:

Overview of Using SQL
Batching SQL Statements
Hierarchical Queries
Copying Data From One Table to Another
Passthru SQL
Writing Queries
Locking Considerations
Managing the Execution of Queries over Large Tables

Overview of Using SQL


Within an application, you can use the following versions of SQL to access data:

SQL specific to the data source you want to access.
Attunity SQL, which is based on standard ANSI 92 SQL. The full syntax for Attunity SQL is described in Attunity SQL Syntax. Attunity SQL incorporates enhancements, including SQL access to a data source that does not natively support SQL.

Whatever version of SQL you use, you can always use the Attunity SQL extensions to incorporate additional features. For information about customizing the way the SQL is processed, based on the data source being accessed, see Using the Attunity Connect Syntax File (NAV.SYN). You can test SQL interactively to ensure the correct results, using NAV_UTIL EXECUTE.

Batching SQL Statements


You can process multiple queries within an ADO or ODBC application by batching the SQL statements, one after the other, separated by semi-colons (;), as in the following example:
sql1;sql2;sql3

Parameters are passed as a group for all the queries in the same order as they appear in the individual queries.
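For example, a batch built from the demo tables used elsewhere in this guide might look like the following; each statement returns its own result set, and the two parameters are supplied together, in this order:

select n_name from nation;
select c_name from customer where c_nationkey = ?;
select o_orderkey from torder where o_custkey = ?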


You cannot use this syntax to batch SQL statements from a Java application or via the NAV_UTIL EXECUTE utility or ADO Demo application supplied with Attunity Connect.

ADO Notes

Set the Multiple Results connection property to enable multiple results (before executing the compound query):
oConn.Properties("Multiple Results") = 1

The results of the first query are displayed. To see the results of the next query, request NextRecordSet.

Hierarchical Queries
This section contains the following topics:

Generating Hierarchical Results Using SQL
Accessing Hierarchical Data Using SQL
Flattening Hierarchical Data Using SQL
Using Virtual Tables to Represent Hierarchical Data
Hierarchical Queries From an Application

A hierarchical query is a query whose result is a hierarchy of rowsets linked by chapters (see Chapter), reflecting parent-child relationships. For example, Customers and Orders may constitute a hierarchical rowset, with each chapter of the child Orders rowset corresponding to all of the orders of one customer in the parent Customers rowset. Rowsets with arrays of structures as columns (which are supported by certain providers) are modeled such that the rows of an array constitute the children of a column in the containing parent row. Hierarchical queries enable you to do the following:

Arrange rowsets resulting from a query in a hierarchy, reflecting a parent-child relationship. Use nested SELECT statements to do this.
Manipulate data that is stored hierarchically in a data source (such as information stored in arrays in RMS). Currently arrays are supported in the Adabas, CISAM, DISAM, DBMS, Enscribe, RMS, and VSAM drivers. You can handle this type of data in the following ways:

By including the hierarchical data as chapters, reflecting a parent-child relationship.
By flattening the hierarchical data (see Flattening Hierarchical Data Using SQL).
By using virtual tables to represent the data (see Using Virtual Tables to Represent Hierarchical Data). You can use virtual driver columns in a DBMS database to produce a chaptered result.

See Hierarchical Queries From an Application to see how the SQL is incorporated in an application.

Chapter
A chapter is a group of rows within a hierarchy; the chapter constitutes a collection of children of some row and column in a parent rowset. The column in the parent rowset is called a chapter column and contains a chapter identifier. The name of the column is also the name identifying the child rowset (which is meaningful only in the context of the parent rowset).

Generating Hierarchical Results Using SQL


A hierarchical query nests a SELECT statement as one of the columns of the rowset retrieved by a nested SELECT statement. You use braces ({}) to delimit the nesting. This type of query generates a chapter, which enables you to incorporate drill-down operations in the application.

Example
The following hierarchical query produces a child rowset:
SELECT C_name,
  {SELECT O_orderkey,
    {SELECT L_partkey, L_linenumber
     FROM lineitem
     WHERE L_orderkey = O_orderkey} AS items
   FROM torder
   WHERE O_custkey = C_custkey} AS orders
FROM customer

The main (root) rowset has two columns. The second column (orders) is a chapter. The result has a three-tier hierarchical structure as displayed in this figure.
Figure 8-1 Hierarchical SQL Query Producing Child Rowset

In the SQL Utility, click a field in the second column ([CHAPTER]) to display the contents of the chapter. The second column can be opened to display another (child) rowset. This child rowset includes items, a chaptered column that can be opened to display another child rowset (showing the L_partkey and L_linenumber columns for the opened chapter) as displayed in this figure:
Figure 8-2 Hierarchical SQL Query Producing Multiple Child Rowsets


You can display chapters for only one parent row at a time. For example, you can display the set of orders for only one customer at a time. You can perform drill-down operations from an ADO, ODBC and JDBC-based application. For details, see Hierarchical Queries From an Application.

Accessing Hierarchical Data Using SQL


Data stored hierarchically in a data source (such as information stored in arrays in RMS) can be referenced by using -> to denote the parent-child relationship in the source:
FROM parent_name->chapter1->chapter2 [alias]

Or, using an alias for the parent table:


FROM parent_alias->chapter1->chapter2 [alias]

For example, the following hierarchical query uses an alias and produces a list containing the children stored in an array called hchild belonging to each employee:
SELECT emp_id,(SELECT name,age FROM e->hchild) FROM disam:emp_struct e

For more details about nesting a SELECT statement in the FROM clause of another SELECT statement, refer to Flattening Hierarchical Data Using SQL. Without an alias the query lists for each employee all of the children of all of the employees:
SELECT emp_id,(SELECT name,age FROM disam:emp_struct->hchild) FROM disam:emp_struct

The chaptered data is specified as part of the source syntax of the FROM clause. You can use chaptered data:

In an outer SELECT statement to flatten the chaptered data.
Anywhere a FROM clause is used.

Examples
The following examples assume hierarchical data with a parent employees rowset called emp_struct. This rowset has ordinary columns plus a chapter column called hchild (one row for each of the children of this employee). The child rowset hchild itself has (in addition to ordinary columns) a chapter column called hschool (one row for each school attended by this child).
Example 8-1 All Employees' Children

The following query retrieves information about children of all employees:


SELECT * FROM emp_struct->hchild

Access to the hchild rowsets is via the parent rowset emp_struct. Apart from this access, the employee data is not required by the query.

When child rowsets can be accessed as normal tables, it is generally more efficient to access them directly. The presumption in the previous example is that the child information is accessible only through a parent rowset (such as emp_struct). If both the father and the mother of a particular child are employees, this child will appear in two different chapters (once through the father rowset and again

through the mother rowset). This query may therefore return the information for this child twice. If this behavior is undesirable, use the SELECT DISTINCT* syntax.
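For example, to return each child only once in this situation:

SELECT DISTINCT * FROM emp_struct->hchild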
Example 8-2 Children's Schools

The following query counts the number of distinct schools where the children of all employees studied:
SELECT COUNT(DISTINCT school_name) FROM emp_struct->hchild->hschool

This example illustrates a two-level hierarchy, with the child rowset hchild itself serving as a parent of the child rowset hschool.
Example 8-3 Employees with Children

The following query retrieves information about employees who have at least one child:
SELECT * FROM emp_struct E WHERE EXISTS (SELECT * FROM E->hchild)

Note that E->hchild refers to the children of the given employee, not of all employees, because the nested SELECT clause refers to the same rowset as the outer SELECT clause. The emp_struct rowset appears in the same role in both the outer and the inner query, with the name E (whose use is mandatory).
Example 8-4 School Age Children

The following query retrieves information about all children who have attended at least one school:
SELECT * FROM emp_struct->hchild C WHERE EXISTS (SELECT * FROM C->hschool)

The same scope rules as those of example 3 apply: the nested subquery SELECT * FROM C->hschool is executed for the given row of hchild and serves to filter out the children who have not attended any school.
Example 8-5 Multiple Hierarchies

To better understand the scope issues when multiple hierarchies are involved, consider the following queries:
SELECT * FROM emp_struct E WHERE EXISTS (SELECT * FROM E->hchild->hschool)

and:
SELECT * FROM emp_struct->hchild C WHERE EXISTS (SELECT * FROM C->hschool)

The nested subquery looks similar in both of these queries, but in the first query the outer rowset that defines the context is the emp_struct row, while in the second query it is the row of the child of an employee. Thus, the first query specifies employees such that at least one of the employees children went to at least one school, and the scope of the subquery is all schools of all children of this employee. The second query specifies all children that went to school, and the scope of the subquery is all of the schools of this child.

Flattening Hierarchical Data Using SQL


You can produce a flattened view of hierarchical data by embedding a SELECT statement inside the list of columns to be retrieved by another SELECT statement. You

use parentheses to delimit the nesting. This is equivalent to specifying a left outer join (LOJ) between the parent rowset and a child rowset, and the resulting rowset reproduces each row of the parent, combining it with one row for each of its children. The nested SELECT statement can reference a child rowset (using the parent->child syntax) only in its FROM clause.

Using an Alias
In order to list the hierarchical data with the parent data only, you must use an alias for the child data. Without an alias the query lists for each parent row all of the children of all of the parent rows. Compare the following queries:
SELECT emp_id,(select name from employee->hchild)from employee

and
SELECT emp_id,(select name from e->hchild)from employees e

The first query, without an alias, lists for each employee all of the children of all of the employees. The second query uses an alias and produces a list containing the children stored in an array called hchild belonging to each employee.

Examples
The following examples assume hierarchical data with a parent employees rowset called emp_struct. This rowset has ordinary columns plus a chapter column called hchild (one row for each of the children of this employee). The child rowset hchild itself has (in addition to ordinary columns) a chapter column called hschool (one row for each school attended by this child).
Example 8-6 Number of Children in School Per City

The following query retrieves the number of children that study in each city for every city where the company has employees.
SELECT city, (SELECT COUNT(*) FROM emp_struct->hchild->hschool A WHERE A.city = B.city) FROM cities B

This query demonstrates the use of a nested query within a SELECT statement, using tables and not aliases.
Example 8-7 Employee and Child Information

The following query retrieves the employee ID, address, the number and names of children of each employee:
SELECT emp_id, child_counter, (SELECT name FROM E->hchild), Address FROM emp_struct E


This figure displays an SQL query:


Figure 8-3 Example 2 of a Hierarchical SQL Query

Employees who have no children (such as employee 1122 in the example) appear and the corresponding child row is Null (and the output is like that of a left outer join). When more than one child rowset is specified in the query (by including more than one nested SELECT statement), the parent row is reproduced a sufficient number of times to accommodate the largest number of the parents' children, and the data from all children appears in parallel. Thus, child rows are paired randomly, resulting in side-by-side columns for the child rowsets. When one of the child rowsets is out of rows, its columns are padded with NULLs. See example 3, below.
Example 8-8 Employees' Salary and Child Information

For each employee, the following query retrieves the employee ID, the number of the employee's children, the first two salaries of the year, and the names of the children:
SELECT emp_id, child_counter, (SELECT sal FROM E->Sal WHERE month IN ('JAN', 'FEB')), (SELECT name FROM E->hchild) FROM emp_struct E

This figure displays an SQL query:


Figure 8-4 Example 3 of a Hierarchical SQL Query

Note:

Note the padded NULLs in the example.

When multiple levels of hierarchies are involved, the flattening operation is repeated by nesting a SELECT statement inside another nested SELECT statement. Conceptually a left outer join is performed between the parent and child rowsets as above, and then another left outer join is applied to the result and to the grandchild rowset. See Example 4, below.


Example 8-9 Multiple Hierarchical SQL Query

The following query retrieves the employee ID, address, the number and names of the children of each employee and the number of different schools at which each child studied:
SELECT emp_id, child_counter, (SELECT name, (SELECT COUNT(DISTINCT school_name) AS dist_schools FROM C->hschool) FROM E->hchild C) FROM emp_struct E

This figure displays an example of a hierarchical SQL query.


Figure 8-5 Example 4 of a Hierarchical SQL Query

In this example the query at the deepest level of nesting is an aggregate (COUNT DISTINCT), which returns a single value. A request for the names and addresses of the schools would result in the child information being repeated for each school where the child studied.
Example 8-10 Multiple Hierarchical SQL Query

To order the retrieved information according to the employee ID and the child name, use a query like the following:
SELECT emp_id, child_counter, (SELECT name, (SELECT COUNT(DISTINCT school_name) AS dist_schools FROM C->hschool) FROM E->hchild C) FROM emp_struct E ORDER BY emp_id, name

This figure displays an example of a hierarchical SQL query.


Figure 8-6 Example 5 of a Hierarchical SQL Query

Example 8-11 Multiple Hierarchical SQL Query

To order the rowset according to the child name and the number of distinct schools where the child studied, use a query like the following:


SELECT emp_id, child_counter, (SELECT name, (SELECT COUNT(DISTINCT school_name) AS dist_schools FROM C->hschool) FROM E->hchild C) FROM emp_struct E ORDER BY 3, 4

This figure displays an example of a hierarchical SQL query.


Figure 8-7 Example 6 of a Hierarchical SQL Query

You cannot specify ORDER BY inside a nested query. Ordering of the result rowset is done in the main query only. Columns can be referenced by name or by ordinal number. Ordinal numbers are determined by the order the columns appear in the result set.
Example 8-12 Multiple Hierarchical SQL Query

The following query retrieves the number of children that study in each city for every city where the company has employees.
SELECT city, (SELECT COUNT(*) FROM employees->hchild->hschool A WHERE A.city = B.city) FROM cities B

Using Virtual Tables to Represent Hierarchical Data


You can handle arrays and array-like structures (such as periodic group fields in Adabas) in non-relational data sources by converting the array data into a virtual table that resides in memory. The name assigned to the virtual table is a composite, consisting of the parent field name and the array name. For example, if a parent field is called Employee and the array is called Empchild, the virtual table is called Employee_Empchild. When arrays are nested within arrays, the virtual table name includes the names of all the parent fields and array names. For example, if a parent field is called Employee and an array called Empchild includes a nested array called EmpChildSchools, the virtual table name is called Employee_Empchild_EmpChildSchools. A virtual table includes the following columns:

The array fields.
A column called _parent, which contains the bookmark of the parent field. This column is generated automatically for the array and is read-only.
A column called _rownum, which identifies the row in the virtual table. This column is generated automatically for the array and is read-only.


The fields _parent and _rownum together uniquely identify each row in the virtual table. Use the _parent field of the virtual table to identify the array field in the parent record when joining parent and child tables.

Example

A record NATION stores data about countries for a mail order application and includes an array structure REGIONS. The array structure is converted into a virtual table NATION_REGIONS. The following SQL accesses data from the virtual table NATION_REGIONS:
SELECT * FROM NATION N, NATION_REGIONS R WHERE R._PARENT = N.REGIONS

Once a virtual table has been created, it can be used in applications in the same way as any other table. Note that this SQL statement accessing a virtual table is equivalent to the SQL statement using the -> syntax to access the array data in the table:
SELECT * FROM NATION->REGIONS

Creating Virtual Tables


Virtual tables are automatically created for arrays. These tables are displayed in lists of tables when building queries, such as in the Attunity Studio Query Tool. Reference the virtual table in the query as you would reference any other table, using the name of the generated repository entry as the table name. During import of metadata you can flatten an array so that each entry in the array is represented by a field in the table. The recommendation is to use the virtual tables generated by Attunity Connect and not to flatten the arrays, unless there are specific reasons, such as a small static array. You can override the default behavior as follows:
1. Right-click the data source in Attunity Studio and select Edit Data Source.
2. In the Advanced tab, set the Use runtime columnwise flattening property. This property flattens the array within the table by expanding the array into separate fields, with each row in the array displayed with the row number appended to the field name.

An array with the Use runtime columnwise flattening property set is read-only.

Hierarchical Queries From an Application


This section contains the following topics:

Drill-down Operations in an ADO Application
Drill-down Operations in an ODBC Application
ODBC Drill-down Operations Using RDO
ODBC Drill-down Operations Using C
Drill-down Operations in a Java Application

You can use hierarchical queries specified with Attunity SQL to execute drill-down operations on data within the application. How the hierarchical query for drill-down operations is implemented varies according to the application's API:

For ADO applications: Use the standard ADO methods and properties (see Drill-down Operations in an ADO Application below).
For ODBC applications: Use functionality provided with Attunity Connect (see Drill-down Operations in an ODBC Application).
For Java: Use standard Java (see Drill-down Operations in a Java Application).

Drill-down Operations in an ADO Application


Support is provided for hierarchical queries using standard ADO methods and properties. The following code is an example of how to manipulate chapters:
Dim oConn As New ADODB.Connection
Dim oRST As New ADODB.Recordset
Dim oChild As ADODB.Recordset
Dim sSQL As String

' Set active connect
oConn.ConnectionString = "Provider=AttunityConnect;"
oConn.Open
sSQL = "select n_name, {select c_name from customer where n_nationkey = c_nationkey} as Customers from nation"
' Execute SQL with forward only cursor and read only
oRST.Open sSQL, oConn, adOpenForwardOnly, adLockReadOnly
While Not oRST.EOF
    ' Get the Customers chapter
    Set oChild = oRST("Customers").Value
    While Not oChild.EOF
        ' Code that manipulates the chapter resultset
        oChild.MoveNext
    Wend
    ' Release chapter
    Set oChild = Nothing
    oRST.MoveNext
Wend
Set oRST = Nothing
Set oConn = Nothing

Drill-down Operations in an ODBC Application


Support is provided for hierarchical queries from both C and RDO applications, as follows:

- ODBC Drill-down Operations Using RDO: In RDO you use standard methods and properties for rdoConnection, rdoQuery, rdoResultset and rdoColumn objects and their collections.
- ODBC Drill-down Operations Using C: In C you use standard ODBC APIs with standard arguments.

Additionally, a number of functions are provided that you can incorporate in an application in order to utilize hierarchical queries. These functions use new descriptor types that have been added to the ODBC API SQLColAttributes: SQL_COLUMN_IS_CHAPTER_ and SQL_COLUMN_ORDINAL_.


- To determine which column is a chapter, use the descriptor type SQL_COLUMN_IS_CHAPTER_.
- To determine the ordinal position for any column, use the descriptor type SQL_COLUMN_ORDINAL_.

ODBC Drill-down Operations Using RDO


Support is provided for hierarchical queries in RDO applications. In addition to the standard methods and properties for rdoConnection, rdoQuery, rdoResultset and rdoColumn objects and their collections, the following functions are provided:

- GetChapterInfo: Returns chapter information. For details, see GetChapterInfo (RDO).
- OpenEmbeddedRowset: Opens an embedded rowset for a parent chapter column. For details, see OpenEmbeddedRowset (RDO).

The following code is an example of how to manipulate chapters in RDO:


Dim oConn As New rdoConnection    ' RDO Connection
Dim oRST As rdoResultset
Dim rsChild As rdoResultset       ' Resultset for chapter
Dim quChild As New rdoQuery       ' Query for chapter (required by OpenEmbeddedRowset)
Dim sSQL As String

' ///// RDO Chapter support
Dim IsChapterCol() As Long        ' An array which indicates that a specific column is a chapter
Dim cChaptCols As Long            ' Number of chapter columns
Dim sCursorName As String         ' Cursor Name

' Set active connect, Attunity-Demo is the DSN
Set oConn = rdoEnvironments(0).OpenConnection("Attunity-Demo", RDO.rdDriverNoPrompt, False, "UID=;PWD=;")
' SQL statement
sSQL = "select n_name, {select c_name from customer where n_nationkey = c_nationkey} as Customers from nation"
' Execute SQL with forward only cursor and read only
Set oRST = oConn.OpenResultset(sSQL)
' Redims according to the number of columns
ReDim IsChapterCol(oRST.rdoColumns.Count - 1)
' Calling Attunity Connect help routine to set the necessary info
GetChapterInfo oRST, sCursorName, IsChapterCol(), cChaptCols
While Not oRST.EOF
    ' Open Customers chapter - lock type of child is derived from parent
    OpenEmbeddedRowset sCursorName, oRST("Customers"), oConn, quChild, rsChild, oRST.LockType
    If rsChild Is Nothing Then
        ' Failed to open chapter
        MsgBox "Failed to open the chapter!"
        Exit Sub
    End If
    While Not rsChild.EOF
        ' Code that manipulates the chapter resultset
        rsChild.MoveNext
    Wend
    ' Release chapter
    Set rsChild = Nothing
    oRST.MoveNext
Wend
Set oRST = Nothing
Set oConn = Nothing

Notes:

- Before you can open any chapter in a parent resultset, you need to know which columns in the resultset are chapters. You save the ordinal positions of these columns and the cursor name of the parent statement in order to bind the parent column to child rowsets.
- To identify a chapter column, use the Attunity Connect RDO function GetChapterInfo. If cChaptCols > 0 is returned by GetChapterInfo, the parent resultset includes chapter columns. The information about child rdoResultset objects and parent chapter columns is set in a special array.
- Go to the needed row in the parent rdoResultset by using the oRST.MoveFirst, oRST.MoveNext or oRST.Move methods.
- Call the parent chapter column using the OpenEmbeddedRowset function.
- If you want to use the Requery method on the child rdoResultset objects, save information about the embedded rowsets in an array. Use a user structure similar to the following:

  Type TypeChildResultset
      clParentOrdinal As Integer   ' Ordinal position of parent chapter column
      rsChild As rdoResultset
      quChild As New rdoQuery
  End Type

  You need to save only the ordinal position of a parent chapter column. You can refer to any rdoColumn object using the following syntax: rsParent.rdoColumns(clParentOrdinal - 1).
- Close all child rdoResultset objects before closing the parent rdoResultset object.


GetChapterInfo (RDO)

GetChapterInfo returns the following chapter information:


- The number of chapter columns in a parent resultset.
- The ordinal positions of these columns.
- The cursor name of the parent statement (in order to bind the parent column with child rowsets).

GetChapterInfo returns TRUE if successful.

Syntax
GetChapterInfo rsParent, sCursorName, isChapter(), cChaptCols

where

- rsParent (input): A parent rdoResultset object.
- sCursorName (output): A string containing the parent cursor name.
- isChapter() (output): An array of long isChapter flags for each column of the parent resultset. The flag is set to 1 for a chapter column and unset for any other column. You can use the isChapter array with each column of the parent rdoResultset to determine whether the current column is a chapter. The isChapter flag is set by calling, for each column of the parent rdoResultset, the ODBC API SQLColAttributes with the additional descriptor type SQL_COLUMN_IS_CHAPTER_. If a chapter column exists, the cursor name is retrieved by calling the ODBC API SQLGetCursorName.

- cChaptCols (output): A long value that returns the number of chapter columns in the parent resultset.

In its pfDesc argument, SQLColAttributes returns 1 for chapter columns and 0 for non-chapter columns.

OpenEmbeddedRowset (RDO)

OpenEmbeddedRowset opens an embedded rowset for a parent chapter column. This method does not return data from the embedded rowset.

Syntax:

OpenEmbeddedRowset sCursorName, clParent, cn, quChild, rsChild

where

- sCursorName (output): A string with the parent cursor name.
- clParent (input): An rdoColumn object representing the parent chapter column.
- cn (input): An rdoConnection object.
- quChild (input/output): A child rdoQuery object. The first time the chapter is opened for this parent column, quChild is NULL.
- rsChild (input/output): A child rdoResultset object. The first time the chapter is opened for this parent column, rsChild is NULL.

ODBC Drill-down Operations Using C


An example C program that manipulates chapters is included in the Attunity Server installation, in the Sample\OdbcDemo directory under the directory where Attunity Server is installed.

For HP NonStop Platforms: The sample program is in the subvolume where Attunity Server is installed.


For z/OS Platforms: The sample program is in NAVROOT.SAMPLES.ODBCDEMO, where NAVROOT is the high-level qualifier of the Attunity Server installation.

When handling chapter data, note the following:

- Before you can open any chapter in a parent resultset, you need to know which columns in the resultset are chapters. You save the ordinal positions of these columns and the cursor name of the parent statement in order to bind the parent column to child rowsets, using the Attunity-provided function GetChapterInfo. GetChapterInfo returns 1 when the parent resultset includes chapter columns.
- Open each chapter using the Attunity OpenEmbeddedRowset function. When you call this function with the prepared child statement, the function changes only the active chapter bookmark and calls SQLExecute.
Note: Free all child statements before freeing the parent statement.

GetChapterInfo (C)
GetChapterInfo returns the following chapter information:

- The number of chapter columns in a parent resultset.
- The ordinal positions of these columns.
- The cursor name of the parent statement (in order to bind it with child rowsets).

GetChapterInfo returns 1 if successful.

Syntax
int GetChapterInfo(SQLHENV henv, SQLHDBC hdbc, SQLHSTMT hstmt, UCHAR* szCursorName, COLDESC *pColDesc, SWORD cCols, long *cChapters)

where:

- henv (input): The ODBC Environment handle.
- hdbc (input): The ODBC Connection handle.
- hstmt (input): The ODBC SQL handle.
- szCursorName (input/output): The parent cursor name. Allocate szCursorName before calling GetChapterInfo.
- pColDesc (input/output): An array of descriptor information for all columns.
- cCols (input): The number of columns in the resultset.
- cChapters (output): The number of chapter columns in the parent resultset.

To save standard and additional descriptor information for each column, use the Attunity user structure stColumnDescriptor (see stColumnDescriptor User Structure (C)).


stColumnDescriptor User Structure (C)


stColumnDescriptor is a user structure to save standard and additional descriptor information for each column:

typedef struct stColumnDescriptor {
    UDWORD   cbPrec;                      /* Precision of the column */
    SWORD    iCol;                        /* Column number */
    UCHAR    szColName[MAX_COLNAME+1];    /* Column name */
    SWORD    fSQLType;                    /* SQL data type */
    char     szTypeName[MAX_COLNAME+1];   /* Name of the SQL type */
    SWORD    fSQLCType;                   /* C data type */
    SWORD    cbScale;                     /* Scale of the column */
    SWORD    fNullable;                   /* Indicates if column allows NULL values */
    SDWORD   iLength;                     /* Length in bytes of the column */
    PTR      rgbValue;                    /* Buffer used in SQLGetData */
    SDWORD   dataOffset;                  /* Data offset in rgbValue buffer */
    SDWORD   cbValueMax;                  /* Maximum length of rgbValue */
    SDWORD   *cbValue;                    /* SQL_NULL_DATA or total number of bytes
                                             available to return in rgbValue */
    SDWORD   iChapterOrdinal;             /* Save chapter ordinal position for chapter
                                             columns, 0 for other columns */
    SQLHSTMT hstmtSon;                    /* hstmt handle for child rowset binding
                                             with this column */
} COLDESC;

OpenEmbeddedRowset (C)

OpenEmbeddedRowset opens a chapter (embedded rowset). This function does not return data from the embedded rowset.

Syntax:

int OpenEmbeddedRowset(SQLHENV henv, SQLHDBC hdbc, SQLHSTMT hstmtParent, UCHAR* szParentCursorName, COLDESC *pOneColDesc)

where:

- henv (input): The ODBC Environment handle.
- hdbc (input): The ODBC Connection handle.
- hstmtParent (input): The parent ODBC SQL handle.
- szParentCursorName (input): The parent cursor name.
- pOneColDesc (input/output): The pointer to the descriptor information of the parent chapter column.

Drill-down Operations in a Java Application


Support is provided for hierarchical queries using the OTHER type and getObject of the JDBC driver. The following code is an example of a method to determine whether a column is a chapter column:
public static boolean IsChapter(ResultSetMetaData parentRsmd, int column)
        throws SQLException {
    if (parentRsmd.getColumnType(column) == java.sql.Types.OTHER) {
        String sTypeName = parentRsmd.getColumnTypeName(column);
        return sTypeName.equalsIgnoreCase("CHAPTER");
    } else {
        return false;
    }
}

This code checks both the column type and the column type name. In the code you need to check only one of them. To retrieve a chapter column, call getObject and cast it to a ResultSet as follows:
ResultSet rsChild = (ResultSet)parentRs.getObject(column);

Because of the need to cast the returned object, first make sure that the column is a chapter column.
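The following is a minimal sketch (not from the original guide) that ties these pieces together: it walks a parent result set, uses the IsChapter check shown above to find chapter columns, and drills into each chapter with getObject. The hierarchical query is the one used elsewhere in this section; the open Connection named con and the printed columns are assumptions.

// Minimal drill-down sketch; assumes "con" is an open AIS JDBC connection
// and that IsChapter (above) is available in the same class.
public static void drillDown(Connection con) throws SQLException {
    String sql = "select n_name, {select c_name from customer "
               + "where n_nationkey = c_nationkey} as Customers from nation";
    try (Statement stmt = con.createStatement();
         ResultSet parentRs = stmt.executeQuery(sql)) {
        ResultSetMetaData md = parentRs.getMetaData();
        while (parentRs.next()) {
            for (int col = 1; col <= md.getColumnCount(); col++) {
                if (IsChapter(md, col)) {
                    // Retrieve the chapter column and cast it to a ResultSet
                    ResultSet rsChild = (ResultSet) parentRs.getObject(col);
                    while (rsChild.next()) {
                        // Code that manipulates the chapter resultset
                        System.out.println("  " + rsChild.getString(1));
                    }
                } else {
                    System.out.println(md.getColumnName(col) + " = " + parentRs.getString(col));
                }
            }
        }
    }
}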

Copying Data From One Table to Another


Using the SQL SELECT statement in an INSERT statement enables data from one table to be copied into another table. The tables can be in different data sources, as long as the data types of data retrieved by the SELECT statement match the data types of the columns inserted in the table. For example:
insert into oracle:employees select * from disam:emp
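Such an INSERT ... SELECT statement is submitted through the client interfaces like any other non-returning statement. The following JDBC sketch runs the example above; the open Connection named con is an assumption, and the oracle and disam data source names are those used in the example.

// Copy all rows from disam:emp into oracle:employees in a single statement.
// Assumes "con" is an open AIS JDBC connection whose binding defines both data sources.
try (Statement stmt = con.createStatement()) {
    int rows = stmt.executeUpdate("insert into oracle:employees select * from disam:emp");
    System.out.println(rows + " rows copied");
}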

Passthru SQL
SQL UPDATE, INSERT, DELETE, DDL, and SELECT statements can be passed directly to a relational data source, instead of being processed by the Query Processor. A data retrieval query can include joins where the data from one or more of the data sources is processed directly by the data source instead of the Query Processor. This is particularly useful when dealing with a data source whose commands are not in standard SQL and whose data you want to join with data from other data sources.

HP NonStop Platforms: When specifying a passthru query to an HP NonStop SQL/MP database, if the query is not within a transaction, you must append the words BROWSE ACCESS at the end of the query.

For statements that do not return rowsets, you can pass SQL directly to the data source in one of the following ways:

- For a Specific SQL Statement: For a specific SQL statement from within the application.
- For all SQL During a Session: For all SQL during a session from within the application.

All SQL statements (both statements that do not return rowsets and statements that do return rowsets) can be passed as part of the SQL itself. This is particularly useful when dealing with a non-SQL data source, whose data you want to join with other SQL data. Passthru queries are not supported for complex objects, such as BLOBs.


For a Specific SQL Statement


You can bypass the Query Processor for SQL statements that do not return rowsets for specific queries using the following:

- Via ADO
- Via RDO and DAO

Via ADO
To enable passthru SQL for a specific statement, use the Command object and pass it the Operating_Mode parameter set to Passthru. All SQL executed through this Command object bypasses the Query Processor. To channel the SQL through the Query Processor again, reset the Operating_Mode parameter to NULL.

Example

The following example code shows an ADO connection to an Oracle database. Using the Command object, individual lines of SQL can be passed directly to the database. You specify the command with the Operating_Mode parameter set to Passthru. All SQL, until the Operating_Mode parameter is reset to NULL, subsequently bypasses the Query Processor.
Public cmd As Object
Public cmd1 As Object
Public conn As Object
Public conn1 As Object

Private Sub Bypass_Qpex2()
    Dim rs As New ADODB.Recordset
    Dim conn As New ADODB.Connection
    Dim conn1 As New ADODB.Connection
    Dim cmd As New ADODB.Command
    Dim cmd1 As New ADODB.Command

    ' --------------------------------------
    ' An example of using a Passthru Command
    ' --------------------------------------
    conn1.ConnectionString = "Provider=AttunityConnect"
    conn1.Open
    conn1.DefaultDatabase = "Oracle"
    cmd1.ActiveConnection = conn1
    cmd1.Properties("Operating_Mode") = "Passthru"
    cmd1.CommandText = "ALTER TABLE mytbal ADD new_column INTEGER"
    cmd1.Execute

    ' -------------------------------------------
    ' Resetting the Passthru mode for the command
    ' -------------------------------------------
    cmd1.Properties("Operating_Mode") = ""
    Set cmd1 = Nothing
    conn1.Close
    Exit Sub
End Sub


Via RDO and DAO


You can pass SQL directly to the data source using either RDO or DAO. With DAO, use the ODBCDirect workspace to pass a query directly to the data source. Use the SQLGetConnectOption and SQLSetConnectOption APIs and a long constant set to 1000. Pass a value of 1 as the last parameter of these APIs if you want to pass the SQL directly to the data source. When you open the connection to the data source, use the hDbc property, which returns a Long value containing the ODBC connection handle created by the ODBC driver manager corresponding to the specified Connection object.

Example

The following example code highlights the statements that enable passthru queries and that execute two SQL statements in passthru mode (creation and deletion of a table). The code does not show other statements and functions (such as the connection function) that would also be required.
Declare Function SQLGetConnectOption Lib "odbc32.dll" (ByVal hstmt&, ByVal fOption%, pvParam As Any) As Integer
Declare Function SQLSetConnectOption Lib "odbc32.dll" (ByVal hstmt&, ByVal fOption%, pvParam As Any) As Integer

Const SQL_STMT_MODE_ As Long = 1000

Public sCreateSQL As String
Public sDropSQL As String
Public rdoCn As New rdoConnection

Public Sub PassThrough(hDbc As Long)
    Dim rc As Integer
    Dim UsePassThrough As Long

    ' --------------------------------------
    ' Set Passthru mode
    ' --------------------------------------
    UsePassThrough = 1
    rc = SQLSetConnectOption(hDbc, SQL_STMT_MODE_, ByVal UsePassThrough)
    rc = SQLGetConnectOption(hDbc, SQL_STMT_MODE_, UsePassThrough)

    ' --------------------------------------
    ' Execute Passthru queries
    ' --------------------------------------
    sCreateSQL = "create table tat(c number(8))"
    sDropSQL = "drop table tat"
    rdoCn.Execute sCreateSQL
    rdoCn.Execute sDropSQL

    ' --------------------------------------
    ' Reset the Passthru mode
    ' --------------------------------------
    UsePassThrough = 0
    rc = SQLSetConnectOption(hDbc, SQL_STMT_MODE_, ByVal UsePassThrough)
    rc = SQLGetConnectOption(hDbc, SQL_STMT_MODE_, UsePassThrough)
End Sub


For all SQL During a Session


You can bypass the Query Processor for SQL statements that do not return rowsets when using ADO, RDO and DAO, ODBC, or JDBC APIs. The method involved for RDO and DAO is the same method required if you want to pass a specific SQL statement directly to the data source, as described in For a Specific SQL Statement. For JDBC, you specify the Passthru parameter in the connection string, described in the JDBC Connection String section.
Note: Attunity does not recommend using this option, since it affects every DDL SQL statement during the session, even if only some statements were intended.
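For JDBC, a session-level passthru connection might be opened as in the sketch below. The connection URL is a placeholder and the exact attribute syntax should be taken from the JDBC Connection String section; only the fact that a Passthru parameter is specified in the connection string comes from this guide.

// Hypothetical sketch: open a JDBC session in passthru mode so that statements
// bypass the Query Processor. The URL format below is illustrative only.
String url = "jdbc:attconnect://myhost:2551;DefTdpName=ORACLE;Passthru=1"; // placeholder
try (Connection con = DriverManager.getConnection(url);
     Statement stmt = con.createStatement()) {
    // Sent to the Oracle backend as-is for the duration of the session
    stmt.execute("ALTER TABLE mytbal ADD new_column INTEGER");
}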

Via ADO/OLE DB
Use the Connection object and pass it the Operating_Mode parameter set to Passthru in order to enable passthru SQL. All SQL during this connection will bypass the Query Processor.

Example

The following example code shows an ADO connection to an Oracle database. The connection is specified with the Operating_Mode parameter set to Passthru.
Public cmd As Object
Public cmd1 As Object
Public conn As Object
Public conn1 As Object

Private Sub Bypass_Qpex1()
    Dim rs As New ADODB.Recordset
    Dim conn As New ADODB.Connection
    Dim conn1 As New ADODB.Connection
    Dim cmd As New ADODB.Command
    Dim cmd1 As New ADODB.Command

    ' -----------------------------------------
    ' An example of using a Passthru Connection
    ' -----------------------------------------
    conn.ConnectionString = "Provider=AttunityConnect; Operating_Mode=Passthru"
    conn.Open
    conn.DefaultDatabase = "Oracle"
    cmd.ActiveConnection = conn
    cmd.CommandText = "ALTER TABLE mytbal ADD new_column INTEGER"
    cmd.Execute
    Set cmd = Nothing
    conn.Close
End Sub

Via ODBC
Pass connection information to the SQLDriverConnect method, including the Passthru parameter.

For Windows Platforms: Use the Microsoft ODBC Data Source Administrator with the Attunity ODBC connection wizard. Create or edit a DSN and then select the Batch update passthru check box in the Remote Server Binding page.


See also: Creating an ODBC Connection.


Figure 8-8 ODBC Data Source Administrator

Example
ConnectString = "DRIVER=Attunity Connect Driver; DefTdpName=ORACLE;Binding=nav;Passthru=1;"

Passthru Queries as Part of an SQL Statement


Both retrieval statements and non-returning rowset statements (such as DDL) can be passed directly to the data source as part of the SQL syntax. To bypass the Query Processor, include the query that you want to send directly to the data source in double braces ({{query}}), prefixed with the keyword TEXT=. Prefix all table names with the data source name specified in the binding configuration. For retrieval queries, the passthru syntax is part of the FROM clause of a SELECT statement.

Examples

A non-returning result:
oracle:TEXT={{CREATE TABLE employee (emp_num number(5) NOT NULL, emp_name varchar2(32))}}

As part of a SELECT statement:


SELECT * FROM disam:nation, disam1:TEXT={{SELECT * FROM customer WHERE c_nationkey = ? AND c_custkey = ?}} (7,100)

where disam and disam1 are data sources defined in the binding configuration. The query to the disam1 database is passed directly to the database, bypassing the Query Processor. Note the use of parameters in the example. You can also specify parameters in a non-returning rowset query (see Passthru Query Statements (bypassing Query Processing) for the syntax).

Standard ANSI 92 SQL has been extended so that expressions involving columns of tables that appeared previously in the FROM list can be used (such as from the nation table in the above example).

HP NonStop Platforms: When specifying a passthru query to an HP NonStop SQL/MP database, if the query is not within a transaction, you must append the words BROWSE ACCESS at the end of the query.
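Because the passthru syntax is ordinary SQL text, it can be submitted through any client interface. The JDBC sketch below reuses the two examples above; the open Connection named con is an assumption, and executing the non-returning form through Statement.execute is an illustrative choice rather than a documented requirement.

// Minimal sketch: send passthru text directly to the backend data sources.
// Assumes "con" is an open AIS JDBC connection whose binding defines oracle, disam, and disam1.
try (Statement stmt = con.createStatement()) {
    // Non-returning passthru statement, executed by the Oracle backend
    stmt.execute("oracle:TEXT={{CREATE TABLE employee (emp_num number(5) NOT NULL, emp_name varchar2(32))}}");

    // Retrieval passthru as part of the FROM clause of a SELECT
    try (ResultSet rs = stmt.executeQuery(
            "SELECT * FROM disam:nation, disam1:TEXT={{SELECT * FROM customer "
          + "WHERE c_nationkey = ? AND c_custkey = ?}} (7,100)")) {
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
    }
}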

Writing Queries Using SQL


Use standard SQL to access both relational and non-relational data. For example, a user on a PC can issue an SQL join across IMS/DB and VSAM, as well as across relational databases such as Oracle or Sybase. SQL extensions have the following rules and limitations:

- You cannot use reserved keywords in SQL for table and column names (see Reserved Keywords).
- The following table displays SQL query size limitations:

  Table 8-1 SQL Query Size Limitations

  Limitation                                         Maximum Length
  Length of an identifier (table or column name)     64
  Length of a string (1)                             350
  Number of parameters in a query                    50
  Level of nesting of function calls                 20
  Level of nesting of nested queries                 10
  Length of a LIKE mask operand                      255
  Length of a comment line                           350

  (1) The length can be modified in one of the following ways: by specifying a value for the tokenSize parameter within the <queryProcessor> group in the Attunity Server environment, or by specifying a question mark (?) instead of the string and, when prompted for the data type of the value, specifying C (for cstring).

- Comments can be included as part of the SQL. Comments are bounded by /* and */. If a comment is greater than the limit of 350 characters, break it up over a number of lines.
- Quotation marks ("") or square brackets ([]) can be used to quote identifiers such as table names and column names.
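As a brief, purely illustrative sketch of the last two points, the following JDBC fragment submits a query containing a /* ... */ comment and a bracket-quoted identifier; the table and column names, and the open Connection named con, are assumptions rather than objects defined by this guide.

// Illustrative only: a comment and a bracket-quoted identifier in a query string.
String sql = "SELECT [emp_name] FROM disam:emp /* quoted identifier and comment */ WHERE emp_num > 100";
try (Statement stmt = con.createStatement();
     ResultSet rs = stmt.executeQuery(sql)) {
    while (rs.next()) {
        System.out.println(rs.getString(1));
    }
}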

Writing Efficient SQL


The following tips can improve the efficiency of queries:

- Use views and stored query procedures (CREATE VIEW Statement and CREATE PROCEDURE Statement).
- Use forward-only and read-only cursors.
- Batch SQL statements together in a single statement. For details, see below.


- Use the HINT clause, specifying the optimization strategy you want used for the query (see Attunity SQL Syntax).

You can test connections and SQL interactively using NAV_UTIL EXECUTE.

Locking Considerations
This section includes the following topics:

- Locking Modes
- ODBC Locking Considerations
- ADO Locking Considerations

Locking Modes
SQL UPDATE and DELETE statements are automatically executed in pessimistic locking mode. With SQL SELECT statements, the following locking modes are supported:

- Optimistic Locking
- Pessimistic Locking
- No Locking

With chapters, child rowsets cannot be updated if the parent rowset is locked. Refer to the specific driver for additional locking information.

Optimistic Locking
With optimistic locking, records are locked just before an update operation. Before locking the row, Attunity Connect checks that another user hasn't changed the specified data. Optimistic locking has the following advantages over pessimistic locking:

- Performance is improved when you use optimistic locking.
- Concurrency is ensured.

Pessimistic Locking
With pessimistic locking, records are locked as they are read. Pessimistic locking is slower than optimistic locking. When connecting to data via Microsoft Jet or SQL Server, you must open a transaction before issuing the query.

No Locking
With no locking, records are not locked and are read-only.

ODBC Locking Considerations


Locking is established using SQLSetStmtOption with fOption = SQL_CONCURRENCY specified. The following locking values are available for vParam:


- SQL_CONCUR_VALUES: Optimistic locking (the locking itself is done during SQLSetPos if fLock = SQL_LOCK_EXCLUSIVE is specified).
- SQL_CONCUR_ROWVER: This is treated as if you specified SQL_CONCUR_VALUES.
- SQL_CONCUR_LOCK: Pessimistic locking.
- SQL_CONCUR_READ_ONLY: Read-only mode. This is the default value.

Example

To enable pessimistic locking:


SQLSetStmtOption(hstmt, SQL_CONCURRENCY, SQL_CONCUR_LOCK)

ADO Locking Considerations


Locking is established using the LockType property of the Recordset object. The following locking values are available:

- adLockReadOnly (default): Read-only mode.
- adLockPessimistic: Pessimistic locking.
- adLockOptimistic: Optimistic locking.
- adLockBatchOptimistic: Optimistic batch updates. Required for batch update mode as opposed to immediate update mode.

These locking values should be used with ADO version 2.1 and higher.

Managing the Execution of Queries over Large Tables


Query governing enables you to manage the way queries are executed. Query governing parameters are defined at the workspace level. All defined restrictions apply to all queries for all data sources that require Attunity metadata and that are defined in the binding associated with the workspace. Query governing is defined in the General tab of the Workspace editor in Attunity Studio. For more information, see the description of the General tab in Editing a Workspace. The workspace governing parameters apply only to data sources that require Attunity metadata.


Figure 8-9 Query Governing Restrictions

The following workspace governing parameters can be defined:

- Max Number of Rows in a Table That Can Be Read: This parameter restricts the number of table rows that are read in a query. When the number of rows read from a table exceeds the number stated, the query returns an error.
- Max Number of Rows Allowed in a Table Before Scan is Rejected: This parameter restricts the number of table rows that can be scanned. This parameter affects the query both during query optimization and execution, as follows:
  - During query optimization: The value set is compared to the table cardinality. If the cardinality is greater than the value, the scan strategy is ignored as a possible strategy (unless it is the only available strategy).
  - During query execution: A scan is limited to the value set. When the number of rows scanned exceeds the number stated, the query returns an error.

You must refresh the daemon and reload the configuration after changing values in the WS Governing tab.

Optimizing Outer Joins


Attunity's query optimizer delegates queries, or parts of them, directly to the backend database whenever possible, to improve execution performance. Outer joins (both LOJ and ROJ) are sent to the database directly by the query optimizer whenever possible, so that the query processor (QP) does not have to carry out the join. This improves performance. This section includes the following topics:

- Limitations
- Query Optimization
- Property



Limitations
To optimize a query and select the largest part of a query that can be sent to the backend, the optimizer reorders the joined tables. The optimizer follows these outer join rules when reordering the tables, to maintain outer join semantics:

- Tables on the right side of a left outer join cannot be moved up the LOJ or to the LOJ's left side.
- Tables from the left side of a left outer join that are not included in the outer join condition can be moved and joined to the outer join result.
- Tables from the left side of a left outer join that are included in the outer join condition cannot be joined to the outer join result.

Query Optimization
The query optimizer attempts to send the greatest possible part of a query to the database on the backend instead of to the query processor. This can be achieved easily on joins that are executed on tables from the same database. Attunity Connect can execute outer joins on two or more databases; however, these queries must be executed by the Attunity query processor.

Property
The property noLojDelegation is set to false by default, which indicates that the Optimizer tries to delegate outer joins to the backend database. When this property is set to true, outer joins are always performed by the Attunity query processor even if part of the LOJs can be delegated to the backend. This is helpful for troubleshooting problems with outer join optimization or to preserve legacy optimization.

Changing the Property


You set the noLojDelegation property in Attunity Studio, under the binding configurations. This property is in the Optimization category. For more information, see the AIS reference section for bindings.

To change the noLojDelegation property

1. Open the Design Perspective in Attunity Studio.
2. From the Configuration view on the left side, expand the machine folder.
3. Expand the machine with the binding that has the optimizer settings you want to change.
4. Right-click the binding that has the optimizer settings you want to change and select Edit Binding. The Binding editor opens on the right of the screen with the Properties tab open.
5. From the Environment Properties list, expand Optimizer.
6. Click the right side of the Value column for the noLojDelegation property and select true from the drop-down list.
7. Save the changes.


9
Working with Web Services
This section contains the following topics:

- Web Services Overview
- Preparing to use Web Services
- Deploying an Adapter as a Web Service
- Undeploying Web Services
- Viewing Web Services
- Logging Web Service Activities

Web Services Overview


A Web service is a standard way of accessing Web-based applications. It uses the XML, SOAP, and WSDL standards: XML is used to tag the data, SOAP to transfer it, and WSDL to describe the available services. Web services let applications from different sources communicate with each other. Web services are not platform-dependent, which means that any client computer can access the same Web-based applications no matter what platform the client is running. Any AIS application adapter using one or more of its interactions can be deployed as a Web service.

Preparing to use Web Services


This section explains how to set up your system so that Attunity Studio can work with Web services. It contains the following topics:

- Web Services Prerequisites
- Setting up Attunity Studio to Work with Web Services

Web Services Prerequisites


Make sure that your system has the following:

- Apache Tomcat Web server, version 4.1
- The axis.war file
- JDK 1.4. You must make sure that this full Java SDK is installed. Apache Tomcat will not work if only the Java Runtime Environment (JRE) is installed.

Setting up Attunity Studio to Work with Web Services


To enable Attunity Studio to work with Web services, carry out the following tasks:

- Install JDK version 1.4x on your computer. Apache Tomcat will not work with the Java Runtime Environment (JRE) only. You can download JDK 1.4 from http://tomcat.apache.org/download-41.cgi.
- Install Apache Tomcat version 4.1. Attunity Studio will only work with this version of Tomcat. Download Apache Tomcat from http://tomcat.apache.org/download-41.cgi. It is best to use the zip file. Extract the contents of the zip file to a folder on your main root drive, such as Program Files.
- Copy the axis.war file to the webapps folder of the Tomcat root folder. For example, C:\Program Files\Apache\tomcat\webapps. This file should be supplied with your Attunity Studio installation. If not, you can download it from the Apache site.
- Run startup.bat from the tomcat\bin folder. Depending on where you installed your Apache Tomcat, the actual path will look something like C:\Program Files\Apache Software Foundation\Tomcat 4.1\apache-tomcat-4.1.34-LE-jdk14\bin.
- Test that Tomcat is running. Open a browser and enter the following URL: http://localhost:8080. If you changed your Tomcat default port to another number, for example, 80, enter that number instead of 8080.

Deploying an Adapter as a Web Service


Setting an adapter as a Web service involves the following:

- Provide the connection information for the Apache AXIS servlet. Apache AXIS is an implementation of SOAP (Simple Object Access Protocol).
- Define the Web service and select the interactions to include in the service.

To deploy an adapter as a Web service

1. Open Attunity Studio: from the Windows Start menu, select Programs, then select Attunity, and then click Attunity Studio.
2. Select the Design perspective.
3. In the Configuration manager, expand the machine with the adapter that you want to set as a Web service.
4. Expand the Bindings folder and then expand the binding with the adapter.
5. Right-click the adapter you want to deploy as a Web service, find Web Services, and then select Deploy. The Web Service Deployment wizard opens. This wizard has the following steps:

- Connection Information for the Axis Servlet
- Define a new Web Service for an Adapter
- Select the Interactions
- Summary Window


Connection Information for the Axis Servlet


In this step, you enter information that explains where to connect to the Axis servlet. The following figure shows where to enter this information.
Figure 9-1 Axis Servlet Connection Information

The following describes the information entered in this step:


- Host: Enter the name of the server hosting the Axis servlet.
- Enter the port information:
  - Port: The port used by the Axis servlet. If your Apache Tomcat Web server is defined as port 8080, select the Use default port check box.
  - Axis path: The path to the Axis servlet. If you copied the servlet into the Apache Tomcat webapps folder, you can use the default setting.

Enter the username and password to access the axis servlet. If you are using an anonymous connection, select the Anonymous connection check box.

Click Next. The Define a new Web Service for an Adapter step is displayed.

Define a new Web Service for an Adapter


In this step, you enter information that defines the Web service for the adapter. The following figure shows where to enter this information.


Figure 9-2 Web Service Definition

The following describes the information entered in this step:

- Name: Enter a name for the Web service. Each Web service must have a unique name that has only letters, digits, and underscores. To reuse a name, you must undeploy the Web service with this name before assigning the name to the new Web service. See Undeploying Web Services.

- Description: Enter a general description of this Web service. This is optional.
- Namespace: Enter a name for the namespace, if you want it to have a unique name. Select the Namespace enabled check box if you want to enable and use this namespace. If you do not enter a namespace, the namespace is created from the host name and Web service name.

- Enter the username and password for users who can access the adapter at runtime. If you are using an anonymous connection, select the Anonymous connection check box.
- Click Advanced Options if you want to use any of the advanced options. The Advanced Options window has the following tabs:

  - The General Tab
  - The Pooling Tab
  - The Map Tab

Click Next. The Select the Interactions step is displayed.


The General Tab


This tab lets you configure some advanced configuration options. The following figure shows the information that is entered in this tab.
Figure 9-3 The General Tab

Enter the following information in this tab:


- Timeout (sec): The Web server timeout in seconds.
- Firewall protocol: Select one of the following:
  - None: To use no protocol.
  - FixedNat: Select this if the machine you are working with has a fixed Network Address Translation (meaning that it is always connected to the same external IP address when accessing the Internet).
- Trace enabled: Select this to enable tracing the communication between AIS and the Web service. If enabled, specify a path and name for the trace file.
- Encryption: Select this if you are using encryption. Enter the information for the encryption used.

When you are finished entering information in this tab, click OK to close the window and return to the Define a new Web Service for an Adapter step. You can also click The Pooling Tab or The Map Tab to enter information there.

The Pooling Tab


This tab lets you configure some advanced pooling options. The following figure shows the information that is entered in this tab.


Figure 9-4 The Pooling Tab

Enter the following information in this tab:

- Maximum active connections: The maximum number of connections that can be active at the same time.
- Maximum idle connections: The maximum number of connections that can be available in the pool.
- Behavior when a connection is not available: Select one of the following to indicate what behavior you want if a connection is not available:
  - Fail: The interaction fails.
  - Grow: Create a new connection.
  - Block: Block the interaction.
- Time between idle connection examination runs (msec): The time (in milliseconds) to wait before checking if any connections are idle.
- Minimum idle time before removed from pool (msec): The minimum amount of time (in milliseconds) a connection can be idle before being closed.

When you are finished entering information in this tab, click OK to close the window and return to the Define a new Web Service for an Adapter step. You can also click The General Tab or The Map Tab to enter information there.

The Map Tab


This tab is used to map additional parameters for the connection. The following figure shows this tab:


Figure 9-5 The Map Tab

New connection parameters are entered as a key. For example, to open a new server for each operation, create a new key called newserver and enter 1 in the Value column.

To enter a new key

1. Right-click in the Key column and select Add.
2. Type in a name for the key.
3. Click in the Value column in the same row as the key to enter a value.

To delete a key, right-click the key you want to delete and select Delete. When you are finished entering information in this tab, click OK to close the window and return to the Define a new Web Service for an Adapter step. You can also click The General Tab or The Pooling Tab to enter information there.

Select the Interactions


In this step you select the interactions that you want to include in the Web service. This step has two columns. All interactions that are defined for the adapter are shown in the left column. For more information on adding interactions to your adapter, see Working with Application Adapter Metadata. The following figure shows this step.


Figure 9-6 Select the Interactions

You can add interactions to include in the Web service, and also remove them from the Web service. To add or remove interactions, select the interaction and click one of the following buttons:

- Select an interaction from the left column and click this button to move it into the right column and include it in the Web service.
- Click this button to move all interactions from the left column to the right column and include all of the interactions in the Web service.
- Select an interaction from the right column and click this button to move it into the left column and remove it from the Web service.
- Click this button to move all interactions from the right column to the left column and remove all of the interactions from the Web service.

When you are finished selecting interactions to include in the Web service, click Next. A Summary Window showing the Web service settings is displayed.

Summary Window
The summary window shows you the configurations that you entered in each step of the wizard. Look over the information in this window to be sure it is correct. Click Back to return to any previous step and edit the information, if necessary.


When you are sure all the information is correct, click Finish to deploy the Web service.

Undeploying Web Services


Web services can be removed either for all the adapters set as Web services in a binding configuration, or for a specific adapter.

To undeploy an adapter as a Web service

1. In the Design perspective Configuration manager, expand the machine with the adapter that has the deployed Web service you want to undeploy.
2. Expand the Bindings folder and then expand the binding with the adapter.
3. Right-click the adapter you want to undeploy as a Web service and find Web services, then select Undeploy. The Web Service Undeployment wizard opens.
4. Enter the Connection Information for the Axis Servlet.
5. Click Next. A list of the Web services is displayed. You can click a Web service to see the list of interactions that are associated with it.

Figure 9-7 Undeploy Web Services

6. Select the Web services you want to remove.
7. Click Finish to undeploy the selected Web services.

Viewing Web Services


Web services and the interactions associated with them can be viewed in Attunity Studio.

To view a list of Web services and the interactions associated with them

1. In the Design perspective Configuration manager, expand the machine with the adapter where you deployed the Web service.
2. Expand the Bindings folder and then expand the binding with the adapter.

3. Right-click the adapter with the Web services and interactions you want to view, and find Web services, then select List. The Web Service Undeployment wizard opens.
4. Click Next. A list of the Web services is displayed. You can click a Web service to see the list of interactions that are associated with it. This is the same wizard that is displayed when Undeploying Web Services; however, you can only view the Web services.

Logging Web Service Activities


Web Service activities are logged to the Web server logs. If you are using Tomcat, the logs are located in the Tomcat log folder in the Tomcat root folder, for example, Program Files\Apache Group\Tomcat 4.1\logs. The following logging information can be modified:

- The log file location.
- The level of error messages written to the log.
- The order of the error messages in the log.

Changes to the log file are set in log4j.properties, which is located in WebserverRoot\WebApps\Axis\Web-INF\classes\log4j.properties, where WebserverRoot is where the Web server is installed. For example: C:\Program Files\Apache Group\Tomcat 4.1\Webapps\axis\Web-INF\classes\log4j.properties

Changing the Log File Location


To change the log file location, enter the location of the log file in the logfile attribute:

log4j.appender.LOGFILE.File=log_file_location

For example:

log4j.appender.LOGFILE.File = c:\\Webservices\\log\\connect-Webservices.log

Changing the Error Message Level


The following error types are available:

- DEBUG
- INFO
- WARN
- ERROR
- FATAL

Enter the error type as shown in the example below:

log4j.logger.com.attunity.connect=error_type, LOGFILE


Changing the Error Message Format


To change the error message format, enter the formatting pattern as shown in the example below:

log4j.appender.LOGFILE.layout.ConversionPattern=formatting_pattern

The following is an example of the output:
2000-09-07 14:07:41,508 [main] INFO MyApp - Entering application. 2000-09-07 14:07:41,529 [main] INFO MyApp - Exiting application.
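Taken together, the entries changed in this section might look like the following sketch of log4j.properties. The file path is illustrative, and the conversion pattern shown is a standard log4j pattern chosen to approximate the sample output above; adjust both to your environment.

log4j.logger.com.attunity.connect=INFO, LOGFILE
log4j.appender.LOGFILE.File=c:\\Webservices\\log\\connect-Webservices.log
log4j.appender.LOGFILE.layout.ConversionPattern=%d [%t] %-5p %c - %m%n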



Part II
Attunity Connect
This part contains the following topics:

- Introduction to Attunity Connect
- Attunity Integration Suite Architecture Flows
- Implementing a Data Access Solution
- Setting up Data Sources and Events with Attunity Studio
- Implementing an Application Access Solution
- Setting Up Adapters
- Application Adapter Definition

10
Introduction to Attunity Connect
This section contains the following topics:

- Overview of Attunity Connect
- Logical Architecture
- System Components and Concepts

Overview of Attunity Connect


Attunity Connect is a suite of pre-built adapters to mainframe and enterprise data sources and applications. Attunity Connect resides natively on the data server to provide standard, service-oriented integration (SQL, XML, Web Services) to a broad list of data sources and applications on platforms ranging from Windows and UNIX to HP NonStop and z/OS. With robust support for metadata, bi-directional read/write access and transaction management, Attunity Connect simplifies and reduces the cost of legacy integration.

Logical Architecture
The Attunity Connect logical architecture spans the following machines:

- Backend Data Server
- Client Application
- Client Workstation (Studio)

System Components and Concepts


At the heart of the Attunity Connect integration solution are the Attunity engines. These shared, multi-platform engines manage access and updates across the IT infrastructure. They can also integrate information from disparate systems with a single request. The Attunity engines can receive requests from a wide variety of supported APIs. The relevant engine processes the requests interacting with applications and data sources via Attunity Connect application adapters and data source drivers and returns the results, again via the interfaces. Attunity Connect buffers the user from the applications and data sources requested, so that it appears as one consistent API and data model, regardless of the source. The movement of information from any of the supported applications and data sources to the consumer is completely transparent to the consumer application.

The engines provide comprehensive transaction support to the extent allowed by the sources' support for two-phase commit functions. AIS installations include the following engines:

Data Engine
The data engine accesses, updates, and joins enterprise information from data sources as if they were all relational databases. At the same time, it takes advantage of its query optimizer to determine the fastest way to carry out these tasks, minimizing the load on IT resources, networks, and systems. Because the data engine uses a relational model, it normalizes the data, converting hierarchical structures into tables without redundant data. By combining the relational model with the SQL language, the data engine allows applications to issue the same complex query to multiple data sources without tuning it to each target source. The relational approach also simplifies access via commercial tools and applications that interoperate with relational sources.

Clients can use industry-standard JDBC, ODBC, ADO/OLE DB, and .NET interfaces to submit SQL requests to the data engine. By using either the Database or Query application adapter, you can also use JCA, XML, or COM as the client interface.

When the data engine receives and parses an SQL request, it first determines which data source is involved, where the data resides, and how the source handles data. The data engine determines how to carry out the process based on metadata that it retrieves from a local cache, from the repository, or dynamically from the backend data sources. Then, the data engine generates a query execution plan in the form of a tree. Whenever possible, the data engine passes the entire request to the underlying data source. In this case, the engine translates between standard ANSI SQL 92 and the underlying database's SQL dialect. The data engine can also accept pass-through queries to nonstandard SQL functions supported by the target source. If a data source offers limited SQL capabilities, the data engine implements missing functions as needed. If the data source offers no SQL capabilities at all, the data engine breaks the request into simple retrieval operations that an indexed or sequential table can read.

Query Optimizer
The data engine includes the query optimizer, which minimizes execution time and resource consumption. The query optimizer enhances the data engine's initial query execution plan based on the query structure, network structures, the target data sources' capabilities and locations, and the statistical information available for each table. To maximize the efficiency of query execution, the query optimizer uses various caching and access techniques, including read-ahead, parallelism, and lookup-, hash-, and semi-joins. It flattens views, breaks out and propagates simple predicates down the tree, reorders joins, directs join strategies, selects indexes, and performs other related tasks. If the target data source is distributed across multiple machines, the data engine and query optimizer together generate a distributed execution plan that minimizes network traffic and round-trips.

Performance Tuning Tools

Database administrators can review and control the optimization strategies that the optimizer uses. Using the query analyzer, IT staff can monitor accumulated statistics and heuristic information to evaluate the success of the optimization strategies. These tools enable users to evaluate and understand the way specific queries work by specifying hints, flags, optimization goals (first-row or all-rows optimization), and other query properties, such as requests for scrollable or updateable cursors.

Data Sources
Native data source drivers utilize the native API of each data source. Attunity Connect tailors these drivers to the individual data model and performance characteristics of the particular data source. These source-specific drivers share common logic. These drivers deal with metadata, describing the information offered by the data sources and mapping the underlying data model and functionality into relational or ISAM (Index Sequential Access Methods) models. Drivers for nonrelational data sources support sequential and indexed access, array structures and hierarchical structures, and other functions without normalization or other time-consuming operations. Drivers fall into one of the following classes:

- RDBMS drivers to access data providers that support some dialect of SQL
- Non-relational drivers to access data providers that do not support SQL
- File system drivers to access files
- Procedure drivers to access program functionality

The functionality within each class can be factored further. For RDBMS drivers, Attunity Connect needs to be aware of the syntactic and semantic nuances of the SQL dialect for each specific data source, of the library functions supported by the data source, and other details. Similarly, a file system may or may not support functionality such as indexes, BLOBs and embedded rowsets, or be capable of efficiently executing single-table predicates (filters). For relational data sources not supported by Attunity Connect through a custom driver, Attunity Connect provides generic ODBC and OLE DB gateways. A syntax file (NAV.SYN) encapsulates backend peculiarities and facilitates the connection of the new data source to Attunity Connect using one of these generic drivers. The following data sources are supported:
Table 10-1 Supported Data Sources

Relational:
- DB2
- Informix
- Ingres II (Open Ingres)
- Oracle 8
- Oracle Rdb (OpenVMS only)
- SQL/MP (HP NonStop only)
- SQL Server (Windows only)
- Sybase

Non-relational:
- Adabas
- CISAM
- DBMS (OpenVMS only)
- DISAM
- Enscribe (HP NonStop only)
- IMS/DB (z/OS only)
- RMS (OpenVMS only)
- VSAM under CICS and VSAM (z/OS only)

File System:
- Flat Files
- ODBC
- OLEDB-FS (Flat File System)
- OLEDB-SQL (Relational)
- Text-Delimited

Procedures:
- Procedure
- IMS/TM
- Natural
- CICS Procedure (Application Connector)
- CICS Procedure

In addition, procedure drivers are provided to enable access to program functionality. These procedures include a generic procedure driver and specific drivers for CICS, IMS/TM, and Natural programs on a z/OS platform.

Interfaces and APIs


The following standard APIs are supported for accessing data:

- JDBC Client Interface: A pure-Java Type-3 driver that supports J2EE JDBC (such as data sources, distributed transactions, hierarchical record sets, and connection pooling). The JDBC interface is available on all platforms that support Java.
- ODBC Client Interface: The ODBC interface enables organizations to use the API of choice for most popular client-server business intelligence tools. The ODBC interface implements the ODBC 2.5 and ISO CLI standards, so that COBOL and other 3GL programs on any platform can call it. The ODBC interface is available on all platforms running AIS.
- ADO Client Interface: An OLE DB/ADO interface that supports advanced features, including chapters, scrollability, and multi-threading. The OLE DB/ADO interface is compatible with all Microsoft tools and applications. This provider also functions as a database gateway for Microsoft SQL Server, allowing SQL Server users to access all available data sources. The OLE DB/ADO interface is available on the Microsoft Windows platforms.
- .NET Client Interface: ADO.NET is the data-access component of the Microsoft .NET Framework. AIS supports all ADO.NET objects, methods, and properties as well as additional .NET Data Provider classes.

Transaction Support
To ensure the integrity of simultaneous updates to multiple data sources, Attunity Connect supports distributed transactions in the following ways:

- As a distributed transaction manager, for safe, reliable multi-server transactions.
- As an OLE Transaction resource manager, enabling all AIS-enabled data sources to participate in distributed Microsoft MTS transactions using the OLE Transactions protocol.
- As an XA resource manager.
- Using the JDBC 2.x Interface, so that all AIS-enabled data sources can participate in distributed J2EE transactions under a J2EE application server.

A transaction log file backs up two-phase commit (2PC) operations so that transactions can be recovered in the event of a failure. Generally, the ability to support distributed transactions depends on the capabilities of the data sources that participate in the transaction. Attunity Connect cannot guarantee data integrity for AIS-enabled data sources that do not support the 2PC protocol. However, Attunity Connect does employ various optimizations to extend the coverage of 2PC support. For example, Attunity Connect can execute a 2PC process as long as it involves a maximum of one 1PC data source while all the other data sources support 2PC.

Application Engine
The application engine provides an enterprise application integration (EAI) model that enables any kind of application to interact with applications and data sources via their own native interfaces. On the client end, applications use industry-standard interfaces to communicate with the application engine. In turn, the application engine communicates with specific adapters that access enterprise and application data on the server end. An XML-based schema supports precise application mapping. As a result, the application engine opens legacy applications of all types to integration with each other and with cutting-edge technologies such as Java and XML.

Because the application engine uses an EAI model, interactions with the source application or data are precise and predictable. Requesting applications can specify exactly how interactions occur. Moreover, the application engine maps data structures faithfully, facilitating access to familiar applications within new environments. The EAI model is particularly suitable for deployment over the Internet using protocols such as TCP/IP and HTTP because it allows for both stateful and stateless interactions and can batch requests to minimize network traffic.

A distinctive feature of the application engine is its ability to translate application structures, such as those typical of legacy COBOL applications, to and from XML. Web and other solutions can use the application engine to interact with both applications and data sources through a growing number of XML-based tools, as well as other application-oriented frameworks such as Sun's J2EE JCA (J2EE Connectors Architecture) and Microsoft .NET's SOAP-based interfaces.

Application Adapters
With the AIS generic application adapter, developers can incorporate applications into Attunity Connect solutions. They can wrap and leverage existing application-specific business logic, protect data integrity by writing to a data source through its original application, and trigger operational tasks, all with a single adapter that runs consistently across diverse platforms. The following application adapters are provided:

- CICS (on z/OS platforms)
- COM applications (on Windows platforms)
- Database and Query adapters, providing access to any supported data source via XML, JCA, COM, or .NET
- IMS/TM (on z/OS platforms)
- Legacy applications
- Pathways (on HP NonStop Himalaya platforms)
- Tuxedo (on Windows and UNIX platforms)

Interfaces and APIs


The following APIs are supported for accessing applications:

- XML Client Interface: The XML application interface enables any application with an XML API to access applications and data. The XML application interface supports an XML-based protocol modeled after the JCA architecture. This protocol is both human-readable and programmatically easy to use, and is well suited for web-based, Internet-wide use, particularly in conjunction with XML transformation engines. The XML application interface is directly available (callable) on all the platforms where AIS runs. On other platforms, it is accessible via network protocols such as TCP/IP (sockets) and HTTP.

- JCA Client Interface: The JCA (J2EE Connectors Architecture) interface supports application servers based on J2EE (Java 2 Enterprise Edition) standards. It provides a robust, efficient, and reliable way to integrate Java applications with native applications.
- COM Client Interface: A COM component that enables access to application adapters and services using the XML protocol. The COM component provides seamless integration with Microsoft Windows products and technologies.
- .NET Client Interface: A .NET component, called NetACX, that enables access to application adapters from any .NET-based application.

Events
AIS handles events via an event queue. The event queue is defined as an adapter in Attunity Studio where interactions are expected as events. The event queue itself is managed by a dedicated server process, which is set up by defining an events workspace in the daemon definition.

Attunity Server
Attunity Server is the server component that includes all the software required to enable AIS to run. AIS has a native installation wizard on each of the supported platforms, simplifying deployment. It takes no special skills to install AIS, so IT staff can set up the AIS infrastructure by applying only their platform- and application-specific expertise.

Attunity Studio
Attunity Studio is the configuration tool for AIS. Configuration using Attunity Studio is performed on a Windows platform. The configuration information is stored in the AIS repository on the backend system.

Design Time
Attunity Studio is used during design time to perform the following configuration tasks:

- Set up access to machines running AIS.
- Configure the daemon that is responsible for managing communication between AIS machines.
- Configure metadata for both data sources and adapters. For all relational data sources and some non-relational data sources, Attunity Connect uses the native metadata. Otherwise, the metadata is specified to Attunity. See Working with Metadata in Attunity Studio for more information.

Runtime
Attunity Studio is used during runtime to perform the following management tasks:

- Modify the configuration settings.
- Manage daemons and workspaces during runtime (see Runtime Management with Attunity Studio).


Metadata Repository
AIS supports an XML-based schema. The schema and the AIS configuration are stored on the server file system and represent the repository. There is a single main repository for each installation of AIS. The repository maintains server-wide definitions (such as the daemon configuration) and application adapter definitions. There is also a repository for each data source which uses Attunity metadata. These repositories are optimized for fast run-time access. These repositories are not restricted by native operating system file naming conventions.

System Repository
The system repository is used for information that is general to AIS, such as:

- Binding information, including the names of configured backend adapters and drivers, and environment settings.
- Daemon definitions, to control client-server communication.
- User profiles, enabling single sign-on to multiple backend applications and data sources.
- Information used directly by the Query Processor.
- An adapter definition for each adapter defined. For each adapter, this includes a list of its interactions and the input and output structures that are used by these interactions.

The system repository is called SYS.

Data Source Repositories


Each data source can have a repository. For data sources that do not have native built-in metadata information (such as nonrelational data sources), Attunity has developed a proprietary metadata mechanism that enables users to describe the data in a standard way. The information in the repository includes:

- Attunity metadata for non-relational data sources, files, and Attunity Connect procedures; extensions of the native metadata of the data source (the extended metadata feature); and a snapshot of native metadata for better performance (the local copy feature).
- Synonym definitions for some of the data sources.

Attunity Configuration Model


Attunity Connect takes advantage of daemons, client software, and server software to support seamless operations across both local and remote distributed environments.

Daemons
A daemon, called IRPCD, runs on every machine running AIS and manages communication between machines running AIS. The daemon is responsible for allocating server processes to clients. The daemon authenticates clients, authorizes requests for a server process within a certain server workspace, and provides clients with the required servers. When a client requests a connection, the daemon allocates a server process (or, where applicable, a thread) to handle this connection, and refers the client to the allocated process. This may be a new process (dedicated to this client) or an already-started process. Further communication between the client session and the server process is direct and does not involve the daemon. The daemon is notified when the connection ends, and the process is either killed or remains available for use by another client.

The daemon supports multiple server configurations called workspaces. Each workspace defines accessible data sources, applications, environment settings, security requirements, and server allocation rules. The allocation of servers by the daemon is based on the workspace that the client uses to access the data source. Thus, a client can access a data source via one workspace, where a server process is allocated from an existing pool of servers, or via a different workspace, where a new server process is allocated for each client request. A fail-safe mechanism allows the specification of alternate daemons, which function as a standby for high availability.

Client Communication Software


Within Attunity Connect's symmetrical operation, clients serve as agents that request remotely located data. Client software, which includes one or more application-specific interfaces, resides on every system that needs to interact with data. To the calling application, clients look like local data providers. They receive requests for data and metadata and either execute those requests locally or dispatch them to an appropriate server.

The communication protocol minimizes processing and traffic by negotiating data formats (for example, when similar systems talk, there is no need to switch data formats) and by avoiding repeated transmission of the same data over the network. The communication subsystem handles machine-dependent transformations such as big/little endian translations, floating point format translations, and single- and multi-byte character encoding translations. Attunity Connect dynamically determines these translations upon initial connection between a client and a server, based on the nature of the parties involved in the connection.

Clients maintain caches of data and metadata, which enable them to satisfy many requests locally without needing to go to the server. They also batch some commands in order to avoid unnecessary network traffic. Upon the first remote request by a particular user session, a client starts a corresponding session for this user on one or more servers. Each server session remains open until the client session terminates. In the case of systems using connection pooling (such as MTS or IIS), the client and server sessions may stay open indefinitely. In the event that server operations terminate (for example, someone restarts the server machine or communication is lost), the client automatically reestablishes the connection upon the next remote operation.

Server Communication Software


Isolating different server configurations increases readability and flexibility. These configurations enable solutions to serve different clients, including classic two-tier client-server applications, three-tier applications with connection pooling, and ad-hoc usage. Each configuration specifies the following:

- Processing mode, specifying multi-threaded/single-threaded operations, the number of processes, and server pools.
- Security settings for impersonation, authorized users, administrators, and encryption.
- Accessible adapters.
- Various operational parameters.

AIS also supports multi-version interoperability. IT teams can simultaneously install multiple versions of AIS on all supported operating systems. As a result, organizations can begin to deploy new software versions in a staged update process, without interrupting the operations of previous versions. At the provider end of the system, servers act as agents that access, read, manipulate, and write to data sources and applications. Servers accept commands from clients, call the corresponding local functions, and package and return the results to the clients.
Note:

In the event that a target data source actually resides on another machine, the source is represented by an agent on the server using a third-party communications component such as Oracle Net Servers or Sybase CT-Lib, which is transparent to Attunity Connect.

Server software, which includes the application and data engines, drivers and adapters, resides on every AIS machine. When a client requests a connection, the daemon allocates a server process to handle this connection, and refers the client to the allocated process. This may be a new process (dedicated to this client) or an already-started process. Further communication between the client session and the server process is direct and does not involve the daemon. The daemon is notified when the connection ends and the process is either killed or remains for use by another client. This kind of server model is very flexible. It accommodates different operating systems and data source requirements. AIS supports several server models:

- The multi-threaded model is effective when the data sources support multi-threading (on Windows platforms only).
- Serialized multi-client server processes are useful for short requests and for data sources that allow more than one simultaneous client per process.
- The single-client-per-process model supports data sources that handle only one client per process and maximizes client isolation.

Server processes can be reused. Various server process pooling options allow organizations to tune the solution for different application and load requirements.



11
Attunity Integration Suite Architecture Flows
This section has the following sections:

- Overview
- Data Source Architecture
- Application Adapter
- Database and Query Adapter
- Change Data Capture (CDC) Flow

Overview
This section provides architectural drawings that show the basic flows used when you deploy AIS. AIS provides solutions for integrating data between data sources and through applications. In addition, you can also work with change data capture (CDC). The basic AIS flows are:

- Integration using a relational data source
- Integration using a non-relational data source (file system)
- Integration using generic database or query adapters
- Application-based integration
- Change data capture

Data Source Architecture


There are two types of data source flows: those that use a data source directly and those that use the query engine.

- Data Source
- Query Engine Flow


Data Source
The following figure shows the flow architecture for integrating data with relational data sources.
Figure 111 Relational Data Source Flow

This table shows the flow for data integration from the client machine to the server machine. The client side shows the APIs used for Java and other applications. The following table describes the parts in the relational data flow. For more information on working with data access, see Implementing a Data Access Solution.
Table 11-1 Relational Data Flow Parts

Client Platform
- Java-based applications:
  - Consumer Application in Java: An application written in Java. This is a Java application being run by the user on the client machine.
  - JDBC API: The Java programming interface that allows external access to SQL queries.
  - NAV API: The standard programming interface for AIS.
- Non-Java applications:
  - Consumer Application: Any application not written in Java. This is an application being run by the user on the client machine.
  - Data API: An interface that supports data and allows SQL queries.
  - NAV API: The standard programming interface for AIS.
- Query Engine and Optimizer: The query engine parses the SQL and creates a statement that is read by multiple data sources, if necessary. The query optimizer refines the SQL statement so that it is most efficient and takes the least amount of time to send and receive the results. The way that a statement is handled by AIS depends on the data source types that are queried. AIS queries relational and non-relational data sources differently. For a more detailed explanation of this part of the data flow, see Query Engine Flow.

Server Platform
- IRPCD (Daemon): This is the process that manages communication between all the machines in the flow.
- Service Loop: The service loop is the following: get request, handle request, revise request, and continually repeat the process.
- Driver: The connecting driver for your data source. Attunity supplies many data source drivers. For more information, see the Data Source Reference.
- GDB API: The Attunity data driver API. For more information, see The Attunity Connect Developer SDK.
- Custom Driver: A driver created for a specific data source that is not one of the drivers supplied with AIS.
- Database API and Database: The API for the database on the backend and the actual database used in this flow. The code on this level belongs to the database.

Query Engine Flow


AIS handles requests differently depending on whether you are using relational data sources, non-relational data sources, or both types. Attunity allows you to make SQL queries to any type of database. The following sections show how the request is handled by the Query Processor for each type of request.

- Making a Request between Two Relational Databases
- Making a Request between Two Non-Relational Databases
- Making a Request between a Relational Data Source and a Non-Relational Data Source


Making a Request between Two Relational Databases


The following figure shows a tree that traces an SQL query on two Oracle (relational) databases. In this case, the SQL statement requests a left outer join (LOJ) on the two Oracle tables. The query engine and optimizer carry out the request and return a result set based on the table columns in the statement.
Figure 112 Two Relational Data Sources
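The following sketch shows what such a request looks like to a client. It is illustrative only: the connection details, the data source names ora1 and ora2, the orders table, and its columns are assumptions; the customer columns follow the demo metadata shown in Implementing a Data Access Solution.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TwoRelationalLojSketch {
    public static void main(String[] args) throws Exception {
        // Connection URL format is an assumption; ora1 and ora2 are hypothetical
        // Oracle data sources defined in the same binding.
        Connection con = DriverManager.getConnection(
                "jdbc:attconnect://ais-server:2551/Navigator", "user", "password");
        Statement stmt = con.createStatement();

        // One statement spans both data sources; the query engine splits and
        // optimizes it and returns a single LOJ result set to the client.
        ResultSet rs = stmt.executeQuery(
                "SELECT c.c_custkey, c.c_name, o.o_orderkey " +
                "FROM ora1:customer c LEFT OUTER JOIN ora2:orders o " +
                "ON c.c_custkey = o.o_custkey");
        while (rs.next()) {
            System.out.println(rs.getInt(1) + " " + rs.getString(2));
        }
        rs.close();
        stmt.close();
        con.close();
    }
}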


Making a Request between Two Non-Relational Databases


The following figure shows a tree that traces an SQL query on two DISAM (non-relational) data sources. In this case, the SQL statement requests a left outer join (LOJ) on the two DISAM files. The query engine in this case splits into two branches. The first branch looks up the information in the file. The second branch uses the information in the SQL statement to carry out a filtering action and then looks up the information on the second file according to the filter's rules.
Figure 113 Two Non-Relational Data Sources

Making a Request between a Relational Data Source and a Non-Relational Data Source
The following figure shows a tree that traces an SQL query on a DISAM (non-relational) data source and an Oracle (relational) data source. In this case, the SQL statement requests an LOJ on the DISAM file and the Oracle table. The query engine splits into two branches. The first branch looks up the information in the DISAM file. The second branch uses the information in the SQL statement to carry out a filtering action and then looks up the information in the table using the filtering rules to find the correct columns.
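A sketch of such a mixed query is shown below. The DISAM side uses the demo customer table described in Implementing a Data Access Solution; the connection details, the disam1 and ora1 data source names, the orders table, and the filter value are assumptions for illustration only.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class MixedLojSketch {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:attconnect://ais-server:2551/Navigator", "user", "password");

        // One branch of the query engine reads the DISAM file; the other branch
        // applies the filter when it looks up the matching Oracle rows.
        PreparedStatement ps = con.prepareStatement(
                "SELECT c.c_custkey, c.c_name, o.o_orderkey " +
                "FROM disam1:customer c LEFT OUTER JOIN ora1:orders o " +
                "ON c.c_custkey = o.o_custkey " +
                "WHERE c.c_mktsegment = ?");
        ps.setString(1, "BUILDING");   // hypothetical segment value
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getInt(1) + " " + rs.getString(2));
        }
        rs.close();
        ps.close();
        con.close();
    }
}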


Application Adapter
The following figure shows the flow architecture for integrating data between applications.
Figure 114 Application Flow

The following table describes the parts of the application flow. The client side is always a thin client and contains the application. The server side contains the main AIS installation and the necessary adapters. For more information on using application access, see Implementing an Application Access Solution.
Table 11-2 Application Flow Parts

Client Platform
- Consumer Application: This is an application being run by the user on the client machine. The application can be 3GL, JCA, COM, or XML.
- ACX Gateway: A gateway based on the ACX protocol. For more information on the ACX protocol, see the Attunity Connect Developer SDK.
- ACX Client: A client based on the ACX protocol. This is always a thin client.

Server Platform
- IRPCD (Daemon): This is the process that manages communication between all the machines in the flow.
- Service Loop: The service loop is the following: get request, handle request, revise request, and continually repeat the process.
- ACX Dispatcher: The ACX-based program that sends XML queries to the application.
- Adapter: The adapter that is used to provide access to your application.
- GAP API: The Attunity Application API. For more information, see The Attunity Connect Developer SDK.
- Custom Adapter: An adapter created for a specific application that is not one of the adapters supplied with AIS.
- Application and Application API: The actual application used in this flow and its API. The code on this level belongs to the application.

Change Data Capture (CDC) Flow


The following figure shows the flow architecture for a change data capture. The color coding in the diagram indicates each part's origin (Attunity, third party, or both, depending on the CDC agent used). For more information on the parts of the flow for each CDC agent, see the CDC Agents Reference.
Figure 115 Change Data Capture Flow

The following table describes the parts of the Change Data Capture flow. A change data capture captures changes to databases and writes them to a log file. For more information, see Implementing a Change Data Capture Solution.


Table 11-3 Change Data Capture Flow Parts

- Data Source: The data source where the changes are made. The user application makes the changes to the data source.
- Change Logger: The tool that identifies and captures the changes from the data source.
- Change Source: A data source that saves raw change records.
- Change Provider: An Attunity Stream component that produces Attunity Stream change records (change stream).
- Change Agent: A change provider that retrieves raw change records (raw change stream) from the change source and produces Attunity Stream change records (change stream).
- Change Warehouse: A change provider that provides value-added services on a change stream. This element is optional. The change warehouse produces a change stream.
- Change Client: An Attunity component that provides change stream records to a change consumer.
- Change Consumer: A third-party tool or application that consumes change stream records. Examples include ETL and EAI tools and messaging systems.

Database and Query Adapter


The following figure shows the flow architecture when integrating data with the Query or Database Adapter.
Figure 116 Query and Database Adapter Flow

The figure above shows how the query and database adapters handle a request. The flow has two parts. The top (above the dotted line) is the client side. The client side has the application that is making a request to the adapter. The application is based on the ACX protocol. For information on the ACX protocol, see the Attunity Connect Developer SDK. The request is sent to the adapter on the server. If you are using a database adapter, the request is phrased in a standard XML format, which is converted to an SQL query. If you are using a query adapter, the SQL is entered directly.
Note:

The database adapter is not part of the flow if you use the Query adapter.

The SQL query is sent to the query processor, which sends it to the data source. The data source returns the result of the query, which is sent back to the application. In the figure above, the left side shows the input requests and the right side shows the output results for each phase of the flow.



12
Implementing a Data Access Solution
This section includes the following topics:

- Overview
- Setting Up AIS for Data Access
- Installing AIS
- Configuring the System for Data Access (Using Studio)
- Supported Interfaces
- Data Access Flow
- Data Source Metadata

Overview
A data access solution uses data source drivers to let you connect to a supported data source from JDBC or ODBC applications on all platforms. Attunity data sources support ADO and .NET applications on Windows platforms. The following data source drivers are provided:
Table 12-1 Data Source Drivers

Relational:
- DB2 Data Source
- Informix Data Source
- Ingres II (Open Ingres) Data Source
- ODBC Data Source
- OLEDB-SQL (Relational) Data Source
- Oracle Data Source
- Oracle RDB Data Source (OpenVMS Only)
- SQL/MP Data Source (HP NonStop Only)
- SQL Server Data Source (Windows Only)
- Sybase Data Source

Non-relational:
- Adabas C Data Source
- CISAM/DISAM Data Source
- DBMS Data Source (OpenVMS Only)
- Enscribe Data Source (HP NonStop Only)
- Generic Flat File Data Source
- IMS/DB Data Sources
- OLEDB-FS (Flat File System) Data Source
- RMS Data Source (OpenVMS Only)
- Text Delimited File Data Source
- VSAM Data Source (z/OS) under CICS and VSAM (z/OS only)


If you are working with a data source for which Attunity does not supply a driver, Attunity supplies an SDK to allow you to develop a driver for your data source. For details, see the Attunity Developer SDK reference.

Setting Up AIS for Data Access


This section provides the general workflow when using a data access solution in Attunity Connect.
1. Install AIS on the Backend. The backend comprises the machines where the databases are located. You also install the data source drivers here.
2. Install the Attunity Server Software.
3. Install Attunity Studio.
4. Configure the Machines where you installed AIS and the data sources in Attunity Studio.
5. Configure User Profiles.
6. Set up and Configure the Binding for the data source.
7. Configure the Data Sources in the Binding.
8. Set Up the Data Source Metadata.

Installing AIS
Before you begin the process, you must install the necessary components. These include the AIS client and server, the data source drivers, the Attunity server software, and Attunity Studio. This section shows where to install the AIS components necessary for a data access solution.

Install AIS on the Backend


The backend is where your data is stored. In a data access solution, you access a data source, such as an Oracle or DB2 database, to get the necessary data. You install the full (thick) version of AIS on the backend. You must also install the necessary data source drivers on the backend. For information on how to install AIS, see the installation guide for your platform.

Install the Attunity Server Software


Install the main AIS software on your main integration server machine. For information on installing the Attunity server software, see the installation guide for your platform.

Install Attunity Studio


Install Attunity Studio on any Windows computer in your system. Attunity Studio provides a graphic interface for configuring the components in your system. For information on installing Attunity Studio, see the installation guide.


Configuring the System for Data Access (Using Studio)


You make all the necessary set up and configurations in Attunity Studio. This section describes the steps necessary to use Attunity Studio to configure your system to work with data access.

Configure the Machines


You must configure the machines used in the system. Make sure to configure the machine where your backend data source and data source driver reside. For information on how to add and configure machines, see Setting up Machines. You can also test the machine connection in Attunity Studio.

Configure User Profiles


You must set up the users in your system. Setting up users is for security purposes. You can specify which users have access to various machines. For information on setting up user profiles, see User Profiles and Managing a User Profile in Attunity Studio.

Configure the Binding


You add bindings to Attunity Studio before you can add the data source. A Binding Configuration always exists on a server machine where the data sources to be accessed reside. A binding configuration can also be defined on a client machine to point to data sources on a server machine.

Configure the Data Sources in the Binding


You add Data Sources to a binding in Attunity Studio. After you add the data source, you define the connection and configure the data source's environment properties. For a description of the environment properties for each data source, see the section for the data source you are working with. You may need to create a data source shortcut. A data source shortcut is a definition of a data source on a machine that refers to a data source in the binding of a remote machine. You need a data source shortcut when you access the procedure from a client machine using either ADO or the "fat" version of ODBC.

Set Up the Data Source Metadata


Non-relational data sources (excluding Adabas, when Predict metadata is used) require Attunity metadata. This Attunity metadata is stored as a data source definition in an Attunity data source repository, on the machine where the driver is defined. AIS metadata lets you access the data from a non-relational database with SQL commands. For data sources that require AIS metadata, the metadata is imported and maintained using the Attunity Studio Design perspective Metadata tab. If COBOL copybooks describing the data source records are available, you can import the metadata by running the metadata import in the Attunity Studio Design perspective Metadata tab. If the metadata is provided in a number of COBOL copybooks, with different filter settings (such as whether the first 6 columns are ignored or not), you import the metadata from copybooks with the same settings and later import the metadata from the other copybooks. For more information, see Managing Metadata.


Supported Interfaces
AIS Data Sources provide universal connectivity in many standard interfaces. In addition, Attunity also provides generic drivers for various interfaces, such as ODBC and JDBC. Attunity's universal connectivity uses standard ANSI 92 SQL to query any supported data source, whether relational or non-relational. Specific versions of SQL for relational data sources can be used by making the SQL query directly to the data source. The following interfaces are supported by AIS:

- JDBC Client Interface: A pure-Java Type-3 driver that supports J2EE JDBC features (such as data sources, distributed transactions, hierarchical record sets, and connection pooling). The JDBC interface is available on all platforms that support Java.
- ODBC Client Interface: The ODBC interface enables organizations to use the API of choice for most popular client-server business intelligence tools. The ODBC interface implements the ODBC 2.5 and ISO CLI standards, so that COBOL and other 3GL programs on any platform can call it. The ODBC interface is available on all platforms running AIS.
- ODBC Client Interface Under CICS (z/OS Only): The ODBC interface on a mainframe is accessed through either a COBOL or C program running under CICS. On the z/OS machine you do not need to run the daemon if the daemon is running on the target machine.
- OLE DB (ADO) Client Interface: An OLE DB/ADO interface that supports advanced features, including chapters, scrollability, and multi-threading. The OLE DB/ADO interface is compatible with all Microsoft tools and applications. This provider also functions as a database gateway for Microsoft SQL Server, allowing SQL Server users to access all supported data sources. The OLE DB/ADO interface is available on Microsoft Windows platforms.
Note:

The Windows Attunity Server kit is required to use the ADO client interface.

- .NET Client Interface: ADO.NET is the data-access component of the Microsoft .NET Framework. AIS supports all ADO.NET objects, methods, and properties as well as additional .NET Data Provider classes for specific use with AIS.
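As a minimal sketch of how a client uses one of these interfaces, the following JDBC fragment connects to an AIS server and issues standard SQL against the demo customer table. The driver class name, connection URL format, server name, port, workspace, and the navdemo data source name are assumptions for illustration only; the column names follow the metadata example in Data Source Metadata.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcAccessSketch {
    public static void main(String[] args) throws Exception {
        // The driver class name and URL format are assumptions for this sketch.
        Class.forName("com.attunity.jdbc.NvDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:attconnect://ais-server:2551/Navigator", "user", "password");

        // Standard ANSI 92 SQL; navdemo is a hypothetical data source name in the binding.
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery(
                "SELECT c_custkey, c_name, c_acctbal FROM navdemo:customer");
        while (rs.next()) {
            System.out.println(rs.getInt("c_custkey") + " " + rs.getString("c_name"));
        }
        rs.close();
        stmt.close();
        con.close();
    }
}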


Data Access Flow


When you use the data access solution, you connect directly to a data source with a driver. The connection is made within one of the Supported Interfaces. Information is accessed through standard Attunity metadata from the data source driver using standard SQL syntax (for more information, see Attunity SQL Syntax). The following figure shows the data access solution flow.
Figure 121 Data Access Flow

Data Source Metadata


Metadata defines the structure of the data and where it is located. AIS converts the metadata to an XML structure that you can view in Attunity Studio. When you configure the data source driver you must import the metadata from the original data source. AIS uses the metadata from relational databases, such as Oracle or DB2. For most non-relational data sources or data sources which do not have metadata, AIS requires its own metadata. This allows AIS to structure the data for these data sources into a table structure similar to a relational database. This lets you use SQL requests to access data from these data sources. For more information on metadata, see Managing Metadata. The following is an example of the metadata in a standard AIS data source adapter.
Example 121 Data Source Metadata

<?xml version="1.0" encoding="UTF-8"?>
<table name="customer" datasource="___NAVDEMO" description=""
       fileName="C:\Program Files\Attunity\Server\demo\customer"
       nBlocks="0" nRows="0" bookmarkSize="4" organization="index">
  <fields>
    <field name="c_custkey" datatype="int4"/>
    <field name="c_name" datatype="cstring" size="25"/>
    <field name="c_address" datatype="cstring" size="40"/>
    <field name="c_nationkey" datatype="int4"/>
    <field name="c_phone" datatype="string" size="15"/>
    <field name="c_acctbal" datatype="double"/>
    <field name="c_mktsegment" datatype="string" size="10"/>
    <field name="c_comment" datatype="cstring" size="117" nullable="true"/>
  </fields>
  <keys>
    <key name="cindex" size="4">
      <segments>
        <segment name="c_custkey"/>
      </segments>
    </key>
  </keys>
</table>

Configuring the System for Data Access Using XML


You can use the Update XML, Insert XML, and Select XML commands when using a data access solution for a non-relational data source. These commands are supported on the following data sources:

- Adabas C Data Source
- Enscribe Data Source (HP NonStop Only)
- VSAM Data Source (z/OS)
- CISAM/DISAM Data Source

Set up the Environment in Attunity Studio


You set up the machine, binding, and user profiles in Attunity Studio as described in Configuring the System for Data Access (Using Studio).

Expose the XML Field


In the binding that you added, you must configure the binding properties to expose the XML fields.

To expose the XML field
1. Right-click the binding you are working with and select Edit Binding.
2. In the Binding editor, click the Properties tab (at the bottom of the editor).
3. In the Properties tab, expand Misc.
4. Find the exposeXMLField property, click in the Value column, and select true.
5. Save and refresh the binding.
Note:

The binding must be on the server where AIS is installed.

See also:

Environment Properties


Set up the Data Source and Import the Metadata


Add the data source you are working with to Attunity Studio and import the metadata as described in Configure the Data Sources in the Binding and Set Up the Data Source Metadata.

Prepare an Input XML Structure


You must prepare an XML string that is related to the metadata record. For an SQL interface, such as ADO .NET, the application must refer to a field called xml. This field is available when you Expose the XML Field. The xml field is used as a parameter field that receives a value that is set to your XML string. The following is an example of the required structure:
command.CommandText = "insert into adabas:TABLE1(xml) values (?)";
param.Value = "<TABLE1 SMOD='ABC123' USER='USER4' >" +
              "<MU1>34</MU1>" +
              "<MU1>35</MU1>" +
              "<MU1>36</MU1>" +
              "<MU2>A</MU2>" +
              "<MU2>B</MU2>" +
              "<MU2>Z</MU2>" +
              "</TABLE1>";

Setting up the Query and Database Adapter for XML Operations


You can use the Query or Database adapter to carry out XML operations. The XML operations that are supported are:

- Update XML
- Insert XML
- Select XML

Using the Query Adapter for XML Operations


Use the Query adapter to access data using the XML interface. The following example shows how to use the Update XML operation. In this case, use the Query adapter's Update interface with the following syntax:
Example 122 Update XML

<update>
  <sql>update adabas:table1 set xml=? where ISN='13'</sql>
  <inputParameter>
    <TABLE1 INT="400">
      <PE1 FLOAT4="23.67" FIXED2="1500" PACKED_DEC="898900" character="XYZ"/>
      <PE1 FLOAT4="24.67" FIXED2="600" PACKED_DEC="898900"/>
      <PACKED_DEC_ARRAY>409</PACKED_DEC_ARRAY>
      <PACKED_DEC_ARRAY>299</PACKED_DEC_ARRAY>
    </TABLE1>
  </inputParameter>
</update>


The Input XML Record


You can use the Query or Database adapter to receive a record in XML format that represents the data you requested using one of the valid XML operations. The following is the input that is required to receive the record:
<query metadata="true" outputFormat="xml">
  select xml from adabas:table1 limit to 1 rows
</query>

Notes:

- Using the Attunity XML Utility lets you enter the input and receive the record.
- The recommended method for executing an XML operation is to make the select operation (for example, a select Update operation), then update the output XML, and then set the operation.
- If subfields are defined in the metadata with one of the XML operations, the subfields must match their original field or the subfield will have an invalid offset.


13
Setting up Data Sources and Events with Attunity Studio
This section contains the following topics:

- Data Sources
- Events

Data Sources
Data Sources are added to Bindings in Attunity Studio. A data source driver is used to connect directly to various data sources from JDBC and ODBC applications on all platforms, and from ADO and .NET applications on Windows. You use data sources when Implementing a Data Access Solution in Attunity Connect. The following sections describe how to set up data sources in your system.

Adding Data Sources


To use a data source, you need to add it to the system and configure the data source properties. The data source is defined in the Design perspective Configuration view in Attunity Studio.

To define the connection
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machine folder and then expand the machine where you want to add the data source.
3. Open the Bindings folder for the machine where you want to add the data source.
4. Expand the binding with the data source you are working with.
5. Right-click the Data sources folder and select New Data Source. The New Data Source wizard opens.
6. Enter the following information in this window:
   - Name: Enter a name to identify the data source.
   - Type: Select the data source type that you want to use from the list. The available data sources are described in the adapter reference section.
7. Select the data source type from the Type list.
8. Click Next. The Data Source Connect String page opens.
9. Enter the information requested in this step. The information required depends on the type of data source selected. See the Data Source Reference for more information on how to configure each data source.
10. Click Finish.

Note:

The above procedure is also used to add a procedure data source. To add a procedure, select a procedure driver as your data source driver. For more information, see Configuring the Procedure Data Source.

To configure a data source
1. Right-click the data source you are working with and select Open. The data source configuration opens in the editor.
2. Click the Configuration tab.
3. Make changes to the data source configuration as required. This tab can have three sections:
   - Connection: This section lets you make changes to the information that you entered in the Connect String page of the New Data Source wizard.
   - Authentication: This section lets you edit the authentication information for data sources that require it. The following data sources have the Authentication section: Adabas (ADD), DB2 (all types), Ingres, ODBC, OLEDB (all types), Oracle, Oracle RDB, SQL Server, Sybase, and VSAM (CICS).
   - Properties: This section lets you change the default values for the configuration properties for your data source.

For information on how to enter the configuration information, see the information for your data source in the Data Source Reference.

Configuring Data Source Advanced Properties


You configure the advanced properties for a Data Source in the Advanced tab of the data source editor. The advanced settings are the same for every data source. Advanced settings let you do the following:


- Define the transaction type
- Edit the syntax name
- Provide a table owner
- Determine if a data source is updateable or readable
- Provide repository information
- Set the virtual view policy

To configure data source advanced properties
1. Open Attunity Studio.
2. In the Design Perspective Configuration View, expand the Machine folder and then expand the machine where you want to configure the data source.
3. Expand the Data sources folder, right-click the data source you are configuring, then select Open.
4. Click the Advanced tab and make the changes that you want. The table below describes the available fields:

Table 13-1 Data Source Advanced Configuration

Properties
- Transaction type: The transaction level (0PC, 1PC, or 2PC) that is applied to this data source, no matter what level the data source supports. The default is the data source's default level.
- Syntax name: A section name in the NAV.SYN file that describes SQL syntax variations. The default syntax file contains the following predefined sections:
  - OLESQL driver and the SQL Server 7 OLE DB provider (SQLOLEDB): syntaxName="OLESQL_SQLOLEDB"
  - OLESQL driver and JOLT: syntaxName="OLESQL_JOLT"
  - Rdb driver and Rdb version: syntaxName="RDBS_SYNTAX"
  - ODBC driver and EXCEL data: syntaxName="excel_data"
  - ODBC driver and SQL/MX data: syntaxName="SQLMX_SYNTAX"
  - ODBC driver and SYBASE SQL AnyWhere data: syntaxName="SQLANYS_SYNTAX"
  - Oracle driver and Oracle case-sensitive data: syntaxName="ORACLE8_SYNTAX" or syntaxName="ORACLE_SYNTAX". For case-sensitive table and column names in Oracle, use quotes (") to delimit the names. Specify the case sensitivity precisely.
- Default table owner: The name of the table owner that is used if an owner is not indicated in the SQL.
- Read/Write information: Select one of the following:
  - Updateable data: Select this if you want to be able to update the data on the data source.
  - Read only data: Select this to allow users to only view the data on the data source.

Repository Directory
- Repository directory: Enter the location for the data source repository.
- Repository name: Enter the name of a repository for a data source. The name is defined as a data source in the binding configuration. It is defined as the type Virtual and is used to store AIS views and stored procedures for the data source, if required, instead of using the default SYS data.

Virtual View Policy
- Generate sequential view: Select this to map a non-relational file to a single table. This parameter is valid only if you are using virtual array views. You configure virtual array views in the Modeling section of the binding Environment Properties.
- Generate virtual views: Select this if you want to have an individual table created for every array in the non-relational file. This parameter is valid only if you are using virtual array views.
- Include row number column: Select this to include a column that specifies the row number in the virtual or sequential view. This parameter is valid only if you are using virtual array views.
- All parent columns: Select this for virtual views to include all the columns in the parent record. This parameter is valid only if you are using virtual array views.

Testing a Data Source


You can run a test procedure from Attunity Studio to determine whether the data source connection is active. This procedure pings the server and returns the result in a Test screen.

To test a data source
1. Right-click the data source and select Test.
2. Select the active workspace from the drop-down list.
3. Click Next. The next page in the wizard opens with the results. A successful result indicates that the connection is active. If there is a problem, an error message that describes the problem is displayed.

Creating a Data Source Shortcut (Optional)


A data source shortcut points to the location of a data source that is defined in a binding on another machine. You need a data source shortcut when you access the procedure from a client machine using either the OLE DB (ADO) Client Interface or the "fat" version of the ODBC Client Interface (when the server installation is used as a client). Note that a thin ODBC client is available that does not require a data source shortcut defined on the client machine.

To create a data source shortcut
Drag-and-drop a data source from the machine where the data source is defined to the machine where you want the shortcut, or do the following:
1. In the Design perspective Configuration view, expand the machine where the data source shortcut should be defined.
2. Expand the Bindings folder and then expand the binding where you want to add the data source.
3. Right-click Data sources and select New Data Source Shortcut. The New Data Source Shortcut wizard opens.

Figure 131 New Data Source Shortcut Wizard

4. Select the machine where the target data source is defined:
   - If the machine is defined in Attunity Studio, select Machine from Configuration view and select the machine with the target data from the drop-down list.
   - If the machine is defined in the binding as a remote machine (for more information, see Setting up Machines), select Machine defined in current binding remote machines list and select the machine from the list.
   - If you want to also add the machine where the data source resides to the list of machines in the Configuration view, select New machine. After clicking Next, the Add Machine screen opens where you can add the machine (see Setting up Machines).

5. Click Next. The machine access information screen opens.


Figure 132 Machine Access Screen

6. Enter the following information in this screen:
   - Physical Address: The physical address of the machine with the data source. You cannot edit this field.
   - Alias in binding: The alias given to the machine for the binding.
   - Port: The machine's port number.
   - Workspace: Select the workspace that is used to access the data source.

7. Click Next. The runtime security information screen opens.


Figure 133 Runtime Security Information Screen

8. Add the following information about the machine security in this screen:
   - User name: Enter the user name for a user that has rights to access this data source on this machine.
   - Password: Enter the user's password.
   - Confirm password: Re-enter the user's password to confirm.
   - Encryption protocol: Select the encryption protocol (if any) used for this connection. For information on encryption in AIS, see Encrypting Network Communications.
   - Firewall protocol: Enter the firewall protocol (if any) used for the machine with the data source. For information on using a firewall in AIS, see Accessing a Server through a Firewall.

This user and password information is written in the user profile associated with the binding (the user profile with the same name as the binding name). For details about the user profiles, see Managing a User Profile in Attunity Studio.
9.

Click Next. The list of available data sources opens. Select the data source for which you are creating a shortcut and click Finish. source name is used as a data source in the binding on the client machine. The data source shortcut is displayed in the Configuration View.

10. Select the Alias in binding check box and provide an alias for the name if the data

Testing Data Source Shortcuts (Optional)


You can run a test procedure from Attunity Studio to determine whether the data source shortcut is active. This procedure pings the server and returns the result in a Test screen.

To test a data source shortcut
1. Right-click the data source shortcut and select Test. The active workspace is displayed in a read-only field. The test procedure pings the server with the selected workspace when the test is executed.
2. Click Next. The next screen in the wizard opens with the results. A successful result indicates that the connection is active. If there is a problem, an error message that describes the problem is displayed.

Events
Attunity Studio uses event queues to handle Events. The event queue is defined as an adapter in Attunity Studio where interactions are expected as events.

Adding Event Queues


The event queue adapter is defined in a binding using Attunity Studio, in the Design perspective Configuration view.

To define an event queue
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machine folder and then expand the machine where you want to add the event.
3. Expand the Bindings folder and then the binding where you want to add the event.
4. Right-click the Events folder and select New event. The New Event editor opens.
5. Enter a name for the event queue.

   Notes:
   - The name must be less than 32 characters.
   - You cannot use the word event in the event name.

6. Select the type of queue that you are using to handle events:
   - Event Queue: This will handle events using a regular event queue adapter.
   - Tuxedo Queue: This will handle events with a Tuxedo Queue adapter.
   - CICS Queue: This will handle events with a CICS Queue adapter (for z/OS systems only).
7. Click Finish. A message appears asking if you want to change the daemon configuration. Click OK to accept the changes made in this procedure, or Cancel to close and not confirm the changes. A new editor opens in the Editor section of the Studio workbench. The name of the editor is the name of the binding where you added the event queue.

After you finish adding the event queue to the Studio, follow these steps to make changes to the event queue's adapter properties.

To edit the event queue's adapter properties
1. In the Design perspective Configuration view, expand the Machine folder and then expand the machine with the event, expand the Events folder, then right-click the event and select Open. The adapter configuration properties open in the editor.

   Note: If the event editor is open, go to the next step.

2. The following properties are available for the adapter queue:
   - addExecuteTimestamp: Do not change the setting for this property.
   - remoteExecuteCompress: Do not change the setting for this property.
   - remoteExecuteUrl: Do not change the setting for this property.
   - routers: A list of users who can send or receive an event. If the owner of the target is not one of these users, the event is not routed to the user. To add routers, expand this property and right-click users. A new entry called Item(#) is added to the Property column. In the Value column, enter the user name for this router.
   - senders: A list of users who can send an event. If the owner of the event is not one of these users, the event is ignored. To add senders, expand this property and right-click users. A new entry called Item(#) is added to the Property column. In the Value column, enter the user name for this sender.

Defining Metadata for Event Queues


You use metadata to define the interactions for the event and the structures of records used by the interactions. The metadata is stored as an application adapter definition in the SYS repository, on the machine where the adapter is defined.

To define event queue adapter metadata
1. In the Design perspective Configuration view, right-click the event queue adapter where you want to edit the metadata and select Show Metadata View. The Metadata view becomes active with the event queue adapter displayed in the Metadata view.
2. Expand the event queue adapter, then right-click Imports and select New Import. The Metadata Import screen opens in the editor.
3. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
4. Select one of the following from the drop-down list as the import type:
   - Event Queue Import Manager Using COBOL Copybook Files
   - Event Queue Import Manager Using Tuxedo VIEW/FML Files
5. Click Finish. The Metadata Import wizard opens.
6. Click Add in the import wizard to add COBOL copybooks or BEA Tuxedo VIEW/FML files. The Add Resource window opens, which lets you select files from the local machine or get them from an FTP site. If the files are on another machine, you can add the FTP site.


Figure 134 Add Resource

To add an FTP site:


- Right-click My FTP Sites and select Add.
- In the Add FTP site screen, enter the server name where the COBOL copybooks reside. If you are not using anonymous access, enter a valid username and password. You can browse and transfer files required to generate the metadata. Access the machine using the username as the root directory (high-level qualifier on z/OS systems).
Note:

After you access a machine you can right-click the machine and select Change Root Directory to change the high-level qualifier.

7. From the Add Resource screen, select the files to transfer and click Finish.
Note:

You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks. Each COBOL copybook format must be the same. Therefore, if you try to import a COBOL copybook that uses the first six columns with a COBOL copybook that ignores the first six columns, you must repeat the import.

The selected files are displayed in the wizard.


Figure 135 Input Files

8. Click Next. The Apply Filters editor opens. Use this editor to apply filters, if necessary.
Note:

You can only use filters if using COBOL. If you are using Tuxedo VIEW/FML files, go to the next step.

Figure 136 Apply Filters

The following filters are available:

- COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-61 to treat COMP-6 as a COMP data type or COMP-62 to treat COMP-6 as a COMP-3 data type.
- Compiler source: The compiler vendor.
- Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
- Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
- Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
- Prefix nested columns: Prefix all nested columns with the previous level heading.
- Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
- Case sensitive: Indicates whether to consider case sensitivity.
- Find: Searches for the selected value.
- Replace with: Replaces the value entered in the Find field with another selected value.
9. Click Next to open the Add Events editor.

Figure 137 Add Events

10. Click Add to add events. You can change the default name that is specified for the event. The event mode is async-send, which means that the event queue waits for an input. You specify an input record used by the program associated with the event from the drop-down list in the Input column. This list is generated from the input files specified at the beginning of the procedure. Select a relevant record for the event.
11. Add as many events as necessary and click Next.


12. Click Next twice to open the Import Metadata screen. In this screen you generate the metadata. You can import the metadata to the mainframe machine or leave the generated metadata on the Attunity Studio machine, to be imported later.
Figure 138 Import Metadata

13. In the Deploy Metadata section, select Yes if you want to transfer the metadata to the server where the application adapter is defined.


14. Click Finish.

Note:

After you import the metadata, you can view the metadata in the Metadata tab. You can also make any adjustments to the metadata and maintain it, as necessary. For more information, see Working with Metadata in Attunity Studio.



14
Procedure Data Sources
This section includes the following topics:

- Procedure Data Sources Overview
- Configuring the Procedure Data Source

Procedure Data Sources Overview


A procedure driver enables connecting to a supported Attunity Connect procedure from JDBC, ODBC applications on all platforms and from ADO and .NET applications on Windows platforms. A procedure is a user-written DLL, which is viewed by AIS as a data source that returns a single rowset. A procedure enables the use of SQL to query the application. A procedure driver enables connecting to a supported AIS procedure, using Attunity SQL. Drivers are currently provided for the procedures listed in the following table:
Table 14-1 Available Procedures

- CICS Procedure Data Source: A program via a CICS EXCI transaction.
- Natural/CICS Procedure Data Source (z/OS): A NATURAL program via a CICS EXCI transaction.
- Procedure Data Source (Application Connector): Any legacy application. Compare with the Legacy Plug Application Adapter to access the application using JCA, XML, COM, or .NET front-end applications.

Configuring the Procedure Data Source


You configure a procedure in Attunity as follows:

- Adding Procedure Data Sources: The procedure needs to be listed in a binding configuration. The definition in the binding includes properties, set in the procedure editor Properties tab. Each procedure has its own set of specific properties.
- Defining a Shortcut to a Procedure on Another Machine: A procedure driver requires a metadata definition that describes the inputs and outputs for the procedure.


Adding Procedure Data Sources


To use a procedure data source, you need to add it to the system and configure the data source properties. The data source is defined in the Design perspective, Configuration view of Attunity Studio. You can also use the following methods:

- Using the NAV_UTIL EDIT command.
- Using the NAV_UTIL UPD_DS command.

To add a Procedure data source
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the machine where you want to add the procedure data source.
3. Expand the Bindings folder, and then expand the required binding configuration.
4. Right-click Data sources and select New Data source.
5. Enter a name for the procedure in the Name field.
6. Select the procedure type from the Type list.
7. Click Next.
8. Enter the connect string to access the procedure. For information on how to enter the connect string, see the information for the procedure data source that you are using in the Procedure Data Source Reference.
9. Click Finish.

The procedure data source is displayed in the Configuration view and the procedure editor is displayed. You can set or change the properties for procedures in the following ways:

- Using the Attunity Studio Design perspective Configuration view.
- Using the sp_config_datasource stored procedure, as follows:
nav_proc:sp_config_datasource(ds_name, <config attribute="value"/>)

You change an environment property by executing a statement with the following format:
call nav_proc:sp_config_environment('proc_name','<config att="value"/>')
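For example, for a procedure data source named myproc, the two calls take the following form (myproc and the someProperty attribute are placeholders; the actual property names for each procedure data source are listed in the Procedure Data Source Reference):

nav_proc:sp_config_datasource(myproc, <config someProperty="newValue"/>)
call nav_proc:sp_config_environment('myproc','<config someProperty="newValue"/>')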

Where proc_name is the name of the procedure in the binding configuration.

To edit a procedure data source definition
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the machine with the procedure data source you are using.
3. Expand the Bindings folder and then expand the binding with your procedure data source.
4. Expand the Data sources folder. Right-click your procedure data source and select Open.
5. Click the Configuration tab. This tab should be open by default.


6. Make changes to any of the sections in this tab, if necessary. This tab can have any of the following sections:

- Connection: This section lets you make changes to the information that you entered in the Connect String page of the New Data Source wizard. The Natural/CICS Procedure Data Source (z/OS) and the CICS Procedure Data Source require configuration in this section.
- Authentication: This section lets you edit the authentication information for data sources that require it. The Procedure Data Source (Application Connector) data source requires configuration in the Authentication section.
- Properties: This section lets you change the default values for the configuration properties for your data source. All of the procedure data sources contain properties to be configured. For information about the properties for each data source, find the data source in the Procedure Data Source Reference.

7. Click Save.

Configuring Data Source Advanced Properties


You configure the advanced properties for a Data Source in the Advanced tab of the data source editor. The advanced settings are the same for every data source. Advanced settings let you do the following:

- Define the Transaction type
- Edit the syntax name
- Provide a table owner
- Determine if a data source is updateable or readable
- Provide repository information
- Set the virtual view policy

To configure data source advanced properties
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine where you want to configure the data source.
3. Expand the Data sources folder, right-click the data source you are configuring, and then select Open.
4. Click the Advanced tab and make the changes that you want. The table below describes the available fields:
Table 142 Data Source Advanced Configuration

Transaction type: The transaction level (0PC, 1PC or 2PC) that is applied to this data source, no matter what level the data source supports. The default is the data source's default level.

Syntax name: A section name in the NAV.SYN file that describes SQL syntax variations. The default syntax file contains the following predefined sections:
- OLESQL driver and the SQL Server 7 OLE DB provider (SQLOLEDB): syntaxName="OLESQL_SQLOLEDB"
- OLESQL driver and JOLT: syntaxName="OLESQL_JOLT"
- Rdb driver and Rdb version: syntaxName="RDBS_SYNTAX"
- ODBC driver and EXCEL data: syntaxName="excel_data"
- ODBC driver and SQL/MX data: syntaxName="SQLMX_SYNTAX"
- ODBC driver and SYBASE SQL AnyWhere data: syntaxName="SQLANYS_SYNTAX"
- Oracle driver and Oracle case-sensitive data: syntaxName="ORACLE8_SYNTAX" or syntaxName="ORACLE_SYNTAX". For case-sensitive table and column names in Oracle, use quotes (") to delimit the names. Specify the case sensitivity precisely.

Default table owner: The name of the table owner that is used if an owner is not indicated in the SQL.

Read/Write information: Select one of the following:
- Updateable data: Select this if you want to be able to update the data on the data source.
- Read only data: Select this to allow users to only view the data on the data source.

Repository Directory
- Repository directory: Enter the location for the data source repository.
- Repository name: Enter the name of a repository for a data source. The name is defined as a data source in the binding configuration. It is defined as the type Virtual and is used to store AIS views and stored procedures for the data source, if required, instead of using the default SYS data.

Virtual View Policy
- Generate sequential view
- Generate virtual views: Select this to generate virtual views for arrays when importing metadata.
- Include row number column
- All parent columns


Defining a Shortcut to a Procedure on Another Machine


A shortcut is a definition of a procedure on a machine, where the definition points to the location of the procedure that is defined in a binding on another machine. You need a data source shortcut when you access the procedure from a client machine using either ADO or the "fat" version of ODBC (when the server installation is used as a client).
Note: A thin ODBC client is available that does not require a data source shortcut defined on the client machine.

To create a procedure shortcut
You can drag-and-drop a data source from the machine where the data source is defined to the machine where you want the shortcut, or do the following:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the machine where you want to create the shortcut.
3. Expand the Bindings folder and the binding where you want to add the procedure shortcut.
4. Right-click the Data sources folder and select New Data Source Shortcut. The New Data source shortcut wizard opens, as shown in the following figure:

Figure 141 The New data source shortcut screen

5. Do one of the following:
- Select the machine where the target procedure is defined. If that machine is defined in Attunity Studio, then select Machine from Configuration, and then select the machine from the list. Click Next to open the Add runtime machine access information page.
- If the machine was defined in the binding as a remote machine, then select Machine defined in current binding remote machines list, and then select the machine from the drop-down list. Click Next to open the Add runtime machine access information page.
- If you want to also add the machine where the procedure resides to the list of machines in the tree, select New machine - add the new machine. Click Next; the Add Machine dialog opens to add the machine as described in Setting up Machines. After you add the new machine information, click Next to open the Add runtime machine access information page.

6. In the Add runtime machine access information page, you can change the following information, if necessary:
- Alias in binding: Change the name of the shortcut.
- Port: Change the port where the original data source is located.
- Workspace: Change the workspace for the original data source.

7. Click Next. The Add runtime security information page is displayed. Add security information, if necessary (User name, Password, Encryption protocol, and Firewall protocol).

Note: This information is written to the user profile associated with the binding. For more information, see Managing a User Profile in Attunity Studio.

8. Click Next. The list of available data sources, including procedures, is displayed. Select the required procedure, and click Finish.

Note: If the procedure name is used as a procedure or data source in the binding on the client machine, check the Alias in binding box and provide an alias for the name.

The shortcut is now displayed in the Configuration view.

Defining the Procedure Metadata


Metadata defines the inputs and outputs for the procedure. The metadata is stored as a data source definition in an Attunity procedure repository, on the machine where the driver is defined. All metadata is either imported from relevant input files (such as COBOL copybooks) or created and maintained using the Attunity Studio Design perspective Metadata tab.
Note: Attunity metadata is independent of its origin. Therefore, any changes made to the source metadata (for example, the COBOL copybook) are not reflected in the Attunity metadata.


15
Implementing an Application Access Solution
This section contains the following:

- Overview
- Setting up AIS for Application Access
- Supported APIs
- Application Access Flow
- Defining the Application Adapter
- ACX Protocol
- Transaction Support
- Generic and Custom Adapters

Overview
An application access solution lets you connect directly to 3GL applications. An application is any enterprise software component where interactions occur within a transaction context. Applications include:

- Internally developed applications
- Enterprise packaged products, such as ERP or CRM products
- Database systems, such as relational databases or file systems

AIS uses the Attunity Applications Adapter Framework (AAF) to accomplish this. In this framework, an application adapter that directly connects to an application is used instead of a data source. The following application adapters are currently available.
Table 151 Application Adapters

CICS Application Adapter (z/OS Only): A program via a CICS EXCI transaction
COM Adapter (Windows Only): Simple COM-based applications
IMS/TM Adapter (z/OS Only): A program via an IMS/TM transaction
Legacy Plug Application Adapter: Any legacy application
Pathway Application Adapter (HP NonStop Only): A program via a Pathway transaction
Tuxedo Application Adapter (UNIX and Windows Only): A program via a Tuxedo service

Note: If the application is not directly supported by one of these adapters, Attunity Connect includes an SDK that enables you to write an adapter for the specific application. For details, see the Attunity Connect Developer SDK.

Setting up AIS for Application Access


This section describes how to prepare your system for an application access solution. To set up your system, follow these steps.

To set up an Application Access solution
1. Install the System Components in the proper locations. See Installing System Components for an explanation on what to install and the locations for installing the components.
2. Configure the system for application access. Make the configurations in Attunity Studio. For Application Access, select an adapter to use for your integration. See Configuring the System for Application Access (Using Studio) for a detailed explanation on what to configure.

Installing System Components


To use AIS for an application access solution, install the following:

- AIS on the backend: The backend of the system is where your data is stored. In an application access solution, you access an application, such as CICS, directly to get the necessary data. This is where your application adapter resides. You must install the full (thick) version of AIS on the backend. For information on how to install AIS, see the Installation Guide for the platform you are working with.
- AIS for the Application (Thin Client): The application (XML, JCA, COM or .NET) that you are working with is installed with an AIS thin client. For information on how to install the AIS thin client, see the installation guide for the platform you are working on.
- Attunity Studio: Attunity Studio can be installed on any Windows computer in your system. Attunity Studio provides a graphic interface for configuring the components in your system. For information on installing Attunity Studio, see the installation guide.


Configuring the System for Application Access (Using Studio)


You must configure the following system components to set up your system for application access:

- Machines: You must configure the machines used in the system. Make sure to configure the machine where your backend application and application adapter reside. For information on how to add and configure machines, see Setting up Machines. You can also test the machine connection in Attunity Studio.
- User Profiles: You must set up the users in your system. Setting up users is for security purposes. You can specify which users have access to various machines. For information on setting up user profiles, see User Profiles and Managing a User Profile in Attunity Studio.
- Application adapters: This is what makes this solution the application access solution. To set up your adapter, you must:
  - Prepare to use the adapter by Setting Up Adapters in Attunity Studio.
  - Create the application adapter interactions. You do this by Configuring Application Adapters that you are using for your solution.
  - Make sure that your adapter is responding correctly by carrying out the procedures in Testing Application Adapters.
  - Define the adapter metadata. Adapter metadata defines the interactions for the application adapter and the schema of any input and output records used by the interactions. The metadata is stored as an application adapter definition in the SYS repository, on the machine where the adapter is defined. See Working with Application Adapter Metadata. All metadata is imported from relevant input files (such as COBOL copybooks) and maintained using the Attunity Studio Design perspective Metadata tab.
  For more information on setting up application adapters, see Setting Up Adapters and the Adapters Reference section for the adapter you are using.

Note: Attunity metadata is independent of its origin. Therefore, any changes made to the source metadata (for example, the COBOL copybook) are not reflected in the Attunity metadata.

Supported APIs
AIS application adapters connect to supported applications using JCA, XML, COM, or .NET APIs (COM and .NET are available on Windows platforms only).

- The XML Client Interface: The XML application interface enables any application with an XML API to access applications and data. The XML application interface supports an XML-based protocol modeled after the JCA architecture. This protocol is both readable by people and programmatically easy to use. This protocol works well for web-based, internet-wide use, particularly in conjunction with XML transformation engines. The XML application interface is directly available (callable) on all supported platforms. On other platforms, it is accessible via network protocols such as TCP/IP (sockets) and HTTP.
- The JCA Client Interface: The JCA (J2EE Connectors Architecture) interface supports application servers based on J2EE (Java 2 Enterprise Edition) standards. It provides a robust, efficient, and reliable way to integrate Java applications with native applications.

- The COM Client Interface: A COM component that enables access to application adapters and services using the XML protocol. The COM component provides seamless integration with Microsoft Windows products and technologies.
- The .NET Client Interface: A .NET component, called NetACX, that enables access from any .NET-based application.

Application Access Flow


When you use adapters with AAF, you are connecting between applications. An application written with one of the Supported APIs connects to another application. Information is sent from the first application using an XML protocol called ACX (for more information, see ACX Protocol). An AIS application adapter that uses Attunity metadata is defined to talk to the second application. All required connections are sent directly in the ACX protocol; there is no need to use the query processor. The following diagram shows the Application Access solution flow.
Figure 151 Application Access Flow

Defining the Application Adapter


The central processing of application access is done by the application adapter. An adapter usually refers to the adapter binding, which is the same as a data source for SQL or data access solutions. The adapter type does the job of the data source driver in the application access world. An adapter type is compatible with a specific application, such as the CICS adapter. You can also create an adapter to work with an application developed internally. All adapters must have an adapter or metadata definition. The definition is made up of a list of interactions and a schema. The user must enter the location for an XML schema that includes the metadata for the application. The schema contains all of the record structures without the interactions. The following figure shows an example of the adapter definition. In this case, this is the adapter's metadata in XML format.
Figure 152 Example of an Adapter Definition
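A minimal sketch of such a definition, built from the elements described in Application Adapter Definition, is shown below. The adapter, interaction, and record names are invented for illustration; the exact attribute set and nesting of a real definition are those generated by Attunity Studio for your adapter type:

<adapter name="orders" description="Sample adapter definition (illustrative)"
         transactionLevelSupport="0PC" authenticationMechanism="basic-password">
   <interaction name="getOrder" description="Retrieve an order"
                mode="sync-send-receive" input="orderKey" output="orderRec" />
   <schema name="orders" version="1.0">
      <record name="orderKey">
         <field name="orderId" type="int" />
      </record>
      <record name="orderRec">
         <field name="orderId" type="int" />
         <field name="custName" type="string" nativeType="string" length="64" />
      </record>
   </schema>
</adapter>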

You define the adapter in Attunity Studio. Studio provides a graphical interface that allows you to define and import metadata definitions for adapters. For more information on how to define adapters in Attunity Studio, see Setting Up Adapters.

ACX Protocol
ACX (Attunity Connect XML Protocol) is Attunity's internal XML protocol. ACX is modeled after JCA (Sun Java's connection protocol). ACX is the wire that connects the applications and APIs with the target application metadata definitions. ACX executes its tasks with connections between applications. There are two types of connections:

- Transient connection: Transient connections are used with a single ACX request. A transient connection closes when an ACX request ends, or when the connection context changes (when a new verb is used to send a new request).
- Persistent connection: Persistent connections are used for an ongoing dialog with the back-end adapter or for pooling scenarios. Persistent connections are closed manually or when timed out. Persistent ACX connections are logical and do not rely on an active network connection (such as a connected socket) to stay active. This lets you use disconnected protocols such as HTTP as transports for ACX.


The daemon keeps track of persistent connections with a unique connection ID. If you need to send a new ACX request at a later time, you can use a previous persistent connection by using its connection ID.
Note: Support for two-phase commits may require a physical connection.

The AIS XML Utility lets you make connections between applications using the ACX protocol. For more information see Using the Attunity XML Utility. For information on using ACX, see the Adapters Reference and the XML Client Interface chapter.

Transaction Support
Application adapters may support a two-phase commit capability to the extent that transactions are implemented in the application. A two-phase commit lets a database return to its pre-transaction state if an error occurs. This type of commit is used when more than one database or resource is updated in a single transaction. In a two-phase commit, either all or none of the databases are committed. In a transaction using a two-phase commit, all changes are stored temporarily by each database. A transaction monitor issues a pre-commit command that requires an answer from each database. When all databases answer with a positive response, the final commit is issued. Applications with two-phase commit support: Attunity Connect supports the PrepareCommit and Recover API calls. Applications that support two-phase commit can participate fully in a distributed transaction. The following adapters support two-phase commit:

- CICS Application Adapter (z/OS Only)
- IMS/TM Adapter (z/OS Only)


Note: To work with two-phase commit with either of these application adapters, RRS (Transaction Management and Recoverable Resource Manager Services) must be installed. See Transaction Support.

See the reference for the specific adapter for any two-phase commit considerations.

Generic and Custom Adapters


You can use generic or custom adapters in your application access solution. Custom adapters are usually written for a set of legacy functions. The schema for a custom adapter is fixed and usually hard-coded into the adapter. In most cases backend interaction attributes are hard-coded into the adapter code with a special handler for each interaction. Generic adapters are written according to an API. The schema is provided at deployment and may vary according to the individual configuration. Backend interaction attributes are provided in the schema with a single handler for executing interactions. Examples of generic adapters include CICS Application Adapter (z/OS Only), IMS/TM Adapter (z/OS Only), Pathway Application Adapter (HP NonStop Only), and Legacy Plug Application Adapter.

Developing an Application Adapter in AIS


In addition to the generic adapters provided with AIS, you can develop your own adapter using the Adapter SDK or GAP (Generic APplication). This API defines the interfaces for adding new application adapters to AIS. The adapters framework is designed to enable application integration using XML and the J2EE Java Connector Architecture (JCA) resource manager provided with AIS.

MyApp: An Application Adapter Example


The following is an example of an adapter with one function, called Action1. The following is true about Action1:

- It receives a string and an integer parameter
- It returns a string value of up to 512 characters and an integer value

The following figure shows the schema that is generated for this application.
Figure 153 Sample Schema for MyApp
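A schema of this kind can be sketched with the elements described in Application Adapter Definition. In the sketch below, the record and field names, and the length of the input string, are illustrative only and are not necessarily the names generated for MyApp:

<adapter name="MyApp" description="MyApp application adapter (sketch)">
   <interaction name="Action1" mode="sync-send-receive"
                input="action1_input" output="action1_output" />
   <schema name="MyApp" version="1.0">
      <record name="action1_input">
         <field name="stringParam" type="string" nativeType="string" length="64" />
         <field name="intParam" type="int" />
      </record>
      <record name="action1_output">
         <field name="stringResult" type="string" nativeType="string" length="512" />
         <field name="intResult" type="int" />
      </record>
   </schema>
</adapter>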


The following is an example of the generated code for the MyApp application:
Figure 154 Code Example for MyApp

For more information on writing application adapters with the GAP SDK, see the Attunity Connect Developer SDK.


16
Setting Up Adapters
This section contains the following topics:

- Setting up Adapters Overview
- Working with Adapters

Setting up Adapters Overview


Adapters are used when you are using Application access. AIS provides a number of adapters to connect to applications. For more information, see Implementing an Application Access Solution. For a reference on the available adapters, see Adapters Reference.

Working with Adapters


Adapters are added to bindings in Attunity Studio. Adapters are used to access applications that you are integrating into your system. You use adapters when Implementing an Application Access Solution in Attunity Connect. These sections describe how to set up adapters in your system.

- Adding Application Adapters
- Configuring Application Adapters
- Testing Application Adapters

Adding Application Adapters


To use an Application Adapter, you need to first add it to the system and configure the adapter properties. The adapter is defined in the Design perspective, Configuration view.

To define the connection
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine where you want to add the adapter. If you need to add a machine to your Attunity Studio setup, see Setting up Machines.
3. Expand the Bindings folder.
4. Expand the binding where you want to add the adapter.
5. Right-click the Adapters folder and select New Adapter. The New Adapter wizard opens.

Figure 161 New Adapter Screen

6. Enter the following information in this screen:
- Name: Enter a name to identify the adapter.
- Type: Select the adapter type that you want to use from the list. The available adapters are described in the adapter reference section.
- Create event queue for the adapter: Select this check box if you want to associate an Event Queue with this adapter. For information on event queues, see Events.

Note: You cannot use the word Event as part of the adapter name.

7. Click Finish.

Configuring Application Adapters


This section describes how to use Attunity Studio to set up an adapter configuration.

To configure the adapter
1. Open Attunity Studio.
2. Expand the Machines folder.
3. Expand the Bindings folder, and then expand the binding with the adapter you are working with.
4. Expand the Adapter folder.
5. Right-click the adapter that you want to work with and select Open. The adapter configuration editor opens, displaying the properties for the adapter.
6. Configure the adapter parameters as required. The configuration properties displayed depend on the type of adapter you are working with. For an explanation of these properties, see the Adapters Reference or the Non-Application Adapters Reference to find the documentation for your adapter.

Testing Application Adapters


You can run a test procedure from Attunity Studio to determine whether the adapter is connected. This procedure pings the server and returns the result in a Test screen.

To test an adapter
1. Right-click the adapter and select Test.
2. Select the active workspace from the drop-down list. The test procedure pings the server with the selected workspace when the test is executed.
3. Click Next. The next page in the wizard opens with the results. A successful result indicates that the connection is active. If there is a problem, an error message that describes the problem is displayed.


17
Application Adapter Definition
This section includes the following topics:

- Overview
- The adapter Element
- The interaction Element
- The schema Element
- The enumeration Element
- The record Element
- The variant record Element
- The field Element

Overview
An application adapter definition includes the following information:

General Adapter Properties


This part defines various simple adapter properties such as name, type, description, time-out values, etc.

Interactions List
This part lists the Interactions offered by the adapter. Information items include the interaction name, its description and input and output record names.

Input and Output Record Schema


This part details the structure of all input and output records used by the adapter. The following diagram shows the Schemas of the adapter XML definition document:


Figure 171 Adapter Schema

The adapter Element


The adapter element is the root element of the adapter definition XML document.

The attributes of the adapter element define simple adapter properties (see below). The interaction elements under the adapter describe particular interactions. The schema element under the adapter provides the schema of all the records used within the adapter.

The following table summarizes the attributes of the adapter element:


Table 171 adapter Element Attributes

authenticationMechanism (enum; default: basic password): The type of authentication implemented by the adapter, as follows:
- none: The adapter does not handle authentication.
- basic password: The adapter implements basic username-password authentication.
Note: Kerberos authentication will be supported in future releases.

connect (string): Adapter specific connect string.

connectionPoolingSize (int): The number of connections that can be held in the connections pool simultaneously.

description (string): A description of the adapter.

maxActiveConnections (int): The maximum number of simultaneous connections an adapter may take (per process).

maxIdleTimeout (int; default: 600): The maximum time, in seconds, that an active connection can stay idle. After that time, the connection is soft-closed and placed in the connections pool or simply destroyed (depending on the pooling settings).

maxRequestSize (int): The maximum size in bytes that an XML ACX request or reply may span. Larger messages are rejected with an error.

name (string): The name of the adapter definition. (This name is normally the name of the adapter specified in the binding configuration. If this name differs from the name in the binding configuration, the binding entry must include a definition element set to the name specified here.)

operatingSystem (string): The operating system the application adapter runs under.

poolingTimeout (int; default: 120): The maximum amount of time (in seconds) that a connection is kept in the connections pool before it is destroyed.

schemaName (string): The name of the schema used to define the adapter.

transactionLevelSupport (enum; default: 1PC): The level of transaction support. Your options are:
- 0PC: No transaction support
- 1PC: Simple (single phase) transactions
- 2PC: Distributed (two phase) transactions

type (string): The name of the adapter executable.

Example 171 The adapter Element

<adapter name="calc" description="Attunity Connect Calc Schema" transactionLevelSupport="0PC" authenticationMechanism="basic-password" maxActiveConnections="0" maxIdleTimeout="600" maxRequestSize="32000">

The interaction Element


The interaction element describes a single adapter interaction. The interaction element is a child-element of the adapter element. The following table summarizes the attributes of the interaction element:
Table 172 interaction Element Attributes

description (string): A description of the interaction.

input (string): The name of the input record structure.

mode (enum): The interaction mode. Available modes are:
- sync-send-receive: The interaction sends a request and expects to receive a response.
- sync-send: The interaction sends a request and does not expect to receive a response.
- sync-receive: The interaction expects to receive a response.
- async-send-receive: The interaction sends a request and expects to receive a response that is divorced from the current interaction.
- async-send: The interaction sends a request that is divorced from the current interaction. This mode is used with events, to identify an event request.

name (string): The name of the interaction.

output (string): The name of the output record structure.

Example 172 The interaction Element

<interaction name="add" description="Add 2 numbers" mode="sync-send-receive"
             input="binput" output="output" />
<interaction name="display" description="Display msg in output stream"
             mode="sync-send-receive" input="inpmsg" output="outmsg" />

The schema Element


The schema element describes the structures used in the interactions. The schema element is a child-element of the adapter element. Only one schema is allowed per adapter. The following table summarizes the attributes of the schema element:
Table 173 schema Element Attributes

header (string): The include file generated by NAV_UTIL PROTOGEN (described in Attunity Developer SDK).

initialization header (string)

name (string): The name of the adapter.

noAlignment (boolean): Determines whether buffers are aligned or not.

version (string): The schema version.

Example 173 The schema Element

<schema name="calc" version="1.0" header="calcdefs.h">


The enumeration Element


The enumeration element defines an enumeration type for use in interaction definitions.

The record Element


The record element defines a grouping of fields. The following table summarizes the attributes of the record element:
Table 174 record Element Attributes

EntryRef (string): (Used with the COM adapter) The name of the method to be invoked within that object.

IID (string): (Used with the COM adapter) The UUID of a user defined type. Used for user defined data types only.

libIID (string): (Used with the COM adapter) The UUID of the library in which the user defined data type is defined. Used for user defined data types only.

noAlignment (boolean): Determines whether buffers are aligned or not.

name (string): The name of the record.

ObjectRef (string): (Used with the COM adapter) Either a ProgID or a UUID of the COM object that this input record refers to.

ParamCount (int): (Used with the COM adapter) The number of parameters passed to that method.

program (z/OS only) (string): (Used with the CICS adapter) The name of the program to be executed in a CICS transaction.

transaction (z/OS only) (string): (Used with the IMS/TM adapter) The IMS/TM transaction name.

transid (z/OS only) (string): (Used with the CICS adapter) The CICS transaction ID where the program will run. The TRANSID must be EXCI or a copy of this transaction.

Example 174 The record Element

<record name="binput">
   <field name="p1" type="int" />
   <field name="p2" type="int" />
</record>
<record name="output">
   <field name="result" type="int" />
</record>
<record name="inpmsg">
   <field name="m" type="string" nativeType="string" length="512" />
</record>
<record name="outmsg">
   <field name="m" type="string" nativeType="string" length="512" />
</record>
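For a CICS-oriented adapter, an input record can also carry the program and transid attributes listed in the table above. The fragment below is a sketch only; the program name and field layout are invented for illustration (the transid value follows the rule that it must be EXCI or a copy of that transaction):

<record name="custInput" program="CUSTPGM" transid="EXCI">
   <field name="custId" type="string" nativeType="string" length="8" />
</record>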


Defining Hierarchies
The variant record element can be used to define a hierarchical structure. The hierarchical definition includes a record definition for the child. The parent record includes a field record with the type name that is used to define the child.
Example 175 Variant Record

<record name="parent">
   <field name="f1" type="child" />
   <field name="f2" type="int" />
</record>
<record name="child">
   <field name="c1" type="int" />
   <field name="c2" type="string" nativeType="string" length="20" />
   <field name="c3" type="string" nativeType="string" length="20" />
</record>

The XML used to access the adapter must use the same structure as specified in the interaction definition.

The variant record Element


Variants are similar to redefine constructs in COBOL, and to union in C. The basic concept is that the same physical area in the buffer is mapped several times. The mappings can be of:

- Different nuances of the same data.
- Different usage of the same physical area in the buffer.

This section describes the common use cases of variants and how they are represented in the variant syntax. There are two types of variant:

- Variant without Selector
- Variant with Selector

Variant without Selector


Variants without selectors are used to define different cases of the variants and represent different ways of looking at the same data. At this time, only the first case of a variant without a selector appears in the XML. It is therefore highly recommended to remove all unnecessary cases.
Example 176 Variant without a Selector

<record name="VAR1">
   <field name="VAR_0" type="VAR1__VAR_0" />
</record>
<variant name="VAR1__VAR_0">
   <field name="UNNAMED_CASE_1" type="VAR1__VAR_0__UNNAMED_CASE_1" />
   <field name="PARTCD" type="VAR1__VAR_0__PARTCD" />
</variant>
<record name="VAR1__VAR_0__UNNAMED_CASE_1">
   <field name="PARTNUM" type="string" nativeType="string" size="10" />
</record>
<record name="VAR1__VAR_0__PARTCD">
   <field name="DEPTCODE" type="string" nativeType="string" size="2" />
   <field name="SUPPLYCODE" type="string" nativeType="string" size="3" />
   <field name="PARTCODE" type="string" nativeType="string" size="5" />
</record>

Variant with Selector


Different cases of the variant represent different ways in which to use the physical area in the buffer. For every record instance there is only one valid case; the others are irrelevant. Additional fields in the buffer help determine which variant case is valid for the current record.
Example 177 Variant with a Selector

<record name="ORDER">
   <field name="RECTYPE" type="string" nativeType="string" size="1" />
   <field name="VAR_1" type="ORDER__VAR_1" />
</record>
<variant name="ORDER__VAR_1" selector="RECTYPE">
   <field name="ORDER_HEADER" type="ORDER__VAR_1__ORDER_HEADER" case="H"/>
   <field name="ORDER_DETAILS" type="ORDER__VAR_1__ORDER_DETAILS" case="D"/>
</variant>
<record name="ORDER__VAR_1__ORDER_HEADER">
   <field name="ORDER_DATE" type="string" nativeType="numstr_u" size="8" />
   <field name="CUST_ID" type="string" nativeType="numstr_u" size="9" />
</record>
<record name="ORDER__VAR_1__ORDER_DETAILS">
   <field name="PART_NO" type="string" nativeType="numstr_u" size="9" />
   <field name="QUANTITY" type="int" nativeType="uint4" size="4" />
</record>

The field Element


The field element defines a single data item within a record or a variant. The following table summarizes the attributes of the field element:

Table 175 field Element Attributes

array (array): The array field that is made up of other fields.

COMtype (enum): (Used with the COM adapter) Specifies the field's data type as recognized by COM, using explicit COM enumeration values (for details, see COM Adapter (Windows Only)).

counter (int): Runtime value holding the actual number of occurrences in the array. A field can be specified as a counter field.

default (string): The default value for the field. The default value for an integer is zero (0) and for a string NULL. Specifying a default value means that the field can be omitted from the input XML. Note: If a field isn't nullable, when using the database adapter and a default value is not supplied, an error occurs.

filter: Filtering of extraneous, unwanted metadata. This attribute is for internal use only.

length (int): The size of the field including a null terminator, when the data type supports null termination (such as the cstring data type).

mechanism (string): (Used with the LegacyPlug adapter) The method by which the field is passed or received by the procedure (either byValue or byReference). When a parameter is used for both input and output, the mechanism must be the same for both the input and the output. For outer-level (non-nested) parameters, structure parameters (for the structure itself, and not structure members), and variant parameters, the default value is byReference.

name (string): The name of the field.

nativeType (string): The Attunity Connect data type for the field. Refer to Managing Metadata for a list of all supported data types. Note: When the type value is string, the nativeType value must also be specified as string.

offset (int): An absolute offset for the field in a record.

paramNum (int): (Used with the LegacyPlug adapter) The procedure argument number. 0 indicates the value is a return value. 1 indicates the value is the first argument, and so on. If paramNum is specified at the record level, it cannot be specified for any of the record members (at the field level).

precision (int): The float data type precision. Used in conjunction with scale (see below).

private (boolean): The value is hidden in the response.

reference (boolean): Used with array (see above), to identify a pointer.

required (boolean): A value is mandatory for this field.

scale (int): The float data type scale. Used in conjunction with precision (see above).

size (int): The size of the field.

type (string): The data type of the field. The following are valid: Binary, Boolean, Byte, Date, Double, Enum, Float, Int, Long, Numeric[(p[,s])], Short, String (when the type value is string, the nativeType value must also be specified as string), Time, and Timestamp.

usage (string): (Used with the COM adapter) Explains what the COM adapter is about to do with this field:
- InstanceTag: Names an object instance.
- Property: Handled as a property.
- Parameter: The field value should be passed as a parameter to/from a method.
- RetVal: The field will hold a method's return value.

value (string): Internal code representing the field, used in the C program.

Example 178 field Element

<field name="m" type="string" nativeType="string" length="512" private="true" />


Part III
Attunity Stream
This part contains the following topics:

- What is the Attunity Stream CDC Solution
- Implementing a Change Data Capture Solution
- SQL-Based CDC Methodologies
- Creating a CDC with the Solution Perspective

18
What is the Attunity Stream CDC Solution
This section includes the following topics:

- CDC Solution Overview
- The Attunity Stream CDC Architecture
- What Can Be Captured?

CDC Solution Overview


Attunity Stream captures and delivers the changes made to enterprise data sources in real-time. This enables you to move mainframe and enterprise operational data in real-time to data warehouses and data marts, improving the efficiency of ETL processes, synchronizing data sources, and enabling event-driven business activity monitoring and processing. Attunity Stream provides agents that non-intrusively monitor and capture changes to mainframe and enterprise data sources. Changes are delivered in real-time or consumed as required using standard interfaces. The Attunity Stream CDC solution provides the following capabilities:

- Capture changes to data in real-time or near-real-time for applications that demand zero latency and require the most up-to-date data. Real-time data capture guarantees that a change event is immediately available at the consumer. Near-real-time data capture involves a configurable delay before a change is available at the consumer. Using a near-real-time configuration, when there is significant capture activity, events are reported immediately. However, after the system has been idle, it may take a few seconds for the events to start flowing again.
- Enable consumers of changed data to receive changes quickly, either by asking for the changes at high frequencies (such as every few seconds), or by sending them the changes as soon as they are identified. The consumer application periodically requests changes, receiving each time a batch of records that represent all the changes that were captured since the last request cycle. Change delivery requests can be done at low or high frequencies, for example, every 15 seconds or a set number of times a day. The extracted changes are exposed to enable the consumer application to seamlessly access the change records using standard interfaces like ODBC and JDBC or XML and JCA.

Using the Attunity Stream CDC solution enables ETL (extract, transform, and load) processes to run without bringing the site computer systems down. CDC enables the movement of only changes to data while the operational systems are running, without the need for a downtime window. The CDC architecture consists of the following components:

- CDC agents, which are located on the same computer as the changes to be captured. Each agent is customized for the specific data source on the specific platform. A CDC agent provides access to the journal (or logstream), to read the journal for specific table changes. The agent maintains the last position read in the journal (the stream position or context) and starts at this point the next time it polls the journal. The context is stored in the repository where the agent adapter definition is stored. The adapter definition includes a single primary interaction which is used to access the appropriate journal and includes the list of tables to monitor for changes.
Note: In addition to this interaction, secondary interactions are defined during runtime, based on the table metadata of each table specified as part of the change data capture solution.

Depending on the agent, transactions are supported. If transactions are not used, then auto-commit is used.

- A CDC staging area: Changes are stored in a staging area, enabling a single scan of a journal to extract the details of changes to more than one table and to also enable filtering of the changes. A committed change filter is currently available and a redundant change filter (hot-spot optimization) is planned. Both of these filters are described in Tracking Changes - Auditing. The staging area is described in more detail in The Staging Area.
- CDC data sources, to enable access to the changes using industry standards such as ODBC and JDBC. A CDC data source uses Attunity metadata and holds metadata for tables marked for change data capture. The CDC data source points to either the CDC agent or the staging area to enable getting the changes, as events, using ODBC or JDBC. Additionally, the CDC data source converts the data types in the table with changes to be captured to standard ODBC data types.

- An audit of the changes that are captured. The audit is stored on disk; changes captured by the CDC agent and changes written to the staging area are both recorded.

Once a CDC solution has been defined, the changes can be pushed to the consumer, using a standard Attunity Connect event router. Setting up and using an event router is described in Attunity Connect Reference.

The Attunity Stream CDC Architecture


The Attunity Stream CDC solution operates with or without a staging area. When a staging area is used, the change data capture agent reads the journal and channels requested changes from the journal to the staging area. The change data capture agent resides on the same computer as the data source with changed data to be captured. The staging area can reside on any computer. If the consumer application is SQL-based, a change data capture data source is available to access the relevant changes written to the staging area. The change data capture data source can reside on any computer.


The CDC architecture is shown in the following figure:


Figure 181 Change Data Capture when a Staging Area is used

When a staging area is used, the change data capture agent reads the journal and channels requested changes from the journal to the staging area. The change data capture agent resides on the same machine as the data source with changed data to be captured. The staging area can reside on any machine. If the consumer application is SQL-based, a change data capture data source is available to access the relevant changes written to the staging area. The change data capture data source can reside on any machine. A context is stored both for the staging area and the agent which mark the last point where changes were captured. For details about the context, refer to The Staging Area.
Figure 182 Change Data Capture when a Staging Area is not used

When a staging area is not used, the change data capture agent reads the journal and channels requested changes from the journal either to a JCA or XML-based consumer application or, if the consumer application is SQL-based, to a change data capture data source that is made available to access the relevant changes. The change data capture agent resides on the same machine as the data source with changed data to be captured. The change data capture data source can reside on any machine. A context is stored for the agent. For details about the context, refer to The Staging Area.

The Staging Area


The staging area is an area used by Attunity Stream to store captured data from a journal. When capturing changes from more than one table, without a staging area, the journal is scanned by the agent once for each table. When a staging area is used, the journal is scanned once and changes for every required table read during that scan are passed to the staging area, where they are stored. Thus, the journal is scanned once each time it is polled. Furthermore, once the changes have been written to the staging area, processing of these changes is performed independently of the journal. Another benefit of using the staging area is when transactions are used. The changed data is not written to the change queue until the transaction is committed. Thus, if the transaction fails, there is no overhead of having to back out any processing done with the steps in the failed transaction. The staging area can be on any Windows platform running Attunity server and not necessarily on the same server as the CDC agent. Once the information has been extracted from the journal and written to the staging area, processing of changes is performed on the staging area only. Thus, the staging area should be setup to consider the network configuration and where the consumer application runs. The staging area maintains the last position read by the consumer application (the staging area context) and starts at this point the next time a request from the consumer application is received. The context is stored in the repository where the staging area is maintained. The staging area is indexed. Thus, access to the staging area for a specific stream is quick. Use of the staging area is recommended in the following situations:

- When changes to data in more than one table need to be captured.
- When transactions are used. The changed data is not written to the change queue until the transaction is committed. Thus, if the transaction fails, there is no overhead of having to back out any processing done with the steps in the failed transaction.
- When repositioning the stream position (resetting the context) is planned to be performed often.

The staging area is cleared by default every 48 hours. All events that have been in the staging area for more than 48 hours are deleted.

Handling Before and After Images


Where applicable, when setting up a CDC solution, you can specify if you want to save the before image information of changed data that is recorded in the journal as well as the after image (undo and redo records for DB2 on the z/OS platform) information. The default is that only the after images are captured. The following table shows what is captured if before images are requested:


Table 181 What is Captured

Operation    Before Image    After Image
INSERT       No              Yes
UPDATE       Yes             Yes
DELETE       Yes             No

To capture before images, the journal must be set up to include before images, as described for each type of journal.

Tracking Changes - Auditing


When using a CDC solution you can produce an audit trail of the captured data. The following audit levels are available:

- None: Indicates no auditing is performed.
- Summary (Statistics): The total number of records retrieved from the change queue, and system and error messages, are reported. In addition, header information about each record captured, such as the type of operation and the table name, is reported.
  Note: Details of the header information are provided in the CDC agent specific chapters.
- Detailed: The total number of changes retrieved is reported, along with system and error messages. In addition, header information and record information about each record captured is reported.

The audit entries can be viewed in the Attunity Studio Runtime Manager perspective, in the Event monitor. Entries include a direction, as follows:

- Entries from the CDC agent: The audit entries show what data changes were extracted from the agent by the consumer application.
- Entries from the staging area: The audit entries show what data changes were extracted from the staging area by the consumer application and what entries were written to the staging area by the agent.

Security Considerations
In general, Attunity Stream relies on the security mechanisms implemented at a site. For example, on a z/OS system using RACF to manage security, the security rules implemented in RACF are also applied to Attunity Stream CDC. Additionally, you can specify as part of the CDC setup who can access the CDC agent and staging area to extract changed data and who can write changed data to the staging area.

What Can Be Captured?


Attunity Stream includes the following data source CDC agents:

- DB2 Journal CDC on z/OS systems: The change data capture agent monitors both the archived and active DB2 journals and captures changes made to specific tables, which are written to these journals. Since transaction information is also stored, the committed change filter can be used to ensure that only committed changes are captured. For details of the committed change filter, refer to Tracking Changes - Auditing.
- DB2 Journal CDC on OS/400 platforms: The change data capture agent monitors a DB2 database journal and captures changes made to specific tables, which are written to this journal. Since transaction information is also stored, the committed change filter can be used to ensure that only committed changes are captured. For details of the committed change filter, refer to Tracking Changes - Auditing.
- DISAM on Windows platforms: The change capture agent monitors a journal for changes in DISAM tables and captures changes made to specific tables, which are written to this journal. This solution can only be used when the DISAM data is updated using Attunity Connect and not when updated directly by another program.
- IMS on z/OS systems: The change capture agent monitors a system log for changes in IMS/DB tables and captures changes made to specific tables, which are written to the logstream.
- Oracle on Windows and UNIX platforms: The change data capture agent monitors Oracle REDO log files for changes in Oracle tables from Oracle version 9iR2. The CDC solution polls the Oracle LogMiner with its archive mode to capture changes. The staging area must be used when capturing Oracle data changes.
- VSAM-Batch on z/OS systems: The change data capture agent monitors a system log for changes in VSAM tables. Transactions are handled at the program level. If a program fails, a decision to roll back must be made independently of the VSAM-Batch change data capture agent.
- VSAM-CICS on z/OS systems: The change data capture agent monitors a CICS logstream for changes in VSAM tables. When transaction information is not available, rolled-back transactions appear as a set of backed-out changes, applied to the data.
- Query-based CDC on all Attunity server platforms: A generic change capture agent that enables the capture of changes in any of the data sources supported by Attunity Connect. The query-based agent only captures changes based on changes to a specific field in the table. An initial value is specified for the field as the starting change data capture context. The query-based agent does not include the ability to specify a staging area, nor to reset the context.


19
Implementing a Change Data Capture Solution
This section contains the following topics:

- Overview
- CDC System Architecture
- Setting up AIS to Create a Change Data Capture
- CDC Adapter Definition
- CDC Streams
- Transaction Support
- Troubleshooting

Overview
The CDC solution lets you identify changes to data. This is referred to as consuming changes. The consumed changes are reported in a special log file. With CDC, the data is consumed in real time, that is, at exactly the time the INSERT, UPDATE, or DELETE operations occur in the source tables. Changes are stored in change tables. Attunity Stream uses CDC (Change Data Capture) agents to consume changes to data and make them available when needed. The following agents are supported:

- VSAM Under CICS CDC (on z/OS)
- VSAM Batch CDC (z/OS Platforms)
- IMS/DB CDC on z/OS Platforms
- Adabas CDC on z/OS Platforms
- Adabas CDC on UNIX Platforms
- Adabas CDC for OpenVMS
- DB2 CDC (z/OS)
- DB2 CDC (OS/400 Platforms)
- Enscribe CDC (HP NonStop Platforms)
- Microsoft SQL Server CDC
- Oracle CDC (on UNIX and Windows Platforms)
- Query-Based CDC Agent (for work tables)
- SQL/MP CDC on HP NonStop

Setting up AIS to Create a Change Data Capture


You set up a change data capture in Attunity Studio. This provides a single, integrated GUI where you can easily configure a Change Data Capture. The workflow required for setting up a CDC solution is:

1. Install the System Components in the proper locations. See Installing System Components for an explanation of what to install and the locations for installing the components.

2. Apply the license. Use Attunity Studio to apply the license to the machines defined in your system.

3. Configure the Change Data Capture solution. Use the Solution perspective in Attunity Studio to add a new CDC solution and define its configuration properties. This automatically configures all the necessary components for the Change Data Capture, including the data source driver and CDC agent. See Creating a CDC with the Solution Perspective for more information.

See Configuring the System for Change Data Captures (Using Studio) for additional information.

Installing System Components


To use AIS for a Change Data Capture solution, install the following:

AIS on the backend: The backend of the system is where your data is stored. In a CDC solution, you consume changes from data sources, such as Oracle, to create a log with the change data information. This is where your CDC agent resides. You must install the full (thick) version of AIS on the backend. For information on how to install AIS, see the Installation Guide for the platform you are working with.

Data Source: The data source that you are working with is installed with AIS. For more information, see the installation guide for the platform you are working on.

Attunity Studio: Attunity Studio can be installed on any Windows computer in your system. Attunity Studio provides a graphic interface for configuring the components in your system. For information on installing Attunity Studio, see the installation guide for the platform you are working on.

See also CDC System Architecture.

Configuring the System for Change Data Captures (Using Studio)


You must configure the following system components to set up your system for change data capture:

Machines: You must configure the machines used in the system. Make sure to configure the machine where your backend application and application adapter reside. For information on how to add and configure machines, see Setting up Machines. You can also test the machine connection in Attunity Studio.

User Profiles: You must set up the users in your system. Setting up users is for security purposes. You can specify which users have access to various machines. For information on setting up user profiles, see User Profiles and Managing a User Profile in Attunity Studio.


CDC Agents: You need to set up the CDC agent that you want to use. Attunity Studio lets you set up a CDC agent by Creating a CDC with the Solution Perspective. This lets you use Attunity Studio to manually set up your own CDC.

Data Sources: If you create the CDC agent manually, you need to include the data source that your CDC agent supports. To set up your data source, you must:
- Prepare to use the data source by Adding Data Sources to Attunity Studio.
- Make sure that your data source is responding correctly by carrying out the procedures in Testing a Data Source.
- For non-relational data sources, define the metadata. Data source metadata defines the table structure for your data source. All metadata is imported from relevant input files (such as COBOL copybooks) and maintained using the Attunity Studio Design perspective Metadata tab.

For more information on setting up data sources, see Data Sources and the data source Reference section for the adapter you are using.

Note:

Attunity metadata is independent of its origin. Therefore, any changes made to the source metadata (for example, the COBOL copybook) are not reflected in the Attunity metadata.

Generated CDC Components


The following are generated when you set up a CDC using the Solution perspective.

A new binding for the change data capture components on the machine where the data source with changes to be captured resides. CDC components are defined under this new binding, which is named <name of project>_ag if the agent and the staging area are on different machines. If the agent and staging area are on the same machine, the binding is called <name of project>_router.

A CDC agent, which is defined as an adapter under the CDC binding and named <name of project>_ag.

A CDC data source, which is on the same machine as the CDC agent. This data source is called <name of project>.

A staging area. This includes a binding and a data source, both called <name of project>_sa. The staging area is defined on the machine where the CDC changes are stored.

A CDC router. This includes a binding, a data source, and an adapter. The binding and the adapter are called <name of project>_router and the data source is called <name of project>_sa. The router data source is a copy of the staging area's data source.

For each of the bindings defined in the CDC Solution, a workspace with the same name is defined.

Handling Arrays Defined in the Source Data


The CDC solution stores changes in XML and does not enable capturing virtual tables. To work with these tables, they need to be flattened. By default, the data source used in Attunity Stream is defined with the arrays flattened.


When using the CDC solution, the built-in flattening mechanism enables the following:

A view for the parent table records in the non-relational structure. This view does not include reference to the related child-arrays.

A view for each child array that includes a unique key from the parent table and an array row number. This enables adding further columns from the parent table to the virtual array view.

CDC System Architecture


The CDC solution system components include software that is installed in your system. These components are described in Installing System Components. In addition, various files are created, such as the change data tables that keep track of the consumed changes. The CDC solution is usually deployed on three platforms. A platform can be one or more servers. The following figure shows the CDC system architecture.
Figure 19-1 CDC Solution System Components

This figure shows the three platforms used to deploy a CDC solution. These platforms are:

Database Platform: The database platform is where the database and the agent run. This platform can be any platform supported by AIS. The database platform is, in many cases, also a legacy application platform. This means that processing overhead on this platform should be minimized.

Staging Area Platform: The staging area platform is where the change tables are hosted. This platform also hosts the SQL-based change router, which enters data into the change tables. This platform can be any Windows or UNIX platform.

ETL Platform: The ETL platform is where the ETL tool runs. This platform can be any platform supported by AIS as well as any platform that can run any of the AIS thin clients (such as any standard Java platform). The ETL platform may be the same as the Staging Area platform depending on the ETL tool resource requirements. If the platform is strong enough, placing both the staging area and the ETL tool on the same platform can reduce network utilization and improve throughput.

CDC Adapter Definition


The CDC adapter definition is returned in a standard XML format. This format is based on the Attunity ACX Protocol. The CDC adapter definition is composed of two parts: the interactions, which are static and stored in the Attunity Studio repository, and the schema, which is dynamic and created at runtime. The following figure shows the parts of the CDC definition metadata.
Figure 19-2 CDC Definition

The figure above shows the adapter metadata as viewed in Attunity Studio. This shows the interactions only. You can view the schema by Using the Attunity XML Utility. The following figure shows how to configure the XML utility to return the full definition including the schema.
Figure 19-3 XML Utility Metadata Button


CDC Agent Metadata Definition Description


This section describes the elements in the CDC agent metadata. This metadata has two elements.

Interactions
Schema

Interactions
The following are the outbound interactions in the CDC agent metadata:

getEvents: Gets the CDC events from the agent.
setStreamPosition: Moves the stream position back to a specified position to re-read events from that point.

The following are the inbound interactions in the CDC agent metadata. These events are not callable by a client application.

eventStream
table-name
Note:

The schema records of these interactions are important for client apps because they describe the metadata of the change events.

Schema
The following are the schema elements in the CDC agent metadata:

header: This accompanies all events. It is the same for a specific CDC agent. A list of header fields is included in the CDC agent reference sections. For more information, see the Overview for a list of the CDC agents.
eventStream: An implicit union of all CDC event types. This includes all tables plus transaction events.
table-name: A record for each table, including all its fields.

CDC Streams
A Change Data Capture tracks or consumes changes in a data source using streams. A stream is a point in a database's change log that is used as a point of reference by the CDC agent. Each time you start a Change Data Capture, a new stream with a unique name is automatically created. You can have as many concurrent streams as necessary. The place where the changes are currently being consumed is called the stream position. The stream position is automatically saved in the database. You can filter a stream to consume changes for selected tables in the database or for all of the tables. The figure below shows the CDC stream.


Figure 19-4 CDC Stream

The following are the parts of the CDC stream in the figure above.

Data Source: The native data source where you are consuming changes. For example, Adabas.

Change Logger: This is the native tool that logs changes for a data source.

Change Source: This is the data source log. It contains various information depending on the data source and the log configuration. This part has a different name depending on the data source you are working with. For example, for Adabas this is called the PLOG, for DB2 it is called a journal, and for Oracle it is called the REDO logs. For more information, see the Overview for a list of the CDC agents.

Change Capture Agent: This is the Attunity CDC agent. The agent consumes the changes from the Change Source and outputs them into a standard XML format.

Transaction Support
CDC agents that support transactions can eliminate uncommitted changes from the stream service. Most CDC agents support transactions. The reference section for each CDC agent indicates whether the agent supports transactions. The following CDC agents support transactions:

DB2 and DB2/400
VSAM Batch
Adabas (all Adabas agents)
Oracle
Enscribe
SQL/MP

When a CDC agent handles a transaction it reads the change record and then:

If the record is from a new transaction, it creates an in-memory transaction record cache for the Txn-ID.
If the record is a DML record, it is added to the appropriate transaction record cache by its Txn-ID.
If there was a Rollback, it deletes the Txn-ID transaction cache.
If there was a Commit, it distributes the Txn-ID transaction records to the various change files.
If the transaction is idle for too long, it is timed out and written to a special Txn-ID.txe file.
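The following Python sketch illustrates this per-transaction caching logic. It is an illustration only, not the agent's actual code; the record layout, class and method names, and the plain text dump used for timed-out transactions are assumptions made for the example.

class TransactionCache:
    """Illustrative sketch of the transaction handling described above."""

    def __init__(self, timeout_seconds=3600):
        self.open_txns = {}   # Txn-ID -> cached DML change records
        self.last_seen = {}   # Txn-ID -> time of the last record
        self.timeout = timeout_seconds

    def on_change_record(self, record, change_files, now):
        txn_id = record["txn_id"]
        kind = record["kind"]   # 'dml', 'commit' or 'rollback'
        # A record from a new transaction creates an in-memory cache for its Txn-ID.
        cache = self.open_txns.setdefault(txn_id, [])
        self.last_seen[txn_id] = now
        if kind == "dml":
            # DML records are appended to the cache of their transaction.
            cache.append(record)
        elif kind == "rollback":
            # Rollback: the cached records of this transaction are discarded.
            self._drop(txn_id)
        elif kind == "commit":
            # Commit: the cached records are distributed to the per-table change files.
            for rec in cache:
                change_files[rec["table"]].append(rec)
            self._drop(txn_id)

    def expire_idle(self, now):
        # A transaction that is idle for too long is timed out and written
        # to a <Txn-ID>.txe file (a plain dump here, for illustration only).
        for txn_id in list(self.open_txns):
            if now - self.last_seen[txn_id] > self.timeout:
                with open("%s.txe" % txn_id, "w") as f:
                    f.write(repr(self.open_txns[txn_id]))
                self._drop(txn_id)

    def _drop(self, txn_id):
        self.open_txns.pop(txn_id, None)
        self.last_seen.pop(txn_id, None)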

For information on troubleshooting, see Troubleshooting in AIS.


Troubleshooting
If changes are not written to the journal or other native Change Source, make sure that the journal and system (such as CICS) file definitions match (for example, the same logstream number in CICS is used in the journal).

To check that changes are written in the journal:
1. From the Windows Start menu, point to Programs, Attunity and then Server Utilities, then click XML Utility. The XML Utility is displayed.
2. Select the relevant server, workspace and CDC agent adapter.
3. Click Events to open the Events listener.
4. Click Start Events in the Events listener. You can specify the event name and qualify it with an application name to provide a unique starting point for the change events.
5. Update a table with data to be captured (for example, using NAV_UTIL execute).

The captured event is displayed in the XML Utility Events listener window. If changes are written to the journal, set the Attunity server environment properties driverTrace and acxTrace in the debug section for the CDC binding. Make changes to the data source and analyze the resulting standard Attunity server logs.


20
SQL-Based CDC Methodologies
This section contains the following topics:

Overview
Configuration Parameters
SQL Access to Change Events
Reading the Change Tables
Referential Integrity Considerations
Monitoring the Change Data Capture
Error Handling
subscribeAgentLog Configuration Properties
Performance Considerations
Capacity Planning
Applying Metadata Changes
Migration from XML-based CDC

Overview
When using ETL (Extract Transform and Load) tools such as Informatica Power Center, Ascential Data Stage, BO Data Integrator, Microsoft SSIS and others, SQL based CDC can be used to retrieve incremental source table changes and apply them to the target tables. An SQL-based Change Event Router is used to read changes off a CDC agent and write these events into multiple change files (implemented as DISAM files). The following diagram shows the components related to SQL-based Change Event Router.


The SQL-based CDC change router components are displayed in the following figure:
Figure 20-1 SQL-based Change Router Components

The system has three platforms, as follows:

Database platform: This is where the database and agent run. It can be any platform supported by AIS. The database platform is, in many cases, also a legacy application platform, which means that processing overhead on this platform should be minimized.

Staging area platform: This is where the Change Tables (Change Files) are hosted. This platform also hosts the SQL-based Change Router, which enters data into the change tables. It can be any Windows or UNIX platform.

ETL platform: This is where the ETL tool runs. It can be any platform supported by AIS, as well as any platform that can run any of the AIS thin clients (for example, any standard Java platform). The ETL platform may be the same as the Staging Area platform; this depends on the ETL tool resource requirements. In general, if the platform is strong enough, then placing both the staging area and the ETL tool on the same platform can reduce network utilization and improve throughput.

Components
This section describes the various system components.

CDC Agent
The task of the CDC agent is to read the database logs (or other change source) and to transform them into a format that is easy to process. The agent does not initiate any process on its own. It serves requests for change events from various clients. In the case of SQL-based CDC, the agent's client is the SQL-based Change Router. The agent represents the changes as a stream of change events with a stream position (also known as context) pointing to the next change event to be retrieved.

On the first call in a specific application context, the agent uses the initial stream position that is specified in the agent configuration. The recommended setting for the initial stream position is Now, which indicates that the first retrieval by an application determines the time after which changes are captured.

SQL-based Change Router


The SQL-based change router is a server that reads change events off a CDC agent on the database platform and writes these events into multiple change files (DISAM files) on the staging platform. The SQL-based change router's main characteristics are:

Uses memory aggressively to cache change events before they are committed. When dealing with very large transactions, the change router writes uncommitted transaction data to a temporary sequential file that is read back and processed if a Commit is given, or discarded on Rollback.

Maintains a stream position against the agent to keep track of processed change events.

Staging Area Server


This is a standard AIS server that provides SQL access to the DISAM Change Table (Change File) set, which makes up the Staging Area. The staging area server configuration depends on how the change tables are read by the ETL tool.

Configuration Parameters
The following table describes the SQL-based change event router configuration parameters:
Table 20-1 Event Router Configuration Parameters

cdcDatasource (string): The Change Data Source. A local DISAM data source.

eliminateUncommittedChanges (Boolean; default: false): When set to true, only committed change records are moved to the Change Table (Change File). If false, all change records are moved to the change tables (in which case memory usage is minimal), hence the change table may contain rolled back data. For most agents, following the RI considerations (see Referential Integrity Considerations) results in rolled-back changes being eliminated naturally by means of compensating change records generated by the agent in case of a rollback. Consult the respective CDC agent documentation for details.

eventExpirationHours (int; default: 48): Indicates how long change records are kept in change tables within the Staging Area. After the indicated time, change records are deleted. You can set a value between 0 and 50000. A value of 0 means that the records are never deleted. A value of 1 indicates that the records are kept for one hour.

logLevel (enum; default: none): One of none, api, internalCalls, info, or debug.

maxDeletedEventsInBatch (int; default: 500): Controls how many expired change records are deleted in a single pass. This number may need to be lowered in some rare cases in order to reduce latency when a large number of change events is continuously being received.

maxOpenfiles (int; default: 200): Controls the number of physical files opened by the event router.

maxTransactionMemory (int, in Kb; default: 1000): Specifies how much memory can be stored in memory per transaction before it is off-loaded to disk. This number should be higher than the average transaction size so that the slower-than-memory disk is not used too often.

maxStagingMemory (int, in Kb; default: 100000): Specifies how much memory in total can be used for storing active transactions (ones that have not yet committed or rolled back).

sourceEventQueue (structure): Connection information to the CDC agent. The structure includes server (string), workspace (string), adapter (string), eventWait (int; default: 30), maxEventsAsBlocks (int; default: 250), reconnectWait (int; default: 15), and fixedNat (boolean; default: false).

stagingDirectory (string): Specifies the directory where the staging area change files are stored. This directory also stores off-loaded transactions as well as timed-out transactions and error files.

transactionTimeout (int, in seconds; default: 3600): Specifies how long a transaction can be active without getting new events. This parameter should be set according to the corresponding setting of the captured database. In particular, this setting must not be lower than the database's transaction time-out setting, as this may lead to the loss of transactions.

useType (enum; default: sqlBasedCdc): This parameter must be set to this value.

subscribeAgentLog (Boolean; default: False): When set to true, the change router writes the contents of the CDC agent's log into its own log. Do not set this property to true if the logLevel property is set to debug, because the large amount of information that is sent in this mode will cause performance delays.

SQL Access to Change Events


To implement incremental change data capture (as opposed to full reloads), ETL tools access changes using SQL. This section describes how the ETL tool consumes these changes.


Change Tables
For every table in the Change Data Source, a change table by the same name is maintained by Attunity Stream in a change data source (DIFDB, a DISAM data source). A change table contains the original table columns, and CDC header columns, as listed in the following table:
Table 20-2 Change Event Common Header Columns

timestamp (string): The original change timestamp.

table_name (string): The name of the table.

operation (string): beforeImage | update | insert | delete.

transactionID (string): The source-specific transaction ID of the change. For auto-commit operations, it may be empty (or 0).

context (string (32)): The change record stream position in the Staging Area. The column is defined as the primary unique index. It is a 32-byte string with the following structure: <yyyymmdd>T<hhmmss>.<nnnn><cccccc>, where:
- <yyyymmdd>T<hhmmss> is the commit processing timestamp as generated in the staging area when starting to process the Commit event.
- <nnnn> is a unique number to differentiate between transactions committed during the same second (up to 99,999 are assumed).
- <cccccc> is a counter for the change events in the transaction, making every stream position unique (up to 9,999,999 are assumed).

agent_context (string (64)): The original change record stream position from the agent (non-numeric). This column is defined as an alternate, descending unique index. It is used for the following:
1. It ensures that a change event does not appear more than once in the change table.
2. It allows scanning of a change table backwards, peeking easily at the last N change events.
3. When working with complex records, multiple records may result from a single back-end change record. This column enables the user to associate these records with the single change record.
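As a rough illustration of the context format described above, the following Python sketch builds such a stream position string; the exact digit widths of the <nnnn> and <cccccc> parts are inferred from the "up to 99,999" and "up to 9,999,999" wording and are an assumption, as are the sample values.

from datetime import datetime

def build_context(commit_time, txn_seq, event_counter):
    # <yyyymmdd>T<hhmmss>.<nnnn><cccccc> as described for the context column;
    # txn_seq differentiates transactions committed during the same second and
    # event_counter makes every change event in the transaction unique.
    return "%sT%s.%05d%07d" % (commit_time.strftime("%Y%m%d"),
                               commit_time.strftime("%H%M%S"),
                               txn_seq, event_counter)

# Hypothetical example:
print(build_context(datetime(2008, 3, 17, 14, 22, 33), 12, 42))
# prints 20080317T142233.000120000042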

In addition to the common CDC header columns, each change table may maintain auxiliary CDC header fields specific to the source data source CDC agent. Including these fields in the Change Table (Change File) is optional. The following table lists auxiliary CDC header fields for different data source types:


Table 20-3 Agent Specific Auxiliary Header Columns

DB2:
RBA, string (16): Original change record block address in the database log.

Oracle:
rowID, string (24): Row ID of affected record.

VSAM CICS:
terminalID, string (4): Terminal ID of the transaction that made the change.
taskID, string (8): Task ID of the transaction that made the change.

DB400:
journalCode, string (1)
entryType, string (2)
jobName, string (10)
userName, string (10)
jobNumber, string (6)
programName, string (10)
fileName, string (10)
libraryName, string (10)
memberName, string (10)
RRN, string (10)
userProfile, string (10)
systemName, string (8)
referentialConstraint, int4
trigger, int4
objectNameIndicator, string (1)

VSAM Batch:
jobName, string (8)
programName, string (8)
userName, string (8)
stepName, string (8)
procedureStepName, string (8)
programStartTimeStamp, string

ADABAS MF:
sequence, int4
BEFZ, string (8)
indicator, string (2)
recordType, string (2)
userID, string (16)
generalUniqueID, string (56)
fileNumber, uint2
RABN, uint4
imageType, int4
workRabChain, uint4


The STREAM_POSITION Table


The SQL-based CDC approach relies on an auxiliary table to maintain the current CDC stream position for every change table. The recommended structure of the STREAM_POSITIONS table is as follows:
Table 20-4 STREAM_POSITION Table Structure

application_name (string (64)): The application for which the stream position is kept.
table_name (string (64)): The table for which the stream position is kept.
context (string (32)): The last recorded stream position for the table and application.

This table's primary unique key is the concatenation of application_name + table_name. The use of this table is not mandatory, but it is part of the recommended use pattern of SQL-based CDC. The STREAM_POSITIONS table is created with the following definition (where the file name is changed into an actual path):
<?xml version='1.0' encoding='UTF-8'?>
<navobj>
  <table name='STREAM_POSITIONS' fileName='<staging-directory-path>STREAM_POSITIONS' organization='index'>
    <fields>
      <field name='application_name' datatype='string' size='64'/>
      <field name='table_name' datatype='string' size='64'/>
      <field name='context' datatype='string' size='32'/>
    </fields>
    <keys>
      <key name='Key0' size='128' unique='true'>
        <segments>
          <segment name='application_name'/>
          <segment name='table_name'/>
        </segments>
      </key>
    </keys>
  </table>
</navobj>

The general preference would be for this table to be stored inside the target database which will allow committing changes along with an update of the stream position under the same transaction. However, with local access, using a STREAM_POSITIONS table in the Change Data Source introduces very limited risk.

Reading the Change Tables


ETL tools query for changes in a selected Change Table (Change File). For each table and, potentially, for a specific application, the tools keep a stream position (also called context) that is maintained between runs.


Note:

Some ETL tools stop consuming changes when an EOF (End of File) is returned. To force an ETL tool to continuously consume changes, see Reading Change Tables Continuously.

The SQL query that the tools use has the following structure:
select * from change-table where context > :last-stream-position

where the last-stream-position parameter represents the last processed event as maintained by the ETL tool for the specific change table and application. While some ETL tools support this stream position concept internally, with others this requires manual implementation. This chapter specifies the manual steps required for reading change logs, leaving tool-specific information for the tool-specific documents. The general procedure for consuming change events for change table T in application A using SQL is outlined below. Note that this procedure does not address Referential Integrity constraints (see Referential Integrity Considerations).

To create a stream position
1. This is a one-time setup step aimed at creating a stream position record for T + A in the STREAM_POSITIONS table. The following SQL statement creates that record:

insert into STREAM_POSITIONS values ('A', 'T', '');

2. This step is where change data is actually read. It occurs on each ETL round.

select t.* from T t, STREAM_POSITIONS sp
where sp.application_name = 'A' and sp.table_name = 'T' and t.context > sp.context
order by t.context;

This query retrieves change records starting from just after the last handled change record. Obviously, t.* can be replaced with an explicit list of columns. What is important is that the context column must be selected, as this is the change record stream position which is required for the next step.

3. This step occurs at the end of each ETL round, once all change records were retrieved and processed. Let's assume that the value of the context column of the last change record was C. This value needs to be stored back into the STREAM_POSITIONS table for the next ETL round. This is done with:

update STREAM_POSITIONS set context = 'C' where application_name = 'A' and table_name = 'T';

This value can be stored more frequently during the ETL process as needed. The general guideline is that once change record data has been committed to the target database, the stream position should be updated as well.
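As a minimal sketch of how these steps fit together, the following Python function runs one ETL round over a single change table through ODBC. The DSN, the process_change callback, and the use of pyodbc are assumptions for the example; any SQL-capable client used by your ETL tool can play the same role.

import pyodbc  # assumption: any ODBC-capable client can be used instead

def etl_round(dsn, application, table, process_change):
    """Consume one round of changes from change table `table` for `application`."""
    conn = pyodbc.connect(dsn)
    cur = conn.cursor()

    # Read changes newer than the saved stream position (step 2).
    cur.execute(
        "select t.* from " + table + " t, STREAM_POSITIONS sp "
        "where sp.application_name = ? and sp.table_name = ? "
        "and t.context > sp.context order by t.context",
        application, table)

    last_context = None
    for row in cur:
        process_change(row)          # apply the change record to the target
        last_context = row.context   # the context column must be selected

    # Persist the last handled stream position for the next round (step 3).
    if last_context is not None:
        cur.execute(
            "update STREAM_POSITIONS set context = ? "
            "where application_name = ? and table_name = ?",
            last_context, application, table)
        conn.commit()
    conn.close()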

Reading Change Tables Continuously


There are several methods for consuming changes stored in the AIS staging area. Many ETL tools work by polling the change tables in the staging area for changes made since the last ETL run. This is executed with an SQL statement, for example:
select * from INVENTORY_CHANGES where CONTEXT > ?


In this case, the parameter value is the last value returned in the previous ETL run and is stored in an auxiliary table of a target database. This approach works well for polling cycles that are not too close (up to several times an hour). However, in cases where a near real-time SQL-based change capture is required, this approach is not adequate. The cost of starting an ETL run is often high. The cost is even greater if several change tables need to be monitored concurrently in near real time. To compensate for the higher costs a continuous CDC approach is used. In this case, a continuous query uses a special feature of the Attunity query processor that defines a query that (almost) never ends. Internally Attunity Stream polls the data by re-executing the query as needed but with the least possible overhead. For an ETL tool that uses a data access API such as ODBC, it just appears as if the query never ends. If no data is available, the ETL tool will just be blocked in a call to fetch the next row of data. When new data becomes available, the ETL continues receiving rows back from the fetch call. The continuous CDC approach requires one ETL to be continuously active for each consumed change table.

Executing a Continuous Query


The following query is used to execute continuous queries:
select CONTEXT as $$StreamPosition, * from INVENTORY_CHANGES where CONTEXT > ?

The parameter value in this statement is the last value returned in the previous ETL and stored in an auxiliary table of the target database. This is the same as the statement used for executing regular change queries as explained in Reading Change Tables Continuously. The only difference is that the CONTEXT column in this statement uses the alias '$$StreamPosition'. Once the query starts with an initial value for the parameter, the alias name instructs the query processor to read until the end of the data and then run the query again with the last CONTEXT value as the new parameter value.
Notes:

To ensure that the continuous query works correctly:

The Stream Position column must be the first column in the SELECT list, with the alias name '$$StreamPosition'.
The matching stream position parameter must be the first parameter in the WHERE clause.

Continuous queries can be invoked from ODBC, JDBC and ADO.NET interfaces.
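A minimal Python sketch of a client consuming such a continuous query is shown below; the ODBC DSN, table name and starting value are placeholders, and pyodbc stands in for whichever ODBC, JDBC or ADO.NET client your ETL tool uses. Because the query (almost) never ends, the fetch loop simply blocks until new change rows arrive.

import pyodbc  # assumption: ODBC access; JDBC and ADO.NET clients behave similarly

def consume_continuously(dsn, last_context, handle_row):
    conn = pyodbc.connect(dsn)
    cur = conn.cursor()
    # The $$StreamPosition alias turns this into a continuous query: the query
    # processor re-executes it internally with the last CONTEXT value returned
    # as the new parameter value.
    cur.execute(
        "select CONTEXT as $$StreamPosition, * from INVENTORY_CHANGES "
        "where CONTEXT > ?", last_context)
    row = cur.fetchone()
    while row is not None:   # fetch blocks while no new change data is available
        handle_row(row)
        row = cur.fetchone()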
Note:

Using continuous queries under the ADO OLE DB interface may cause problems because of the OLE DB read-ahead behavior. In this case, when a query is made, more records than the ones requested are fetched. This causes the system to continue to wait for information that is not there, because it was already fetched earlier.

Stopping and Pausing a Continuous Query


Because a continuous query does not end on its own, you must explicitly stop it when it is no longer needed. You can pause and stop the query using different values in a Control Command column. The following are the values that are used to control a continuous query:


(empty): If the column value is empty, the query processor continues to return rows as they become available.
Pause: The query processor pauses the continuous query until this value is changed. After a retry interval, the query is re-executed and the value of the Control Command column is checked again.
Stop: The query processor stops running the continuous query and returns an EOF.

The following query demonstrates the use of a control command column:


select t.context AS $$streamPosition, sp.command AS $$controlCommand, t.*, sp.*
from T t, STREAM_POSITIONS sp
where sp.application_name = 'A' and sp.table_name = 'T' and t.context >= ?
order by t.context
limit to 1000 rows;

In this example, table T is the change table and STREAM_POSITIONS is the table used to record the last position processed in the previous ETL run. The STREAM_POSITIONS table contains a column named command that gets the alias name $$ControlCommand. The STREAM_POSITIONS table must contain a row for the application and table before running this query. To stop this query while it is running continuously, run the following query from a different session:
update STREAM_POSITIONS set command = 'Stop' where application_name = 'A' and table_name = 'T'

When you use a control command column, make sure that:

The control command column is the second column in the SELECT list of the query, immediately following the stream position column.

The condition sign used in the stream position column is >= (greater than or equal to). Do not use the > (greater than) sign. This is because when no rows are returned, the control command column is also not returned. Note: If the change table is empty, there may be a problem stopping the continuous query.

The clause limit to 1000 rows is included. If the query returns a very large number of rows without re-executing, it may take a very long time to re-evaluate the control command column. By setting a limit on the number of rows returned for each query, the control command column value is re-evaluated more often. The value, 1000 rows, is used as an example. You can use a higher or lower number.

The control command value can come from another table, or it can be calculated. For example, you can set its value based on the time of day (for example, it may return a value of Pause between 2:00 PM and 4:00 PM). When a continuous query is used with a thin client (JDBC or ADO.NET), the polling takes place on the server and it may take a long time before a response is returned to the client. The server keeps monitoring the client connection, so if the client goes away, the server automatically stops the continuous query at the next poll event.


Note:

Continuous queries are subject to normal query processing rules. Make sure that the SQL that is used returns the result in the order of the stream position column. A single-segment unique index on the column and an explicit sort condition is one way to ensure this without performance loss.

Continuous Query Properties


The following environment settings are used to control the behavior of continuous queries:

continuousQueryRetryInterval
continuousQueryTimeout
continuousQueryPrefix

For more information on these properties, see Environment Properties, Query Processor.

Referential Integrity Considerations


Some related tables have Referential Integrity (RI) constraints enforced on them. For example, with OrderHeader and OrderLines, one cannot have OrderLines without an associated OrderHeader. When processing change events by table (which is how SQL-based CDC works), as opposed to by transaction, referential integrity cannot be maintained properly. For example, if all OrderHeader records are handled first and then all OrderLines records, a deleted OrderHeader may be applied long before the required delete of the associated OrderLines records. In order to reduce the potential referential integrity violation to a known time frame, after which referential integrity is restored, a somewhat different process is needed (compared with Reading the Change Tables). A special SYNC_POINTS table should be added to maintain a common sync point for use with multiple related tables. The table is defined as follows:
Table 20-5 SYNC_POINTS Table Structure

application_name (string (64)): The application for which the processing is done.
sync_name (string (64)): The name of the synchronization point.
context (string (32)): A stream position that can be safely used as an upper bound for event retrieval of all related tables.

This table's primary unique key is the concatenation of application_name + sync_name. The use of this table is not mandatory, but it is part of the recommended use pattern of SQL-based CDC. The SYNC_POINTS table is created with the following definition (where the file name is changed into an actual path):
<?xml version='1.0' encoding='UTF-8'?>
<navobj>
  <table name='SYNC_POINTS' fileName='<staging-directory-path>SYNC_POINTS' organization='index'>
    <fields>
      <field name='application_name' datatype='string' size='64'/>
      <field name='sync_name' datatype='string' size='64'/>
      <field name='context' datatype='string' size='32'/>
    </fields>
    <keys>
      <key name='Key0' size='128' unique='true'>
        <segments>
          <segment name='application_name'/>
          <segment name='sync_name'/>
        </segments>
      </key>
    </keys>
  </table>
</navobj>

The following procedure describes how to ensure that RI is regained at the end of a group of ETL rounds. It is an extension of the procedure described earlier for consuming change records. Here we assume that tables T1, T2 and T3 are related with RI constraints and that A is the application we are working under.

To create a stream position
1. This is a one-time setup step aimed at creating a stream position record for T1/T2/T3 + A in the STREAM_POSITIONS table. The following SQL statements create those records:

insert into STREAM_POSITIONS values ('A', 'T1', '');
insert into STREAM_POSITIONS values ('A', 'T2', '');
insert into STREAM_POSITIONS values ('A', 'T3', '');

2. This step is performed at the beginning of a group of ETL rounds (that is, before starting to process change events for T1, T2 and T3). The goal here is to get a shared sync point for retrieval of T1, T2 and T3. This is done by sampling the context column of the SERVICE_CONTEXT table. This value is the stream position of the last change record in the most recently committed transaction. This is done as follows:

insert into SYNC_POINTS select 'A' application_name, 'T123' sync_name, context from SERVICE_CONTEXT;

Here, T123 is the name chosen for the synchronization point of tables T1, T2, and T3.

3. This step is where change data is actually read. It occurs on each ETL round.

select t.* from T t, STREAM_POSITIONS sp, SYNC_POINTS sy
where sp.application_name = 'A' and sp.table_name = 'T'
and sy.application_name = sp.application_name and sy.sync_name = 'T123'
and t.context > sp.context and t.context <= sy.context
order by t.context;

Note that t.context <= sy.context is used because the context represents a change record to be processed, and processing should include the change record associated with sy.context, too.

This query retrieves change records starting from just after the last handled change record but stopping at a common sync point. t.* can be replaced with an explicit list of columns; however, it is important that the context column is selected, as this is the change record stream position which is required for the next step.

4. This step occurs at the end of each ETL round, once all change records were retrieved and processed for a table Ti. Let's assume that the value of the context column of the last change record was C. This value needs to be stored back into the STREAM_POSITIONS table for the next ETL round. This is done with:

update STREAM_POSITIONS set context = 'C' where application_name = 'A' and table_name = 'Ti';

This value can be stored more frequently during the ETL process as needed. The general guideline is that once change record data has been committed to the target database, the stream position should be updated as well.

Monitoring the Change Data Capture


You can monitor the Change Data Capture when it is deployed. This provides you with information about the CDC agent's status, troubleshooting, and tuning. This section contains the following topics that explain monitoring in a CDC:

Service Context Table
Monitoring the Status

Service Context Table


A control table is maintained by the event router that reports its current state and other important statistics. It can be accessed with any tool that supports SQL access. The control table is called SERVICE_CONTEXT. This table has a single row with the following columns:
Table 20-6 SERVICE_CONTEXT Table Structure

context (string (32)): The context value of the last change record in the most recently committed transaction. This value can be used to synchronize retrieval of transactions among different tables.

agent_context (string (64)): This is the agent context that the staging area would return to if it were to restart for whatever reason. The agent context value is calculated as follows:
- If there are pending uncommitted transactions, the agent_context value is the agent context of the first event of the oldest uncommitted transaction.
- If there are no pending uncommitted transactions, it is the agent context of the last event of the most recently committed transaction, prefixed with "next", indicating that on recovery the next event after that is to be processed.
The staging area maintains an internal agent_context that is more advanced than the one stored in the SERVICE_CONTEXT table. The staging area uses memory to speed up change processing, and when stopped it may revert back to an earlier agent context. The amount of extra work depends on the existence of long-running transactions.

start_time (timestamp): The time when the staging area started.

status (string (16)): Staging area status. For more information, see Monitoring the Status.

sub_status (string (64)): A second level status. For more information, see Monitoring the Status.

status_message (string (80)): Message that is returned that describes the staging area status.

status_time (timestamp): The time that the status is updated.

completed_transactions (uint4): Number of transactions processed.

active_transactions (uint4): Number of transactions in progress (in memory, not yet committed or rolled back).

timedout_transactions (uint4): Number of transactions that have timed out (were in memory for too long, declared to have timed out and written to a file).

rolledback_transactions (uint4): Number of rolled back transactions.

processed_change_events (uint4): Number of change events written out.

deleted_change_events (uint4): Number of change events deleted from the Change Table (Change File).

bytes_written (uint4): Accumulated size in bytes of change records written.

opened_files (uint4): Current number of physically opened files by the staging area.

opened_files_virtual (uint4): Current number of logically opened files by the staging area.

memory_usage (uint4): Amount of memory currently allocated for staging.

node_id (uint4): A two-digit identifier with the same value as the nodeID config property when in multi-router mode. In regular mode, this column has no value.

last_transaction_timestamp (string (26)): The time of the last transaction.

Version (uint4): The version number for the router. As of AIS Version 5.0, the value for this column is 1. This value is increased when major updates are added to the router.

errors (uint4): Total number of errors reported.

Reduced_transactions (uint4): The number of transactions reduced to disc.

compensation_records (uint4): The number of compensation records captured.
The control table is also used by the event router to persist its state for recovery purposes. This table must not be modified by the users.
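As an illustration, the sketch below reads a few of these columns through ODBC to report the router status; the DSN is a placeholder and pyodbc stands in for any SQL-capable tool or interface.

import pyodbc  # assumption: any SQL-capable tool or interface can be used

def print_router_status(dsn):
    conn = pyodbc.connect(dsn)
    cur = conn.cursor()
    # SERVICE_CONTEXT holds a single row describing the change router state.
    cur.execute(
        "select status, sub_status, status_message, status_time, "
        "completed_transactions, active_transactions, errors "
        "from SERVICE_CONTEXT")
    row = cur.fetchone()
    print("status:", row.status, "/", row.sub_status)
    print("message:", row.status_message, "at", row.status_time)
    print("completed:", row.completed_transactions,
          "active:", row.active_transactions,
          "errors:", row.errors)
    conn.close()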

Monitoring the Status


The following table describes the different statuses available for a CDC agent while it is running. The status is defined as a state in the SERVICE_CONTEXT table.


Table 20-7 SERVICE_CONTEXT Status States

State: Active; Sub State: Processing
State Details: Reads the change events. Writes the change events. Reduces the timed-out transaction to disc. Deletes any expired change events.
Description: The router is connected to the CDC agent and is processing or waiting for the change events.

State: Active; Sub State: Idle
State Details: Waits for new change events.
Description: The router's agent reaches the end of its journal and does not have any new change events.

State: error; Sub State: router.discWriteError
State Details: Detailed error text.
Description: This indicates that a change router operation involving writing to disk failed. The most common reason is not enough disk space. Other reasons such as permissions, wrong path, or locking can also cause this.

State: error; Sub State: component.error
State Details: Detailed error text.
Description: The prefix component (Agent/Router) indicates where the error happened. The error in the sub_state column identifies the error. This error type occurs in agents and routers. The following are the errors that are returned for this error type: xmlError, requestError, noActiveConnection, resourceLimit, noSuchResource, authenticationError, noSuchInteraction, noSuchConnection, notImplemented, autogenRejected, resourceNotAvailable, authorizationError, configurationError, noSuchStream, temporarilyUnavailable, dataError, interventionRequired.

State: Disconnected
State Details: Detailed error text.
Description: This indicates that the change router operation with the CDC agent failed and cannot be restored.

State: Paused; Sub State: N/A
Description: Operator manually paused the change router using the sqlrtr_pause control reset.

State: Down; Sub State: N/A
State Details: Down message (orderly shutdown or abort message).
Description: This indicates that the change router is not running.

Error Handling
Errors can occur in a Change Data Capture either on the source machine or in the staging area. Most of these errors are passive and handled automatically. The change router is an active component that reads the change events and applies them to a target. AIS lets the user determine how to handle errors that occur in the change router. The following topics describe error handling for a CDC:

Determining the Change Router Error Behavior
CONTROL_TABLE

Determining the Change Router Error Behavior


Change routers are active components that read change events over a network connection and apply them to a target. When an error occurs, the cause of the error and relevant details are logged to a log file. In addition, AIS provides a description of the error. These error descriptions are reported in the status columns of the SERVICE_CONTEXT table. See Monitoring the Status for more information about the status columns. AIS lets the user determine how to handle the error based on its description. When an error occurs, the change router holds the event until an error behavior is selected. The following are the error behaviors:

Skip: The change router tries to skip the event that caused the router error.
Retry: The change router tries to process the event again. This option lets you execute the event again after manually fixing the error.
Pause: This pauses the change router so that it does not pull changes from the agent.
Resume: This resumes the change router if it is paused.

CONTROL_TABLE
The CONTROL_TABLE is part of the staging area. This table lets you control the router by inserting control operations that are processed by the router. The router checks the table for new records. When a new record is entered the router reads it and then deletes the record. It then tries to execute the requested action if it is valid. For a description of the possible actions, see Determining the Change Router Error Behavior.


The CONTROL_TABLE has the following columns:


Table 20-8 CONTROL_TABLE Structure

action (string (32)): One of the following: skip, retry, pause, resume. For details on these actions, see Determining the Change Router Error Behavior.
node_id (string (2)): A two-digit identifier with the same value as the nodeID config property when in multi-router mode. In regular mode, this column has no value.
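For example, a pending action could be queued with a simple insert such as the sketch below; pyodbc and the DSN are placeholders, and the exact way you connect to the staging area data source depends on your configuration.

import pyodbc  # assumption: ODBC access to the staging area data source

def request_router_action(dsn, action, node_id=""):
    # Valid actions are skip, retry, pause and resume; the router reads the
    # new record, deletes it, and then tries to execute the requested action.
    conn = pyodbc.connect(dsn)
    cur = conn.cursor()
    cur.execute("insert into CONTROL_TABLE (action, node_id) values (?, ?)",
                action, node_id)
    conn.commit()
    conn.close()

# Example: ask the change router to retry the event that failed.
# request_router_action("DSN=staging_area", "retry")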

subscribeAgentLog Configuration Properties


The following describes the subscribeAgentLog configuration property:

Parameter: subscribeAgentLog
Type: Boolean
Default: False
Description: When set to true, the change router writes the contents of the CDC agent's log into its own log. Do not set this property to true if the logLevel property is set to debug, because the large amount of information that is sent in this mode will cause performance delays.

Performance Considerations
The SQL-based approach can reasonably scale in the following cases:

There are many captured tables, but changes are processed in rounds with sufficient time apart.

There is a relatively low number of captured tables, but changes are processed in a near-real-time manner.

This approach does not fit very well with many captured tables together with frequent change processing, because that means constantly running SQL queries, consuming significant resources whether or not there are any change records. The following sections provide further information about the various tuning parameters which may affect performance.

Memory Parameters
The maxTransactionMemory and maxStagingMemory parameters control the amount of memory used by the Change Router. As the change router processes transactions, it tries to keep them in memory in order to obtain the best performance. If a transaction is too large to be kept in memory (as determined in the maxTransactionMemory parameter), the transaction is reduced to a special disk file by the name <txn-id>.txn and from that point on, additional change events for this transaction are appended directly to this file.

Since disk I/O is rather slow, it is recommended to set the maxTransactionMemory parameter to as high a value as can be supported on the machine; the program must have sufficient system quota to allocate the memory and use it efficiently (without page-fault thrashing). To estimate the value for this parameter, you need to consider the biggest transaction likely to occur and calculate approximately the total size of all updated records (including some header overheads; for more information, see Capacity Planning). Very big transactions may occur as a result of a simple batch-update SQL query, and although typical production applications do not invoke such transactions, they may still occur (for example, in month-end processing). The maxTransactionMemory parameter deals with a single transaction. Since there could be multiple transactions active concurrently, there is another parameter, maxStagingMemory. This parameter determines how much memory all active transactions may consume together. This parameter value must be greater than the value of maxTransactionMemory, so that typical small transactions can still fit in memory while big transactions are reduced to disk files.

Latency
The Change Router is responsible for deleting change records whose retention period has expired. When events are being deleted, the change router does not move data from the CDC agent to the change files. This may increase the latency of change event handling. The impact of deleting old events on latency can be reduced by setting the maxDeletedEventsInBatch parameter to a lower value. Another way to reduce the impact of deleting old events is by setting the retention period wisely. For example, if the change activity peaks between 08:00 and 16:00, then setting eventExpirationHours to 3 days plus 12 hours (that is, 84 hours) would result in the deletion activity matching the peak occurring between 20:00 and 04:00.

Capacity Planning
This section describes various planning considerations, and includes the following topics:

Storage
Memory
Network
Processing

Storage
The main consideration for storage requirements is the amount of changes that needs to be cached in the Staging Area. The parameters are as follows:

D: How many days back the staging area keeps records before deleting them (the retention period).

N: The number of captured tables.

Ri: The size (in bytes) of a single record of the captured table (i=1, ..., N).

Fi: The average number of changes per day for the captured table (i=1, ..., N).

H: The header size (in bytes). This is 225 bytes for the common header fields. It should be adjusted based on the developed solution. For example, some header fields may be omitted while some agent-specific header fields may be added.

IH: The index overhead. Approximately 125 bytes.

The persistent storage requirement Sp (in bytes) can be calculated as follows:


Sp=D*SUM((Ri+H+IH)*Fi)

where the SUM is taken over i=1, ..., N.

The Change Router also uses temporary storage for processing transactions. Storage is used when an entire transaction does not fit in the memory allocated for the change router. The parameters are as follows:

CT: The number of concurrently running large transactions.
MS: The maximum record size.
MR: The maximum number of records in a large transaction.

The transient storage requirement St (in bytes) can be calculated as follows:


St=CT*(MS+H)*MR

Change records are kept in the Staging Area for a specified length of time, counted from the time the changes were added to the staging area. This behavior does not consider the original time the change occurred. For this reason, if CDC processing was halted for, say, 4 days, then when processing resumes, 4 days' worth of changes needs to be stored in the Change Table (Change File). This needs to be considered when allocating storage, since if the retention period is shorter, say 2 days, then recovering from a 4-day pause requires twice the normal storage. The minimal storage estimate S can be calculated as follows:
S=St+Sp
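As a worked illustration of these formulas, the following Python sketch computes the storage estimate for a hypothetical set of captured tables; all of the numbers are invented for the example.

def storage_estimate(days, tables, header=225, index_overhead=125,
                     concurrent_large_txns=2, max_record=2000,
                     max_records_per_txn=100000):
    """tables: list of (record_size_bytes, changes_per_day) pairs."""
    # Persistent storage: Sp = D * SUM((Ri + H + IH) * Fi)
    sp = days * sum((ri + header + index_overhead) * fi for ri, fi in tables)
    # Transient storage for large transactions: St = CT * (MS + H) * MR
    st = concurrent_large_txns * (max_record + header) * max_records_per_txn
    # Minimal storage estimate: S = St + Sp
    return sp + st

# Hypothetical example: two captured tables kept for 2 days.
print(storage_estimate(days=2, tables=[(500, 100000), (1200, 20000)]))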

Memory
Memory requirement is made up of the following parts:

Memory usage of the event router: This value is mostly affected by the maxStagingMemory event router setting. The default value for this is 100,000Kb (=100MB) and it can be increased or reduced as needed. Using a higher number will increase the throughput if the total size of active transactions reaches that number (active transaction are the concurrently running transactions that have not yet committed or rolled back).

Memory usage of the SQL servers: Here the evaluation should be made experimentally by monitoring a running system and checking how much memory is added per additional server.

For capacity planning, the sum of these two parts should be used.

Network
Network capacity and latency are reflected in the throughput of the Change Router as well as in the throughput of the SQL consumers (if they are located on a remote machine). The parameters are as follows:

AR: The average record size, in bytes.
RF: The desired number of change events per second.

A very rough estimate of network capacity NC can be calculated with the following formula:
NC=AR*RF*8*(0.5+1) (Bits per second)

Another related setting is the maximum number of events to request in a single round-trip to the CDC agent. The parameter is the config/sourceEventQueue/maxEventsAsBlock change router configuration parameter. It affects the request sizes and requires updating the environment/comm/comMaxXmlSize and the environment/comm/comMaxXmlInMemory parameters. The setting of these XML parameters can be calculated as follows:

MEAB: The maximum events as block (as defined in the configuration; the default is to request 250 events).
MAXR: The maximum size of a single XML change record (including the header part). To get a rough estimate of the maximum size, one can take the original change record data plus the header size and double it (taking into account the binary XML overhead in the extreme case).

comMaxXmlInMemory=comMaxXmlSize=MEAB*MAXR*1.2

Where 1.2 is a 20% cushion factor, for safety.
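The two formulas above can be combined into a small calculation such as the following Python sketch; the record sizes and event rate are invented for the example.

def network_capacity_bps(avg_record_bytes, events_per_second):
    # NC = AR * RF * 8 * (0.5 + 1), in bits per second
    return avg_record_bytes * events_per_second * 8 * (0.5 + 1)

def max_xml_size(max_events_as_block, max_xml_change_record_bytes):
    # comMaxXmlInMemory = comMaxXmlSize = MEAB * MAXR * 1.2 (20% cushion)
    return int(max_events_as_block * max_xml_change_record_bytes * 1.2)

# Hypothetical example: 800-byte average records at 50 change events per second,
# with the default block of 250 events and 2 KB maximum XML change records.
print(network_capacity_bps(800, 50))  # 480000.0 bits per second
print(max_xml_size(250, 2048))        # 614400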

Processing
Processing power is not usually the limiting factor in the work of the Change Router and the Staging Area. Actual CPU consumption depends on the rate of the change events processed as well as on concurrent ETL clients reading the Change Table (Change File). It is recommended to monitor CPU consumption under typical load to verify that the system functions within its optimal load.

Applying Metadata Changes


CDC projects and more generally, ETL projects, are rather sensitive to metadata changes. There are two major cases where we need to apply metadata changes:

At design-time, while building a solution, metadata may change either because of mapping changes or because of changes in the applications that work with the data. This use case has the following characteristics: Design-time change may occur frequently (less so towards the end of the development cycle). Changes may be small (such as adding a new column to a table). Change data can typically be deleted without any implications.

In production, replacing one production version with another (typically a new production version of the entire customer project).


This use case has the following characteristics:
Production changes are relatively rare.
Production changes typically contain changes to many components, because production is typically held in order to apply changes; the preference is to wait as long as possible and consolidate changes into a single upgrade process.
Production change data needs to be handled, and data loss is not an option.

Since applying metadata changes in production requires first applying those changes in the design environment, the design-time procedure is described first, followed by the additional production requirements.

Design-time Metadata Change Procedure


The procedure for changing metadata is outlined as follows:
1. Open the relevant CDC solution in Attunity Studio.
2. Disable the solution.
3. Access Solution, Implementation, and then Metadata to add or remove a captured table. To apply a metadata change in a captured table, remove and immediately add back the updated table name from the list.
4. Delete the physical DISAM change files associated with the updated or deleted captured tables.
5. Delete the SERVICE_CONTEXT file so that changes start flowing immediately.
6. Re-enable the solution.

Production Metadata Change Procedure


Here we assume that the solution has already been updated for metadata changes in the development environment. The remaining procedure for changing metadata in the production environment is as follows:
1. Perform a backup.
2. Make the application go quiet so that no new events are added.
3. Run the ETL processes until all changes have been processed off all Change Tables (Change Files).
4. Disable the solution workspaces.
5. Delete the physical DISAM change files associated with the updated or deleted captured tables.
6. Deploy the updated solution to the production environment.
7. Re-enable the solution.

Migration from XML-based CDC


Attunity Stream implements SQL-based CDC with the goal of providing improved support for classical SQL-based ETL tools. This section summarizes the features used to adapt a solution to the new SQL-based CDC implementation.


The XML mode of operation is still supported for easy use with XML-based integration. The following table summarizes some of the features that enhance the SQL-based CDC capabilities:
Table 20-9 Attunity Stream SQL-Based CDC Features

Change Table (Change File)
Regular behavior: Each captured table had a change table associated with it. The table columns included the original captured table columns plus CDC header fields.
Enhanced SQL behavior: The same change tables are available, with the following changes: two new indexes for direct access, and a few more CDC header fields. The change tables are now real tables based on DISAM files on disk, one file per captured table. The XML to binary conversion is done once, so the DISAM files store data in an efficient binary format.

Stream position management and guaranteed delivery
Regular behavior: Stream position was maintained at the agent or the Staging Area. Stream position was automatically advanced on commit, or on read when auto-commit mode was used. This approach did not offer guaranteed change delivery because one might have read a change event and failed to process it (or might have failed to get it due to network failure).
Enhanced SQL behavior: Stream position is no longer maintained by the agent or the staging area. Instead, an easy-to-use stream position is exposed as an index column of the change table (the context column).

Friendly stream position
Regular behavior: The stream position was an opaque entity whose only quality was being strictly incremental (that is, each new change record got a higher stream position, alphabetically).
Enhanced SQL behavior: The stream position looks like a timestamp; it actually is the timestamp of when the event was written out to the change file (in universal coordinated time, not local time, to avoid daylight saving time conflicts).

Guaranteed change delivery
Regular behavior: To guarantee change delivery, the ETL tool needed to maintain a stream position and to reestablish it when retrieving, using the SetStreamPosition stored procedure.
Enhanced SQL behavior: Since the ETL tool had to maintain a stream position anyway for guaranteed change delivery, there is no extra work needed in the new approach.

Referential Integrity
Regular behavior: To attain referential integrity at the end of a mapping session, one had to use the SetSyncPoint procedure.
Enhanced SQL behavior: An alternative, SQL-based approach was developed (see Referential Integrity Considerations).

Performance
Regular behavior: Change events, in their original XML form, were stored in one big index file (about 5 indices) whether they were committed or not. They were later extracted by a non-primary index scan (to achieve transaction order). Change data was stored, in most cases, as a CLOB attached to a change entry. Very little use was made of available system memory.
Enhanced SQL behavior: Memory is used intensively to improve performance. Change records are transformed into their binary representation (just like the one a COBOL copybook gives) and are kept in memory until commit time, when they are written to different change files, one per captured table. Rolled-back events simply vanish without incurring any I/O overheads.

Scalability and robustness
Regular behavior: A single server process (with a single thread) did all the reading from the CDC agent and the servicing of SQL to consumers. This resulted in poor scalability of the event consumption and the apply process; with more than a few consumers, time-outs would be quite likely. Another disadvantage of the single server process is that a communication hang (mostly due to network issues) could bring the entire solution to a standstill.
Enhanced SQL behavior: A single dedicated server process reads change records off the CDC agent and distributes them (after transactions are committed) to separate change files. A separate workspace with any number of servers is defined for consuming change records using a normal DISAM data source. This results in very good scalability and use of system resources. Having separated the Change Router and the change file access servers also means that problems communicating with the CDC agent no longer impact the reading process. One can even take down the agent and the change router server for updates, leaving the Change Table (Change File) accessible.

Monitoring
Regular behavior: The monitoring system had a significant impact on the system in terms of resource utilization (and the more change events, the bigger the impact). Very little could be told about what the system was doing at any given moment or over time.
Enhanced SQL behavior: While the existing monitoring system is still there, a special SERVICE_CONTEXT table is now provided in the same data source as the change file. Querying this table provides insight into what the change router is doing, as well as important counters (see the example query following this table). Having this information exposed simply via SQL makes it easy to integrate 3rd party management solutions. The service context does not introduce significant overhead and it can be sampled as needed to produce running statistics.

Transaction time-outs
Regular behavior: A timed-out transaction was not detectable.
Enhanced SQL behavior: Timed-out transactions are detected and written out to special .txt files. A special SERVICE_CONTEXT file counts timed-out transactions, enabling you to easily detect them.
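As a minimal illustration of the monitoring approach described in the table above, the SERVICE_CONTEXT table can be sampled with an ordinary SQL query against the staging area data source (the exact set of counter columns depends on your installation, so the sketch simply selects all columns):

-- Sample the change router status and counters; run this against the
-- staging area data source, for example through the ODBC or JDBC
-- connection reported in the Deployment Summary.
select * from SERVICE_CONTEXT;

Because this is plain SQL over the DISAM-based staging area, the same query can be scheduled from a third-party monitoring tool to build running statistics.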

Migrating an existing solution to V4.8 involves the following steps:
1. Recreate the solution in Attunity Studio V4.8.
2. Establish new capacity planning as explained above.
3. Eliminate calls to the GetStreamPosition procedure. Instead, retrieve the last known handled position from the STREAM_POSITIONS table.
4. Eliminate calls to the SetStreamPosition procedure. Instead, use the SQL WHERE clause with the context field greater than the last known handled position from the STREAM_POSITIONS table (see the example queries following this list).
5. Establish and use the STREAM_POSITIONS and SYNC_POINTS tables as explained in this document.

These steps are general guidelines. Actual changes depend on the kind of solution implemented with Attunity Stream V4.6.
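The following sketch illustrates steps 3 and 4 above. The change table name and the literal position value are hypothetical placeholders, and the exact layout of the STREAM_POSITIONS table is described elsewhere in this document:

-- Step 3: retrieve the last handled position previously saved by the ETL
-- tool (this replaces the GetStreamPosition call).
select * from STREAM_POSITIONS;

-- Step 4: read only the changes recorded after that position, using the
-- context index column of the change table (this replaces the
-- SetStreamPosition call). MY_CHANGE_TABLE and '20080301120000' are
-- illustrative values only.
select * from MY_CHANGE_TABLE where context > '20080301120000';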



21
Creating a CDC with the Solution Perspective
The Solution Perspective lets you create a Change Data Capture (CDC) operation using a series of interactive guides. This section contains the following topics:

Using the Solution Perspective
Getting Started Guide
Project Guide
Troubleshooting

Using the Solution Perspective


The Solution perspective is a series of wizard steps that guide you through the procedures needed to configure all components in a CDC solution. You use the Solution perspective to set up projects. A project contains all of the components necessary for a change data capture and the specific configuration parameters that you set. For example, you can set up a project called Adabas. This project might consume data from a specific Adabas data source and send it to your data warehouse. You may also need to consume changes from the same data source, but send the data to a different place. Using the Solution perspective, you can create two separate projects that will save the parameters and then easily use each solution where needed without configuring the system each time. To open the Solution Perspective Do one of the following:

From the menu bar, click Window|Open Perspective|Solution.
Click the Perspective button on the perspective toolbar and select Solution from the list.
The Solution perspective opens with the Getting Started view available on the left of the workbench.

For more information on perspectives, see Working with Perspectives.

Using Views in the Solution Perspective


The views on the left side of the Solution perspective contain a series of links that guide you through the creation of a project. In the Solution perspective, these views are called guides. Double-click these links to open various editors that let you


configure the parts of the project or solution you are creating. In most cases you can click on each link in the order they appear. The solution perspective guides display the following symbols in front of a link to show you what tasks should be done, and what tasks were completed.

Triangle: This indicates that there are subtasks associated with this link. When you click the link, the list expands to display the subtasks.
Asterisk (*): This indicates that you should click that link and carry out the tasks and any subtasks presented. If more than one link has an asterisk, you can carry out the marked tasks in any order.
Check mark (): This indicates that the tasks for this link and any sublink are complete. You can double click the link to edit the configuration at any time.
Exclamation mark (!): This indicates a potential validation error.

Getting Started Guide


The Getting Started view looks like the figure below.
Figure 211 Getting Started

This view has two sections. The upper section has links to open or create an AIS project or to open an existing project to edit or to finish configuring the project. These options are:

Creating a New Project
Opening an Existing Project
Opening Recent Projects

The section below the line has the link, Opening Recent Projects. This lets you access existing projects quickly.

Creating a New Project


The first step is to create a new project. The project holds all the information necessary for implementing the specific CDC solution you want to execute.


To create a new project 1. In the Getting Started guide, click the Create new project link. The Create new project screen opens.
2.

In the Project name field, enter a name for your project. The types of projects available are listed in the left pane just below.

3.

Select Change Data Capture. The CDC options available are presented in the right pane.

Figure 212 Create New Project

4.

Select a CDC type from the right pane. Your options are:

ADD-ADABAS (Mainframe): Capture data changes from ADD-ADABAS Mainframe database, with or without the use of a staging area, from any type of client.
ADD-ADABAS (Unix): Capture data changes from ADD-ADABAS Unix database, with or without the use of a staging area, from any type of client.
ADD-ADABAS (OpenVMS): Capture data changes from ADD-ADABAS OpenVMS database, with or without the use of a staging area, from any type of client.
ADABAS Mainframe: Capture data changes from ADABAS Mainframe database, with or without the use of a staging area, from any type of client.
ADABAS Unix: Capture data changes from ADABAS Unix database, with or without the use of a staging area, from any type of client.
ADABAS OpenVMS: Capture data changes from ADABAS OpenVMS database, with or without the use of a staging area, from any type of client.
DB2 (Mainframe): Capture data changes from DB2 Mainframe database, with or without the use of a staging area, from any type of client.
DB400: Capture data changes from DB400 database, with or without the use of a staging area, from any type of client.
DISAM: Capture data changes from DISAM database, with or without the use of a staging area, from any type of client.
Enscribe: Capture data changes from Enscribe database, with or without the use of a staging area, from any type of client.
IMS-DBCTL: Capture data changes from IMS-DBCTL database, with or without the use of a staging area, from any type of client.
IMS-DBDC: Capture data changes from IMS-DBDC database, with or without the use of a staging area, from any type of client.
IMS-DLI: Capture data changes from IMS-DLI database, with or without the use of a staging area, from any type of client.
MS SQL Server: Capture data changes from MS SQL Server database, with the use of a staging area, from any type of client.
Oracle: Capture data changes from an Oracle database, with or without the use of a staging area, from any type of client.
SQL/MP: Capture data changes from SQL/MP database, with or without the use of a staging area, from any type of client.
VSAM-Batch: Capture data changes from VSAM-Batch database, with or without the use of a staging area, from any type of client.
VSAM-CICS: Capture data changes from VSAM-CICS database, with or without the use of a staging area, from any type of client.
VSAM for OpenVMS: Capture data changes from VSAM database running on the OpenVMS platform, with or without the use of a staging area, from any type of client.

5.

Click Finish. The Project Guide is displayed.

Opening an Existing Project


To open an existing project 1. In the Getting Started guide, click the Open existing project link. The Open existing project screen is displayed.
2. From the Project drop-down list, select the required project.
3. Click Finish. The Project Guide with the information for that project is displayed.


Opening Recent Projects


To open a recent project In the Getting Started guide, click the link to the recent project of your choice. The project opens and the Project Guide with the information for that project is displayed.

Project Guide
The Project Guide has three parts:

Design Wizard Implementation Guide Deployment Guide

Figure 213 Project Guide

Design Wizard
The Design Wizard lets you review and modify all the basic settings of your project. With CDC Solutions, you have the flexibility of customizing your project map according to different configurations based on parameters that are entered in the wizards screens.


To configure basic Solution settings 1. Click the Design link. The Design Wizard opens. Use this wizard to enter the basic settings for your project.
Note:

The wizard screens are divided into sections. Some sections provide information only and other sections let you to enter information about the project. If you do not see any information or fields for entering information, click the triangle next to the section name to expand the section.

Figure 214 Design Wizard (Design Options)

2. In the Data Source Name field, enter a name for your data source.
3. In the Client Type combo box, select a client type. Your options are:
BizTalk
Generic SQL: SQL clients
ETL: All ETL tools that work using ODBC
Generic XML: Client applications that consume changes through XML
SQL Over Change Queue: SQL clients using the ADD-Queue mechanism
Web Logic
Note:

This selection affects the number of machines actually employed and whether or not the use of stream service is enabled as a result.


4. To use the staging area, select Use staging area. For most CDC agents, this is the default and cannot be changed.
5. Click Next. The Design Wizard's second screen is displayed. In this step you configure the machines used in your solution. Enter the information for the following machines:
Server Machine: The machine where AIS is installed
Client Machine: The local machine
Staging Area Machine: The machine where the staging area is located. This is only available if you selected the staging area check box in the first screen.

Figure 215 Design Wizard (Configure Solution Machines)

6. Enter or select a name from the drop-down list for each of the above machines.
Note:

You can enter the same name for any of the machines. This allows you to use more or fewer machines for your Solution.

7. Select the platform that is running on each machine from the drop-down list. The platforms available depend on the original data source type.
8. Click Finish. The wizard closes.
Note: Studio adds a check mark () next to the Design link to indicate that you completed this part of the configuration. However, you can click this link at any time to make changes.


Implementation Guide
The Implementation Guide is done after you complete the Design Wizard. Note that the Implementation link has an asterisk (*) next to it to indicate that you need to enter configuration information in this section. To begin the Implementation Guide Click Implement. The Implementation Guide lets you customize the parameters of your CDC Solution machines, as they were defined in the Design Wizard. The tasks in the Implementation Guide are grouped under the configuration categories below, which can be expanded or collapsed with a mouse click. The order and grouping in which they are presented may change according to Design Wizard settings and/or data source type. For example, when selecting an Oracle CDC project, after setting the configurations in the Design Wizard, the Implementation Guide is displayed as follows:
Figure 216 Implementation Guide

The actual tasks performed in the Implementation Guide vary according to data source and number of machines employed. You can set up the implementation in any order. Click on any available link with an asterisk (*). Below is a complete list of tasks.

Machine
Data Source
Metadata
CDC Service
Access Service Manager
Stream Service

Machine
The Machine link lets you set the IP address/host name and the port of the CDC or server machine. To configure server IP address/host name and port 1. Click the Machine link. The machine definition screen is displayed:
Figure 217 Machine Definition

2.

In the IP address/host name field, do one of the following:


Enter the server machine's numeric IP address.
Click the Browse button and select the host machine from the ones presented, then click Finish.


Figure 218 Select Machine

Note:

The machine you enter must be compatible with the platform designated in the Design Wizard (Configure Solution Machines) screen.

3.

Enter the port number. The default port number is 2551.

4. If you want to connect with user authentication, enter a user name and password, with confirmation, in the Authentication Information area.
5. Select the Connect via NAT with a fixed IP address checkbox if you are using Network Address Translation and want to always use a fixed IP address for this machine. For more information, see Firewall Support.

6.

Click OK.

Data Source
The Data Source link lets you define your data source. The information you enter depends on the type of CDC agent you are setting up. It can be a file name, or a path with a file name, or a full connect string, depending on the data source type. To enter the data source Enter the data source information requested, and click Next. For details on the format to be entered, see the chapter for the CDC agent you are using. For a list of agents, see the CDC Agents Reference.

Metadata
The Metadata link lets you define the metadata used for the data sources that are file systems or non-relational databases. You must first select a metadata source. After you select the source, then you must carry out one or more operations described below.
Note:

The Select Metadata Source link has an asterisk (*) next to it to indicate that you must carry out this operation first.


To select the metadata source 1. Click the Metadata link. The Create metadata definitions view is displayed.
2. Click the Select Metadata Source link.
3. Select one of the following:
Create new metadata definitions
Copy from existing metadata
Import metadata from external source

4. Click Finish. The screen closes. Click an available link (with an asterisk (*) next to it):
To create new metadata definitions
To copy definitions from existing metadata
To import metadata from an external source

To create new metadata definitions 1. Click the Customize Metadata link. The customize metadata screen is displayed.
Figure 219 Customize Metadata

2. Right-click in the fields under Customize Metadata, and select Add.
3. Enter the table name in the field presented, and click OK.
Note: You may have validation errors in the tables created, which you can correct by the end of the procedure.


4.

Right-click the table created and select Field Manipulation. The Field Manipulation screen is displayed.

Figure 2110 Field Manipulation

5. Right-click in the upper pane and select Field|Add|Field.
6. Enter the name of the field in the screen provided, and click OK. Default values are entered for the table.
7. To manipulate table information or the fields in the table, right-click the table and choose the option you want. The following options are available:
Add table: Add a table.
Field manipulation: Access the field manipulation window to customize the field definitions.
Rename: Rename a table name. This option is used especially when more than one table is generated from the COBOL with the same name.
Set data location: Set the physical location of the data file for the table.
Set table attributes: Set table attributes.
XSL manipulation: You specify an XSL transformation or JDOM document that is used to transform the table definition.

The Validation tab in the bottom half of the window displays information about what you must do to validate the tables and fields generated from the COBOL. The Log tab displays a log of what has been performed (such as renaming a table or specifying a data location).
8. Correct any remaining validation errors.
9. Click Finish to generate the metadata.


To copy definitions from existing metadata 1. Select Copy from existing metadata, and click OK.
2.

Click the Copy Existing Metadata Source link. The Copy Existing Metadata Source screen is displayed showing your local machine and with metadata compatible with the data source selected.

Figure 2111 Copy Existing Metadata Source


3. From the sources in the left pane, expand the list until you see the tables from which you want to copy metadata.
4. Using the arrow keys, bring the required tables into the right pane.
5. Once you have selected all the desired tables, click Finish.

To import metadata from an external source


Note:

When you import metadata from an external source, the data source type has an impact on which metadata files you can import. IMS imports Cobol, DBD and PSB files. All other ISAM CDC projects import Cobol files for metadata.

1. Select Import metadata from external source, and click OK.
2. Click the Import Metadata Source link. The New Import screen is displayed.


Figure 2112 New Import

3. In the Import name field, enter a name for the import operation.
4. Select an import type from the options available in the drop-down list. The items in the drop-down list differ according to the data source type selected. See the Data Source Reference for a list of available data sources.

5.

Click Finish.

To import metadata from COBOL files 1. Click the Add button to select COBOL copybook files.
2.

The Add Resource window opens, which lets you select files from the local machine or FTP the files from another machine.
Figure 2113 Add Resource Screen

3. If the files are on another machine, right-click My FTP Sites and choose Add.
4. Set the FTP data connection by entering the server name where the DBD files reside and enter a valid username and password to access the machine, unless using anonymous access.


After you access the machine, you can browse and transfer files required to generate the metadata. You access the machine using the username as the high-level qualifier. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.
5. Select the files to import and click Finish to start the transfer.
6. Repeat the procedure for the COBOL copybooks.
Note:

You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks. The format of the COBOL copybooks must be the same. For example, you cannot import a COBOL copybook that uses the first six columns with a COBOL copybook that ignores the first six columns. In this case, repeat the import.

Or, for IMS data sources, click Add in the import wizard to add a PSB file.
7.

The selected files are displayed in the wizard.


Figure 2114 Import Tables

8. Click Next.
9. Apply filters to the copybooks, if necessary.


Figure 2115 Apply Filters to Copybooks

The following COBOL filters are available:

COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-61 to treat COMP-6 as a COMP data type or COMP-62 to treat COMP-6 as a COMP-3 data type.
Compiler source: The compiler vendor.
Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
Prefix nested columns: Prefix all nested columns with the previous level heading.
Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
Case sensitive: Indicates whether to consider case sensitivity.
Find: Searches for the specified value.
Replace with: Replaces the value specified for Find with the value specified here.

The following DBD filters are available (relevant for IMS):
Ignore after column 72: Ignore columns 73 to 80 in the DBD files.
Ignore first 6 columns: Ignore the first six columns in the DBD files.
Ignore labels: Ignore labels in the DBD files.

The following PSB filters are available (relevant for IMS):

Ignore after column 72: Ignore columns 73 to 80 in the PSB files.


Ignore first 6 columns: Ignore the first six columns in the PSB files.

10. Click Next.

The Select Tables screen opens.


Figure 2116 Select Tables Screen

The import manager identifies the names of the segments in the DBD files that will be imported as tables.
11. Select the tables that you want to access (and receive the Attunity metadata) and then click Next. The Import Manipulation window is displayed. In this window, do the following:

Resolve table names, where tables with the same name are generated from different COBOL copybooks specified during the import.
Specify the physical location for the data.
Specify table attributes.
Manipulate the fields generated from the COBOL, as follows:
* Merging sequential fields into one for simple fields.
* Resolving variants by either marking a selector field or specifying that only one case of the variant is relevant.
* Adding, deleting, hiding, or renaming fields.
* Changing a data type.
* Setting a field size and scale.
* Changing the order of the fields.
* Setting a field as nullable.
* Selecting a counter field for array for fields with dimensions (arrays). You can select the counter for the array from a list of potential fields.
* Setting columnwise normalization for fields with dimensions (arrays). You can create new fields instead of the array field where the number of generated fields will be determined by the array dimension.
* Creating arrays and setting the array dimension.
Figure 2117 Import Manipulation Screen

Note:

The Validation tab in the bottom half of the window displays potential validation problems in the current metadata definition.

12. To manipulate table information or the fields in the table, right-click the table and select one of the following:
Add table: Add a table.
Field manipulation: Access the Fields manipulation window to customize the field definitions.
Rename: Rename a table name. This option is used especially when more than one table is generated from the COBOL with the same name.
Set data location: Set the physical location of the data file for the table.
Set table attributes: Set the table attributes.
XSL manipulation: Specify an XSL transformation or JDOM document that is used to transform the table definition.

13. Click Next to generate the metadata. The final window lets you import the metadata to the machine where the data source is located or leave the generated metadata on the Attunity Studio machine, to be imported later.
14. Specify that you want to transfer the metadata to the machine where the data source is located and click Finish. The metadata is imported to the machine where the data source is located.


CDC Service
The CDC Service wizard depends on the CDC agent you are working with. This section describes the two standard screens in the wizard. For additional information required, see the chapter for the specific agent you are working with. For a list of agents, see CDC Agents Reference. To configure the CDC Service 1. Click the CDC Service link. The CDC Define change capture starting point dialog box opens.
2. In this dialog box, select one of the following to determine the Change Capture starting point:
All changes recorded in the journal.
On first access to the CDC agent (immediately when a staging area is used, otherwise when a client first requests changes).
Changes recorded in the journal after a specific time.

If you choose the third option, you need to select the specific time by clicking the Set time button and selecting the time from the calendar that appears.
Figure 2118 Define Time Stamp and Set Specific Time

3. If you want, select include a capture of before-image records.
4. Click Next to open the CDC agent's configuration screen. In this screen you configure the CDC Service properties for the agent you are using. See CDC Agents Reference.


5.

Click Next, to open the CDC Service Logging screen.


Figure 2119 CDC Service Logging

6. From the combo box, select a logging level:
None
internalCalls
Info
Debug
API

7.

Click Finish.

Access Service Manager


This wizard helps you set up a daemon workspace to optimally handle client needs. To configure the Access Service Manager 1. Click the Access Service Manager link. The Setup Workspace wizard opens.


Figure 2120 Select Scenario

2. Select the scenario that best meets your site requirements:
Application Server using connection pooling
Stand-alone applications that connect and disconnect frequently
Applications that require long connections, such as reporting programs and bulk extractors

3.

Click Next. The next screen that appears depends on the type of connection you just selected.

4.

Enter the relevant connection information and click Next.


Figure 2121 Connection Waiting Time

5. In the first field, enter the amount of time you want to wait for a new connection to be established (in seconds).
6. In the second field, enter the amount of time to wait for a response, assuming a fast connection (in seconds).
7. Click Next.
Figure 2122 Site Security


8. Enter the operating system account (user name) used to start server instances.
9. Select allow anonymous users to connect via this workspace, if you want to allow this option.
10. Enter the permissions for the workspace. You can allow all users to access the workspace, or select the users/groups that you want to have exclusive access.
11. Select access server instances via specific ports, if you want to allow this option. If this option is cleared, the defaults are used. If you select this option, indicate the from port and to port and make sure that you reserve these ports in the TCP/IP system settings.
12. Click Next.

The summary screen opens.


Figure 2123 Workspace Setup Summary

13. Click the Save icon and then click Finish.

Stream Service
The Stream Service configures the following:
Staging area
Filtering of changed columns
Auditing information


Note:

Null filtering is currently unsupported. Filtering empty values is supported. Space values are truncated and are handled as empty values.


To configure the Stream Service 1. Click the Stream Service link. The Stream Service wizard opens.
Figure 2124 Staging Area

Note:

This screen will only appear if you selected the inclusion of a staging area in your solution.

2. Select Eliminate uncommitted changes to eliminate uncommitted changes from your CDC project.
3. Select the use secured connection checkbox to configure the staging area to have a secured connection to the server. This is available only if you logged into the server using user name and password authentication.
4. Set the event expiration time in hours.
5. Under File Locations, click the Browse buttons to select the location of the changed files, and temporary staging files, if necessary.
6. Click Next. You are prompted to select the tables to participate in the filtering process.


Figure 2125 Select Tables


7. Click the required tables in the left pane and move them to the right pane using the arrow keys.
8. Click Next. You are prompted to select the relevant columns within the tables selected above from which to receive changes.


Figure 2126 Table and Column Selection


9.

Select the columns from which to receive changes.


Note:

Table headers will appear grouped together in a separate table at the beginning of the list. You can also request the receipt of changes in the headers columns.

Any data changes in the columns selected will be recorded.


10. Click Next.

You are prompted to indicate the types of changes you want to receive in the tables and which columns to display.


Figure 2127 Filter Selection


11. Select the actions from which you want to receive change information. Your options are:
Update
Insert
Delete


Note:

These items are all selected by default.

12. Under the Changed Columns Filter column, select the columns for which you want to receive notification of changes.


Notes:

If you do not select them, you will receive notification of all changes. If you select only one, you will receive change information only if the field selected undergoes a change. If you select more than one, but not all, then you will receive change information only if any or all of the selected fields undergo a change


13. To filter content from within a given column, under the Content Filter column, double-click the relevant column, and then once again on the ellipsis button that appears. The Content Filter screen is displayed.
Figure 2128 Content Filter


14. Select a filter type:
Select In for events to be returned where the relevant column value equals the values you specify (if a column is NULL, it is not captured).
Select Not In for events to be returned where the column value is not in the values you specify (if the column is NULL, it is captured).
Select Between for when the column value is between the two values you specify (if a column is NULL, it is not captured).

15. Click Add in the lower-left corner of the Content Filter screen.

Note:

If you select more than one condition, you will receive the change information as long as one of the conditions is true.

16. Depending on your selection, do one of the following:
If you selected In/Not In, continue with step 17.
If you selected Between, continue with step 20.

17. Click Add in the Add items to the list screen.


Figure 2129 Add Items (In or Not In)

You are prompted to enter a value.


Figure 2130 Enter a Value

18. Enter a value for events to be returned where the relevant column value appears (or does not appear) in that value. To filter empty values for the Not In filter type, leave this field blank.
19. Repeat steps 17 and 18 as many times as necessary, and then proceed to step 22.
20. Click Add in the Add items to list screen.

The Add between values screen is displayed.


Figure 2131 Add Items (Between)

21. Enter values for events to be returned where the column value is between the two values you specify.


22. In the Content Filter screen, click Next.

You are prompted to select the level of auditing when receiving changes.


Figure 2132 Audit Level

23. Select the required level of auditing when receiving changes. Your options are:
None: For no changes.
Summary: For an audit that includes the total number of records delivered, as well as system and error messages.
Headers: For an audit that includes the total number of records delivered, system and error messages, and the record headers for each captured record.
Detailed: For an audit that includes the total number of records delivered, system and error messages, the record headers for each captured record, and the content of the records.

24. Click Finish.

When you complete all the Implementation operations, a check mark () is displayed next to every link. Click Done to return to the Project Guide so you can begin the Deployment activities.

Deployment Guide
After you complete the design and implementation guides, you are ready to deploy your project. The Deployment Guide has two sections:

Deployment Procedure: This section is used to deploy the project.
Control: This section is used to activate or deactivate workspaces after the project is deployed and you are ready to consume changes. In this section, you can deactivate the workspace anytime you want to suspend consumption of changes from the staging area.
Figure 2133 Ready to Deploy

To deploy the project 1. Click the Deploy link. The Deployment Procedure and Control open.


Figure 2134 Deployment Procedure and Control

2.

Click the Deploy link. Studio processes the naming information. This may take a few minutes. If there are naming collisions, you are prompted to allow Studio to resolve them.

Figure 2135 Resolve Naming Collision

3.

Click Yes to resolve any naming collisions. The Deployment Guide is displayed.


Figure 2136 Deployment Guide

4.

If you are ready to deploy, click Finish. Otherwise, click Cancel and you can return to the Design Wizard and the Implementation Guide to modify the solution. If this project was deployed previously, you will be notified that re-deployment will override the previous instance.

Figure 2137 Previously Deployed Project

Notes:

When you redeploy a project where the metadata is changed, the Staging Area (SA) tables should be deleted so that no incorrect information is reported.
When you redeploy a solution, a new binding is created for the solution. The new binding is created with the default parameters only. Any temporary features that were added are lost.

5. Where applicable, click OK to redeploy.
6. Click the Deployment Summary link.


The Deployment Summary is shown. It includes the ODBC Connection String Parameters and JDBC Connection String, as well as specific logger scripts to enable CDC capturing.
Figure 2138 Deployment Summary


7. Cut and paste any information required from the Deployment Summary screen to your environment as necessary.
8. If there is nothing wrong with your deployment results, click Finish. If you found problems, click Cancel to return to the Design Wizard and the Implementation Guide to modify the solution.
Note: If you are redeploying a solution, you must follow these directions to ensure that the context and agent_context fields of the SERVICE_CONTEXT table are saved (the SQL statements are consolidated in the sketch after this note):
1. In the staging area data source run: select context, agent_context from SERVICE_CONTEXT; and save the returned values.
2. Delete the SERVICE_CONTEXT table physical files.
3. Redeploy the solution.
4. Activate the router to create the SERVICE_CONTEXT table.
5. Disable the router.
6. In the staging area data source run: insert into SERVICE_CONTEXT (context, agent_context) values('XXX', 'YYY'). This inserts the saved values into the SERVICE_CONTEXT table.
7. Activate the solution.
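A minimal SQL sketch consolidating steps 1 and 6 of the note above; the 'XXX' and 'YYY' placeholders stand for the context and agent_context values returned by the first query, and the remaining steps are performed in Attunity Studio:

-- Before redeploying: save the current positions (step 1).
select context, agent_context from SERVICE_CONTEXT;

-- After redeploying, activating the router once to recreate the table and
-- disabling it again (steps 2 through 5): restore the saved values (step 6).
insert into SERVICE_CONTEXT (context, agent_context) values ('XXX', 'YYY');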


To activate and deactivate workspaces To activate workspaces, click the Activate Workspaces link.

To deactivate workspaces, click the Deactivate Workspaces link. Over the course of the activation/deactivation, you may receive messages indicating that the daemon settings on one or more of the machines involved in your solution have changed. Click Yes to proceed.
Figure 2139 Daemon Configuration Change

Troubleshooting
When you deploy your project, any problems and/or messages appear in their respective views.
Figure 2140 Deployment Message View

To resolve deployment problems 1. Right-click the problem or message and select Go To. The point in the previous guides where the problem occurred is displayed.
2. Check the settings and make any changes required.
3. Follow the procedure To deploy the project.



Part IV
Attunity Federate
This part contains the following topics:
What is Attunity Federate
Using a Virtual Database
Segmented Data Sources

22
What is Attunity Federate
This chapter has the following sections:
Overview
The Data Engine
Base Services

Overview
Attunity Federate provides Enterprise Information Integration (EII) across heterogeneous data sources. Using Attunity Federate, you can create single views of business information, making it easier for business users to access information in multiple data silos with virtual data models. You can use Attunity Federate