Connector
User Guide
Note
Before using this information and the product it supports, read the information in
“Notices” on page 64.
This edition applies to version 3, release 4 of IBM® TRIRIGA® Application Platform and to all subsequent releases and
modifications until otherwise indicated in new editions.
© Copyright International Business Machines Corporation 2011, 2014. All rights reserved.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
Contents
Intended Audience............................................................................................................................................................. 1
Prerequisites....................................................................................................................................................................... 1
Support............................................................................................................................................................................... 1
Schemes ............................................................................................................................................................................ 21
Database Scheme........................................................................................................................................................... 21
File Scheme ................................................................................................................................................................... 24
File to DC Scheme......................................................................................................................................................... 25
Http Post Scheme .......................................................................................................................................................... 29
Additional Resources....................................................................................................................................................... 49
Upgrading the TRIRIGA Integration Object................................................................................................................. 49
Object Glossary ............................................................................................................................................................. 50
Standard Workflows...................................................................................................................................................... 51
Standard Queries ........................................................................................................................................................... 53
Standard Lists ................................................................................................................................................................ 54
Determining the Integration Version ............................................................................................................................. 54
User Guides ................................................................................................................................................................... 54
INDEX................................................................................................................................ 63
NOTICES........................................................................................................................... 64
Privacy Policy Considerations........................................................................................................................................ 66
Trademarks...................................................................................................................................................................... 66
About This Guide
This user guide describes the procedures for implementing IBM TRIRIGA connector
products.
Conventions
This document uses the following conventions to ensure that it is as easy to read and
understand as possible:
Note – A Note provides important information that you should know in addition to the
standard details. Often, notes are used to make you aware of the results of actions.
Tip – A Tip adds insightful information that may help you use the system better.
Intended Audience
This document is intended for users who are implementing one of the IBM TRIRIGA
connector products.
Prerequisites
This guide assumes that the reader has a basic understanding of the IBM TRIRIGA
Application Platform and the fundamental concepts required to operate the
web-based IBM TRIRIGA system.
Support
IBM Software Support provides assistance with product defects, answering FAQs, and
performing rediscovery. View the IBM Software Support site at
www.ibm.com/support.
Tools and data that enable the GIS feature come from both IBM TRIRIGA and Esri. The
breakdown is as follows:

IBM TRIRIGA provides:
- An initial set of building data used to query the Esri ArcGIS server.
- A ClassLoader object that contains the logic to render the Esri JavaScript viewer.
  For more information about class loaders, see Custom ClassLoader in Appendix A.
- Tools to configure the basemaps, layers, spatial references, widgets, and queries
  used in the rendered GIS Map areas within IBM TRIRIGA.

Esri provides:
- The geographical and geospatial data (Data Services). The IBM TRIRIGA system
  obtains this data through REST API services on Esri servers.
- The actual map view. The ArcGIS server, whether it offers the services online or
  hosts them on a proprietary server, renders the maps and handles your
  geoprocessing.
- Geoprocessing for drive time or distance radii. Geoprocessing is provided by the
  ArcGIS server.
- Geocoding for gathering the latitude and longitude coordinates of addressable
  objects or identifiers that represent the features. This is provided by the ArcGIS
  server.
- The Esri JavaScript API. It renders the viewer to provide data from the server and
  to provide basic interaction with the map data. The standard configuration is
  defined in the IBM TRIRIGA GIS Map object.
A GIS map can be displayed in two areas in IBM TRIRIGA applications: as a portal
section or in a tab. The same full-featured functionality is available each time a map
is displayed.
The versions of the IBM TRIRIGA Application Platform and the IBM TRIRIGA
applications that support the GIS features described in this chapter are defined in the
IBM TRIRIGA Application Platform Compatibility Matrix, which can be found at the
following link:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/IBM+TRIRIGA1/page/Support+Matrix.
The URL section names of the internal web pages are as follows:
triURL - GIS - Environmental Manager/Planner (US Govt)
triURL - GIS - Environmental Manager/Planner
When a user signs in to the application, the URL is called to load the web page that
contains the map. Based on the map defined in the URL parameter, a list of
executable reports is returned. The first report in that list runs, and the results are
returned to the map.
Reports can be metric queries or standard queries. The reports define what a user
sees in the bubble markers for the locations or features. These queries also populate
the table of data in the viewer. There is a one-to-one correlation between the bubble
markers on the map and the data in the table. The table contains the same fields that
the bubble marker contains, because the source data for both is the same report.
With map widgets, the user can pan, zoom, or find locations. The user can also
create and edit features on the map and assign those features to an IBM TRIRIGA
object.
The pinpoints on the map show the locations that are returned from the query. Each
location has a bubble marker (hover text) that displays information related to the
location. If the query is a metric query, the bubble markers display the metric
results, and the colors of the location pinpoint icons match the thresholds defined
in the metric query.
The query that is associated to the map in a portal section does not affect or react to
any other data on the portal. GIS is a stand-alone application inside of a portal
section.
A GIS section includes a Save Preferences button. This feature saves the current map
extent and view. The next time the user signs in, the saved settings override the
default in the URL parameter of the portal section URL. If the user did not save
preferences, the section shows the default view as defined in the URL parameter
string. The user preferences are stored per user, per map. When the user changes to a
different map, the preferences for the original map are not applied to the second
map. Instead the second map is displayed with the default settings for that map,
unless the user previously saved preferences for the second map.
Step 1 Create a new GIS Map object. Define the initial extents, basemaps, layers, and
queries to be applied to the map when it renders in the portal section. For this
example, the map is named My First Map.
Step 2 Create a new portal section of type External. Enter a URL as shown in the following
example: /html/en/default/rest/EsriJS?map=My First Map. The map
parameter value of My First Map tells the viewer to render that record.
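The URL pattern above can be assembled programmatically, for example when generating portal sections for several maps. This is a minimal sketch; the path and the map parameter come from this guide, while URL-encoding the map name is an assumption for names that contain spaces.

```javascript
// Sketch: build the EsriJS viewer URL for a named GIS Map record.
// Encoding the map name with encodeURIComponent is an assumption for
// names that contain spaces; the path itself is as documented.
function buildGisMapUrl(mapName) {
  return "/html/en/default/rest/EsriJS?map=" + encodeURIComponent(mapName);
}

console.log(buildGisMapUrl("My First Map"));
// → /html/en/default/rest/EsriJS?map=My%20First%20Map
```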
GIS Tab
When a user has appropriate licenses, as-delivered Location, Geography, and RE
Transaction Plan records include a GIS tab. This tab contains a map that pinpoints the
location of the record, and the map extent is the localized area. A default query
renders the map. The query is specified in the URL as a parameter string that
determines which queries to execute. The URL for the GIS tab is defined in the Form
Builder.
The following fields on each of the as-delivered Location forms and business objects
support this custom task:
Field Description
Step 1 Go to Tools > System Setup > GIS Map > Add.
The name you enter for the map must be different from the name of any other GIS
Map record in the system.
The Initial Extents section defines the initial extents and the default value for the
spatial reference, the well-known ID (WKID). These values establish the boundaries of
the map when it first opens in a portal section or tab. You must validate that the
WKID is appropriate for the basemaps and layers that you specify.
To save the existing extents and WKID set on the current map, go to the Preview tab.
Select the Show Details tab above the map and select the Set As Default hyperlink.
Selecting Set As Default stores the existing extents and WKID set on the current map
to the Initial Extents section of the General tab.
If your basemaps are not WKID 4326 or WKID 102100, you must specify a geometry
service to handle your point projection conversions. You specify a geometry service in
Any query that is supported by the IBM TRIRIGA Report Manager can be included in the
Query List section. The query with the lowest value in the Display Order field is
displayed by default. A user can select another query by clicking the Show Details tab
above the map, selecting the query from a drop-down list, and running that query.
Metric queries behave differently in GIS than they do in other areas of IBM TRIRIGA.
Metric queries for other areas of IBM TRIRIGA have a Show By/Group By drop-down
box and filter drop-down menus. In GIS, metric query data is filtered and grouped by
the geographical data that is displayed in the current map view. For the results of a
metric query to display correctly, the structure of the metric query must be tabular
and the query can only include one aggregation setting.
For pinpoints to show up on the map to represent the query results, the query must
include display columns that are labeled Longitude and Latitude. The points must be
stored in IBM TRIRIGA as Geographic Latitude and Longitude representing spatial
reference WKID 4326.
To show an image for your results, include an image field from your object. The field
must be labeled Image. The image displays in the table results for that column and at
the top of the Info Window.
When a query runs, if the query filter includes the triGisLatitudeNU and
triGisLongitudeNU fields to represent the Latitude and Longitude fields used to
pinpoint the item on the map, the Esri JavaScript viewer automatically adds filters to
the query to only return results within the given extents of the map. The map can
return a maximum of 1000 records. Note that these exact field names are the only
way that the filters can be added. If you use fields other than triGisLatitudeNU and
triGisLongitudeNU, the extent filters cannot be added.
The Basemaps section identifies the maps that are available for display. The basemap
with the lowest value in the Display Order field is displayed by default. The user can
select another basemap by clicking the Show Details tab above the map and selecting
from the Switch Basemap drop-down list. Additional information about the map
provided by the map vendor displays when the user selects the Show Details tab.
The Basemap record requires you to specify the REST URL of the Esri server that is
providing the map service. If the URL is correct and valid, the basemap service
description that is provided by the Esri server renders on the bottom of the GIS Base
Map form. If, instead of the basemap service description, nothing displays or you see
an error, confirm that your Esri server has the Basemap service enabled with REST
endpoints.
You can change the icon that is displayed for the basemap in the Switch Basemap
list in the Show Details tab of the map. When the Thumbnail URL field points to a
The Layers section identifies the layers that are available for display on top of the
basemap. When the Default check box is selected, the layer displays when the map
renders. A user adds or removes a layer by clicking the Show Details tab above the
map and selecting or clearing the check box next to the name of the layer.
When the user selects a layer with associated legend information, a column on the
right of the map lists the legend that corresponds to the layer. When multiple layers
are displayed, the legends are listed in display order. As the user modifies the extents
on the map by zooming, the data in the legends updates to reflect the correct level of
detail. If the map has a layer that is set as a default, the legend does not display until
the user clicks either the Show Details tab or the Show Table tab. This conserves
space on the map when it is viewed in smaller spaces, such as a portal. The column
that displays the legends disappears after the last layer is removed.
The Layers record requires you to specify the REST URL of the Esri server that is
providing the map service. If the URL is correct and valid, the layer service
description that is provided by the Esri server renders on the bottom of the GIS
Layers form. If, instead of the layer service description, nothing displays or you see
an error, confirm that your Esri server has the Layers service enabled with REST
endpoints.
Icons. The color of a pin on the map can represent the data value. When the query
used to determine the points to display on the map is a metric query, the colors of
the pins represent where the value of each result falls within the thresholds defined
in the metric query. For example, assume the threshold defined in the metric query is
that a value of 1 through 3 is low and should have the color red representing a
negative result. If the result is 2, the pin is displayed with the icon file contained in
the Red Icon field. The Blue Icon field is used for a value that is returned by a
standard query. You can change the icons that are displayed by uploading your files
into the Icons section. An icon file can be in any format used to render on the web,
for example, a JPG file or a PNG file. If you do not change the icon files, the as-
delivered icons are used.
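The threshold-to-icon behavior can be sketched as follows. The 1-through-3 "low is red" range mirrors the example in the text; the threshold data structure is an assumption for illustration, and values from a standard (non-metric) query fall through to the blue icon as described.

```javascript
// Illustrative only: choose a pin icon from a metric value. The
// threshold structure is an assumption; the red/blue behavior follows
// the example in the text (standard-query results use the blue icon).
function pickIcon(value, thresholds) {
  if (value === undefined) return "blue"; // standard query result
  for (var i = 0; i < thresholds.length; i++) {
    var t = thresholds[i];
    if (value >= t.min && value <= t.max) return t.icon;
  }
  return "blue";
}

// Example: a value of 2 falls in the low (1-3) range, so the red icon is used.
pickIcon(2, [{ min: 1, max: 3, icon: "red" }]); // → "red"
```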
Widgets. You use the Widgets section to add a geoprocessor, to provide an overview
map, or to add custom widgets. To define a widget, upload the JavaScript for the
widget in the Code field.
The as-delivered Widgets section contains example widgets from Esri. These widgets
are not tied to IBM TRIRIGA and are only included for purposes of illustration. Most
widgets become available to the user by clicking the Show Details tab above the map.
With the as-delivered geocoding widget, the user can direct the map to
latitude/longitude coordinates or to an address.
The as-delivered drive time widget displays radii for 1, 2, and 3 minute drive
times from the point selected.
A user opens the as-delivered overview map by clicking the arrow that is in the
upper right corner of the map display. The sample overview map shows a
condensed version of the map and contains a pane that can be moved. The user
drags that pane to navigate much larger regions of the map without having to
zoom out.
With the as-delivered editor widget, a user can draw a line, polygon, or point on
an Esri map. The entity is saved on the Esri map if the user associated the
feature to one of the records in the query or selected the Show All Features
check box. The as-delivered editor widget is named sampleEditor.js. You can
use that file as an example of how to create custom editor widgets.
In the as-delivered sample editor widget, a feature can be associated with a
TRIRIGA ID by entering a value in one of the number fields or text fields in the
information panel. In the sample, the Issue Id field is used. You can use this same
method in another editor widget to associate a feature to a value.
A user can draw a feature and associate it to a TRIRIGA record by clicking the
Show Table tab and selecting the Associate to selected feature icon for TRIRIGA
record in the table. The next time that the record is queried, the feature
displays on the map. This method uses the TRIRIGA ID as the common ID between
TRIRIGA and Esri.
When enabled, the as-delivered editor widget interferes with the bubble marker
information provided by the pins. At the time of this release, there is no known
workaround.
With the as-delivered proximity widget, when a user clicks a point on the map,
the server processes a distance radius from that point. The data displayed on the
map changes to only show buildings that fall within that proximity boundary. The
proximity boundary is determined by the server.
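The as-delivered sampleEditor.js is the recommended model for custom widgets. As a purely illustrative skeleton, a widget is JavaScript uploaded into the Code field; the host interface assumed below (an init call receiving the map, and an onExtentChange hook) is hypothetical, not a documented IBM TRIRIGA or Esri interface.

```javascript
// Hypothetical widget skeleton. The init/onExtentChange hooks assumed
// here are illustrative; real widgets follow the Esri JavaScript API
// patterns shown in the as-delivered sampleEditor.js.
var extentLoggerWidget = {
  name: "extentLogger",
  init: function (map) {
    this.map = map;
    this.log = [];
  },
  // Record each navigation so the widget state can be inspected.
  onExtentChange: function (extent) {
    var msg = "extent " + extent.xmin + "," + extent.ymin +
              " to " + extent.xmax + "," + extent.ymax;
    this.log.push(msg);
    return msg;
  }
};
```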
Widget Group Access. Identify the security groups that are authorized to access
widgets in addition to members of the Admin Group. Users in one of the listed
security groups or in the Admin Group can see a widget when the Add Security check
box in the Widgets section is selected.
Step 7 Click the Preview tab to see the map that is configured.
A user can save the basemap and extents of their current view by clicking the Show
Details tab and selecting Save Preferences.
A user can see the data that is represented on the map in a table by clicking the Show
Table tab. When the user selects a row in the table, the map zooms to the point on
the map corresponding to that record and centers there.
When viewing the table results, you can click the Export link in the upper-right corner
of the table to download the results as a tab delimited text file.
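The exported file is plain tab-delimited text, so it can be consumed by a script. This is a minimal sketch that assumes the first line of the download is a header row, which matches the table layout described above.

```javascript
// Minimal sketch: parse the tab-delimited text from the Export link
// into row objects. Assumes the first line is a header row.
function parseTabDelimited(text) {
  var lines = text.trim().split("\n");
  var headers = lines[0].split("\t");
  return lines.slice(1).map(function (line) {
    var cells = line.split("\t");
    var row = {};
    headers.forEach(function (h, i) { row[h] = cells[i]; });
    return row;
  });
}
```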
The value in the Constructed URL field on the Preview tab and the System tab is the
URL for the map that is displayed in the Preview tab. You can copy this value to paste
into a Location record to tie it to the map.
The first time that any user loads any map, the software looks for these two files and
connects to the service defined by the file that is in the EsriJS ClassLoader record. If
neither file is present, a warning message is displayed. If both files are present, the
online server is used.
To navigate to the EsriJS ClassLoader record, go to Tools > System Setup > System >
Class Loader.
Step 1 Go to http://js.arcgis.com/3.3/ and save a copy of the JavaScript file at that
address; it defines the service to which you subscribed. Rename the downloaded file
EsriJS_arcgis.js.
Step 2 Open the EsriJS ClassLoader form and add a resource file named
EsriJS_arcgis.js. Upload your EsriJS_arcgis.js file. Save the record.
Step 3 In the Navigation Builder, search for and add the GIS Map Manager Query to your
navigation. You use this query to access the configuration manager for GIS.
Step 4 Refresh your screen and navigate to the GIS Map tab that you just added. Open the
Default Map record. Under Basemaps, click Find and bring in one of the default maps
provided by Esri.
Step 5 Save the record. Click the Preview tab. The Preview tab shows the basemap you
added. Confirm that the map is displayed.
Step 6 To display the map in tab sections and portal sections, copy the Constructed URL field
and paste it into the Custom and External URL fields in records where the map is to
display. The value for the Default Map is
/html/en/default/rest/EsriJS?map=Default Map.
Step 1 Go to
http://www.esri.com/apps/products/download/index.cfm?fuseaction=download.all
and download the ArcGIS API for JavaScript v3.3 API (arcgis_js_v33_api.zip). Extract
the files from that compressed file.
Step 5 Open the EsriJS ClassLoader form and add a resource file named
EsriJS_API_3.3.zip. Upload your EsriJS_API_3.3.zip. Remove the
EsriJS_arcgis.js file from the resource files. Save the record.
Step 6 In the Navigation Builder, search for and add the GIS Map Manager Query to your
navigation. You use this query to access the configuration manager for GIS.
Step 7 Refresh your screen and navigate to the GIS Map tab that you just added. Open the
Default Map record. Under Basemaps, click Find and bring in one of the default maps
provided by Esri.
Step 8 Save the record. Click the Preview tab. The Preview tab shows the basemap you
added. Confirm that the map is displayed.
Step 9 To display the map in tab sections and portal sections, copy the Constructed URL field
and paste it into the Custom and External URL fields in records where the map is to
display. The value for the Default Map is
/html/en/default/rest/EsriJS?map=Default Map.
Step 2 Find the new API and add it to the Resource Files section.
Step 3 Remove the old API from the Resource Files section.
The rate at which the map refreshes when the user zooms in and out depends on the speed of the
user’s Internet connection. Each time a user moves the map, a call is made to redraw it; the
user’s Esri JavaScript viewer handles these actions. If you use an online service such as Esri, the
commands communicate over HTTP. If you use an in-house Esri server, your intranet
determines the latency.
The labels within the bubble markers and in the table are defined by IBM TRIRIGA queries.
To put a marker on the map, the data must contain fields that are labeled Latitude and Longitude.
And the query that is used to display the data must pull the Latitude and Longitude fields from the
data. When both conditions are met, the point that represents the Latitude and Longitude is marked
on the map.
To display a location’s image in the bubble markers, the location data must contain a field that is
labeled Image. And the query that is used to display the data must pull the Image field from the
data. When both conditions are met, the Image displays on the map.
If locations do not appear on your GIS map, the cause might be one of the following:
The locations are not geocoded. In order for a location to display on the map, the record
must be geocoded.
Your query does not return results.
Your ArcGIS server is down or not responding.
Use these standards for IBM TRIRIGA data to work properly with Esri:
Duplicate records in the table are caused by a data issue with your hierarchy structures. GIS uses a
flattened hierarchy table. You can rebuild the flattened hierarchy structure.
2. Select Utilities > Hierarchy Structures. The Hierarchy Structure Manager displays all
hierarchies that are defined in the flattened hierarchy table.
3. For each of the following three hierarchies, click the hyperlinked name, then click the
Generate Data link: All Geographies, Building Spaces, and Buildings and Land.
If your GIS section contains a blank white screen instead of a map, check the following conditions:
If a message indicates that you do not have a GIS license, make sure your IBM TRIRIGA license file or
files are up to date.
You define the URLs and ports that are used for creating the GIS map. You define basemaps, layers,
and widget services such as the geometry service. The only exception is the Esri JavaScript API
sourced from the Esri CDN. For more information about the services that are used with that API, see
http://esri.com. The offline API is self-contained and can be used behind your firewall.
The fields, sections, and tabs in the Integration Object form hide and show
dynamically as you enter data into the form. You are presented with only the fields,
sections, and tabs that are pertinent to the integration you are defining. The general
process flow is to create the Integration Object record and then run that record. After
the integration process completes, the record includes a section that contains the
history of what happened when you ran the integration. Also, the record includes a
tab that contains the records that were affected by the response. If there are any
failures, you can find a detailed description of the failure and a method you can use
to manually edit the failed step and resubmit it.
Step 1 Go to Tools > System Setup > Integration > Integration Object. Click the Add action.
An initial Integration Object form has two sections in the General tab, the General
section and the Execute History section. The fields in the General section identify
how the integration you are creating functions. The following table describes these
fields.
Name – The published name. The value of this field must be unique.
Scheme – The payload, protocol, and transport for the data. The four scheme types
are: Database, File, File to DC, and Http Post.
Direction – The direction the integration travels from the viewpoint of IBM
TRIRIGA. The two options are Inbound and Outbound. Each option dictates the
elements that are displayed in the form and that can be configured for the
integration.
Debug – If you select the Debug check box, debugging is enabled for the
integration. Using the Debug check box avoids having to restart the server or to
enable debugging with the Administrator Console. When active, debugging logs
verbose data to the server.log file.
After you create the record, use the Execute action to start the integration. This
action runs the triIntegration - Execute workflow, which is described in more
detail later in this guide. The Integration Object record becomes read only and the status changes
to Processing. When the integration processing is complete, if there were no errors,
the Integration Object record transitions back to the Ready state. If an error occurred
during processing, the status displays as Failed, and the object is not transitioned.
You must then manually inspect the errors that occurred.
You can override the inspection and transition the record back to the Ready state by
clicking the Complete action. The Delete action is available, but hidden.
You also can take advantage of the Copy action to duplicate the Integration Object
instance.
When you use object migration to move Integration Object instances from one
environment to another, before you use an Integration Object in the new environment
you must open any Integration Object and run the ReMap action. The ReMap action
updates all of the IDs that have been saved in your data maps and response maps.
These maps contain IDs for modules, business objects, forms, and smart record data.
It is important that you run ReMap to be sure that the IDs are updated for the new
environment.
The following lifecycle diagram shows the state transitions for an Integration Object
record.
The records in the Execute History section contain the details about each integration.
Each detail record contains the status, processing counts, and duration of the run.
When an integration has errors, it also includes a log of the errors and a query section
that lists the individual records that failed.
When the Integration Object is triggered from a workflow, the Integration Object
record that contains the full overview of errors also contains a truncated message in a
hidden text field that can be displayed to the user, for example in an Attention
message.
Note – The Integration Object for outbound objects can process a maximum of 1000
records at one time. If there are more than 1000 records to be exported, create your
workflow logic to run the Integration Object until all records have been processed.
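The batching idea in the note can be sketched as follows. The runBatch callback stands in for triggering the Integration Object from your workflow logic; it is a placeholder, not a real IBM TRIRIGA API.

```javascript
// Sketch of the note's batching requirement: process records in slices
// of at most 1000, invoking the integration once per slice. runBatch is
// a placeholder for the workflow step that runs the Integration Object.
function exportInBatches(records, runBatch, batchSize) {
  batchSize = batchSize || 1000;
  var processed = 0;
  while (processed < records.length) {
    var batch = records.slice(processed, processed + batchSize);
    runBatch(batch);
    processed += batch.length;
  }
  return processed; // total records handed to the integration
}
```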
The Map section can have two native IBM TRIRIGA fields: Integration Map and Last
Updated. The rest of the page uses the servlet proxy technology to display data
during communication with the data source through the IBM TRIRIGA Connector for
Business Applications.
The Integration Map field is a binary field that stores the mapping that you define for
how data is to be imported. The file is stored and parsed as a JavaScript Object
Notation (JSON) object that contains the information that is collected from the Data
Map tab. This data is stored as application instance data.
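To make the storage model concrete, the following is a hypothetical example of the kind of JSON text the Integration Map field could hold. The real structure is internal to IBM TRIRIGA; the property names below (form, defaultAction, columns, isKey, defaultValue) are assumptions for illustration only.

```javascript
// Hypothetical shape of the stored mapping; the actual JSON structure
// is internal to IBM TRIRIGA and the property names are assumptions.
var storedMap = JSON.stringify({
  form: "triPeople",
  defaultAction: "triCreateDraft",
  columns: [
    { field: "triNameTX", external: "Name", isKey: true },
    { field: "triUserLanguageLI", external: "User_Language", defaultValue: "US English" }
  ]
});

// The platform stores the map as text and parses it back into a JSON object.
var map = JSON.parse(storedMap);
```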
When you select the form, a tree representation of the form is displayed.
The Default Action field is used if the record to be imported cannot be found and
must be created. If the record exists, the default action is ignored and the record is
updated. The actions that are in the Default Action list are the transitions from the
null state that are available for the record.
The form is displayed in a tree format with icons representing the type of object of
the section or field in the tree. The hierarchy of the form tree is Form -> Tab ->
Section -> Field.
Form
Tab
Smart Section – Triggers a popup to define how to identify the data element. For
more information, see Using Smart Section Fields later in this guide.
Locator Field – Acts as a standard field, but the integration fails if the data cannot
be located or is not unique.
Date Field – Not displayed in the tree. You use the table to the right of the form
tree to define the date format for the incoming data, so that it can be converted to
IBM TRIRIGA date format correctly.
Read Only field (strikethrough) – A field with a strikethrough cannot be mapped,
but it can be used as a key field to help identify the record.
Required field (red) – Required to create the record. The integration record fails if
it is not mapped.
In the form tree, the labels display with names in parentheses. When you click a form
name, tab, or section, the element is expanded. If it is a field, it is added to the table
of columns on the right of the page. If it is a field from a smart section, you use the
popup window to further identify the element.
Field Description
Base Parent The Base Parent field is used for hierarchy objects, such as Location,
Organization, and Geography, to identify the root of the hierarchy.
To ensure that the record is created under the correct root of the
hierarchy, you must specify the path in the Base Parent field. If you
do not specify the path in the Base Parent field, the record is
created at the same level as the root and you cannot see or get to
the record from the form.
Type The metadata definition of the system field type. All fields are
treated as strings.
Example: List
External The name of the external field that you want to map to IBM TRIRIGA.
Because these values can be used for database columns or for
formatted files, do not use spaces, special characters, or numbers.
Example: User_Language
Note – When this column is used in the Response Map tab,
the External field represents either nothing, or an xpath
string, or a jsonPath string.
isKey To update records in IBM TRIRIGA, you need the record ID to identify
the record. If you select isKey, the value for that row of data is used
as a filter for the queried object so that exactly one record ID is
returned. If no record IDs or multiple record IDs are returned, the
integration record fails.
Default You can use the Default field to specify constant values to be applied
for all instances of the record. If the Default field is populated, the
External value is ignored and the integration assumes that the
column exists only in this mapping.
Example: US English
If you specified the Database scheme, you must run the Generate
SQL for Table section action. Default values are set in the staging
table DDL and are ignored at run time. Default values are used at
run time only for the File or Http Post schemes.
Note – After your mappings are complete, you must click the Save Map action. If you click
either the Save form action or the Save & Close form action, your mappings are lost.
When you save the map, the data you specified is collected and stored in the Integration
Map field in the Map section. The format of the field is a JSON object in a text file.
Using Smart Section Fields
A smart section is a section that is used to link to another business object. The data
that is displayed on the primary record reflects data in a different object. A smart
section field is a field that is linked to a smart section. For more information about
smart sections, see the Application Building for the IBM TRIRIGA Application Platform
3 book.
Each smart section field represents a link to another business object. When you select
a field, you use the popup window to select the form to use as a filter. Next choose
the field to be used as a filter to retrieve the record ID.
In the smart section popup window, if you select the Should this record fail if the
association cannot be made check box and the lookup for the record ID cannot find
exactly one record, the submission of the record fails. If this check box is not selected
and the record ID cannot be found, a warning is written to the log, but the record is
created or updated.
Selecting a Query
Outbound integrations can be defined using either a query from the IBM TRIRIGA
Report Manager or a dynamic query.
Step 2 Either select a business object and a query name, or select Module Query and enter
the query name.
Step 3 Select the action name for the action to be performed on the record after it has been
exported. For example, you can call a workflow to flag the record to not be exported
again. If you leave this field blank, no action is triggered.
Step 4 Optional: Select to include the record ID. You can use the query label as an element.
For XML-defined exports, there is a binary field for use with an XSLT to do
transformations. The record ID adds the field TRIRIGA_RECORD_ID for file outputs
and adds recordId for XML and JSON exports.
Dynamic Query
If you use a dynamic query, the Integration Object form changes. The query name
elements in the Query For Outbound section disappear and the Data Map tab is added
to the form. You use the Data Map tab to create a map to dynamically call your
query. In this use of the Data Map tab, instead of defining how inbound data is to be
mapped, you are selecting the fields to be exported and the labels represent the
external data source to which the fields are mapped.
You can use a dynamic query to specify default data for fields, for example, when you
use the Http Post scheme and need to pass a static parameter or value. For default data
to work correctly with a dynamic query, you first define your data map with the
default data set and then you use the Generate SQL for Table section action. This
sets the column to use the default data for new inserts.
You must use a dynamic query if you are using the Database scheme and need to
export blobs. You can select binary, note, or Document Manager content fields for
export.
Schemes
A scheme defines the payload, protocol, and transport for the data. The following
table summarizes each scheme and includes a hyperlink to more information.
Scheme    Description                                              Inbound  Outbound
Database  Uses a database table for importing or exporting data    Yes      Yes
Database Scheme
The Database scheme uses a database table to import or export data. When you
select the Database scheme, the Integration Object form expands to open the
Database section, in which you add or find data sources that can connect to your
database.
When you select the Database scheme to export data, you use the Query for
Outbound section to define the data to be exported.
Field Description
Datasource Name The Datasource Name is a locator field that represents connections
to various databases. You use this field to predefine data source
access with a username, password, and connection string that can be
reused for multiple Integration Objects. You can connect to as many
databases as needed.
Click the search icon to see the list of existing data sources. From
this list, you can add new data sources as needed. The name of the
data source is used as a filter to find the correct data source when
the Integration Object runs.
Table Name The name of the table from which the integration pulls data
(inbound direction) or to which the integration exports data
(outbound direction).
Action Description
Test DB Connection You select the Test DB Connection action to verify that the server
can communicate with the database. The table name is used to run a
select 1+1 from [table name] query to the database. The
color of the Database section header changes to red if an error
occurred. To view the errors, review the server logs in the IBM
TRIRIGA administrator console. For information about the
administrator console, see the IBM TRIRIGA Application Platform 3
Administrator Console User Guide book.
Generate SQL for Table You select the Generate SQL for Table action to create generic SQL
to define your database table. Enter the name of the new database
table and complete the mapping on the Data Map tab before clicking
this action.
Each table used for inbound transactions must have the following
columns:
Generate Test Data This action is only available in the inbound direction. You select the
Generate Test Data action to load your database with test data.
After you create the database table and can connect to it, you can
load your database with test data. You use the test data to run
simple functional testing to verify that the mappings are correct and
that the integration process works correctly. Enter the number of
rows to be inserted into the database table in the Test Rows field.
The generated data consists of alphanumeric values. If you specify
a default value on the Data Map tab, that value is inserted for all
rows. This is useful when a field is a locator field, a number
field, a date field, or a list that requires a specific value.
Blobs are supported for both inbound and outbound transactions in the Database
scheme. The types of blobs supported are binary fields, note fields, and Document
Manager content. Image fields are not supported. You can only export blob fields if
you use the Is Dynamic option in the Query For Outbound section. When importing or
File Scheme
The File scheme imports from standardized delimited flat files or exports to formats
that include standardized delimited flat files and JSON, XML, or XSLT files. You can
read from or write to anywhere on the network that the server can access.
When you select the File scheme, you use the File section to define the file. This
section has different elements depending on whether the direction is inbound or
outbound. This table describes the elements used for both.
Element Description
Delimiter The delimiter used for the file. The three options are Tab \t, Pipe
|, and Double Colon ::.
Test File Access You select the Test File Access action to determine if the
integration object can connect to the file. After the connection is
established, a workflow runs that attempts to read the file. The
workflow also attempts to create a file in the directory specified,
write to it, and then delete it. If everything is successful, then the
color of the section bar changes to green and the text on the section
bar changes to File Access Successful.
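A minimal sketch of reading a delimited import file of the kind the File scheme consumes, assuming the first row carries the External column names (the function name and file layout are illustrative):

```python
# Read a delimited flat file into a list of dicts keyed by the header row.
# The "Double Colon" (::) delimiter needs plain string splitting, because
# csv readers accept only single-character delimiters.
def read_delimited(path, delimiter="::"):
    with open(path, encoding="utf-8") as f:
        lines = [line.rstrip("\n") for line in f if line.strip()]
    header = lines[0].split(delimiter)
    return [dict(zip(header, row.split(delimiter))) for row in lines[1:]]
```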
When you select the File scheme to import data, you can manually provide a binary
field that is a copy of the integration field to be imported. You can use a manual field
to assist in testing, so that localized processing can be repeated without overwriting
the import file.
After a file has been imported, it is renamed with the processing date and moved to a
folder named processed.
If there are multiple workflow agents running in your IBM TRIRIGA environment, you
must either verify that all servers have access to the file location or ensure that
there is a copy of the file on all servers. This is because the workflow that runs during
the Execute process for the integration needs access to the file and path specified.
When you select the File scheme to export data, additional fields in the File section
provide information to the integration.
Field Description
Overwrite If you select the Overwrite check box, the file to be exported is
named the File Name and overwrites any existing file with the same
name. If this check box is cleared, the current timestamp is
appended to the name of the file.
Export Type You choose the format of the outbound file. You can select the Flat
option for standard delimited files, JSON, or XML. JSON and XML
adhere to the guidelines in the Outbound Formats section of this
document.
File Header This field is only available when the value of Export Type is Flat. If
you enter a value in the File Header field, the text entered into this
field is written to the export file before any other data is written to
the file. For example, if you need to add copyright or processing
instructions, you could place it here and the data in this field will be
printed before the header columns are printed.
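The layout the File Header field produces for a Flat export can be sketched as follows; the function and its parameters are illustrative, but the ordering (free text first, then the header columns, then data) matches the description above:

```python
# Sketch of a Flat export: optional File Header text is written before
# anything else, followed by the column header row and the data rows.
def write_flat_export(path, file_header, columns, rows, delimiter="|"):
    with open(path, "w", encoding="utf-8") as f:
        if file_header:
            f.write(file_header + "\n")   # e.g. copyright or processing notes
        f.write(delimiter.join(columns) + "\n")
        for row in rows:
            f.write(delimiter.join(str(row[c]) for c in columns) + "\n")
```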
File to DC Scheme
The File to DC scheme imports delimited files into IBM TRIRIGA DataConnect staging
tables. Use the File to DC scheme if you do not have access to, or the training
required to use, an ETL tool. With this scheme you can use workflows to process and
validate data, giving you additional control over error handling.
For information about DataConnect, see the DataConnect chapter in the Application
Building for the IBM TRIRIGA Application Platform: Data Management book.
When you select the File to DC scheme, the Integration Object form expands to open
a Database section, a File section, and a DataConnect section, as well as adding a
Data Map tab.
After you define your mapping, save the map, and run the integration object, the
entries are inserted into the DataConnect staging tables with an upsert action. You
must select one or more fields in the Data Modeler as a key. DataConnect uses the
keys to determine if it needs to do an insert or an update to the row.
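The key-based upsert decision DataConnect makes per staging row can be sketched like this, with a list of dicts standing in for the S_ staging table (the function is illustrative, not the product's implementation):

```python
# Upsert sketch: if a row with the same key values already exists,
# update it in place; otherwise insert a new row.
def upsert(staging, row, keys):
    match = tuple(row[k] for k in keys)
    for existing in staging:
        if tuple(existing[k] for k in keys) == match:
            existing.update(row)      # update path
            return "update"
    staging.append(dict(row))         # insert path
    return "insert"
```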
Field Description
Datasource Name The Datasource Name is a locator field that represents connections
to various databases. You use this field to predefine data source
access with a username, password, and connection string. You can
connect to as many databases as needed.
Click the search icon to see the list of existing data sources. From
this list, you can add new data sources as needed. The name of the
data source is used as a filter to find the correct data source when
the Integration Object runs.
You must select a data source object that gives you access to the IBM
TRIRIGA S_ staging tables.
Table Name The name of the table from which the integration pulls data. For
testing, you must use the value DC_JOB.
The File to DC scheme is completely separate from the TRIRIGA internal APIs and uses
the Connector for Business Applications to communicate with TRIRIGA. However, that
API does not allow inspection or manipulation of the DataConnect-related
information, so the File to DC scheme includes code that goes directly against the
database to inspect, read, and write the DataConnect tables.
The following summarizes the access that is required to the tables needed for the File
to DC scheme. The tables other than DC_JOB and the S_ tables are used to determine
the columns available for the DataConnect objects and must have Read capabilities
for the user selected in the data source.
Table                      Read  Write
IBS_SPEC_TYPE_STAGE        X
IBS_SPEC_TYPE              X
SYS.COLUMNS (MSSQL)        X
SYS.TABLES (MSSQL)         X
IBS_MODULE                 X
ALL_TAB_COLUMNS (ORACLE)   X
DC_JOB                     X     X
S_ tables                  X     X
File Section
You use the File section to define the path on the computer or network to the file you
want to import.
Element Description
Delimiter The delimiter used for the file. The three options are Tab \t, Pipe
|, and Double Colon ::.
Test File Access You select the Test File Access action to determine if the
integration object can connect to the file. After the connection is
established, a workflow runs that attempts to read the file. The
workflow also attempts to create a file in the directory specified,
write to it, and then delete it. If everything is successful, then the
color of the section bar changes to green and the text on the section
bar changes to File Access Successful.
You can manually provide a binary field that is a copy of the integration field to be
imported. You can use a manual field to assist in testing, so that localized processing
can be repeated without overwriting the import file.
After a file has been imported, it is renamed with the processing date and moved to a
folder named processed.
If there are multiple workflow agents running, you must either verify that all servers
have access to the file location or ensure that a copy of the file is on all servers.
This is because the workflow that runs during the Execute process for the integration
needs access to the file and path specified.
DataConnect Section
Field Description
Business Object The business object that this integration uses to trigger the workflow
that is tied to DataConnect. The business objects are in the
triDataConnectJob module.
DataConnect Type 1 – Standard – Creates one job for the one business object selected.
When this integration object runs, the process flow is
IntegrationObject Dependent This field is used only when the value of DataConnect Type is 2 –
Multi. It identifies the locator field to the other Integration
Objects.
In the File to DC scheme, you use the Data Map tab to define the mappings between
the delimited file uploaded and the columns in the staging table defined for the
object you select.
The list of modules is limited to the modules that have business objects that have
been identified as Has Staging Table in the Data Modeler. The list of business objects
is specific to those that are marked for Has Staging Table.
When a business object is selected, a query runs and displays the database columns
available in the staging table associated with the business object. The name of the
staging table is populated in the Staging Table field and is read only. You must specify
a value in the Form field for the process to work correctly.
The list on the left side shows the fields defined in the Data Modeler as staging table
fields. The elements that are grayed out with a strikethrough are used by the
automatic process and cannot be mapped. The other fields are available and when
clicked add another row to the table on the right. The External column defaults to
the column name, but you can change it to match your file header. The isKey,
isParent, and Default columns are read only and are not used.
Because you are mapping your file to the staging database columns, the first row of
the file must have columns that match the External column. The sequence of the
columns does not matter, but the column names are case sensitive. For example, if
the External columns in the map are listed as Field1, Field2, Field3, then the actual
external columns can be Field2, Field1, Field3. As long as the names are identical, the
fields are applied correctly.
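The header check described above can be sketched in a few lines: every mapped External column must appear in the file's first row, with exact (case-sensitive) names, in any order. The column names are taken from the example in the text:

```python
# Which mapped External columns are missing from the file's header row?
# Matching is by exact, case-sensitive name; order does not matter.
def missing_columns(mapped_externals, header_row, delimiter="::"):
    return set(mapped_externals) - set(header_row.split(delimiter))
```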
After your map is complete, you must click the Save Map action. If you click either
the Save form action or the Save & Close form action, your mappings are lost. When
you save the map, the data you specified is collected and stored in the Integration
Map binary field in the Map section.
Http Post Scheme
You use the Http Post section to define the server to which the data is sent.
Tokens can be added to the values of the Http URL, Http URI, and Headers fields. A
token is a value that is sourced from the query results for this post. The token name
must exactly match the column label in your query. For example, to add the value
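Token substitution can be sketched as follows. The example in the text above is truncated, so the {ColumnLabel} placeholder style shown here is an assumption; only the rule that the token name must exactly match the query column label comes from the document:

```python
# ASSUMED placeholder syntax: {ColumnLabel} markers in the Http URL, Http
# URI, or Headers value are replaced by the matching column value from the
# query results for this post. The URL below is a hypothetical example.
def apply_tokens(template, row):
    for label, value in row.items():
        template = template.replace("{" + label + "}", str(value))
    return template
```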
Field Description
Post Type The format of the post. If you select XML format, the Query for
Outbound adds an XSLT for further transformation.
Response Type The format for the response sent for each request. When selected,
the integration expects the response to match up to fields defined in
the Response Map tab.
{
  "candidates" : [
    ...
    {
      "address" : "6721 Via Austi Pky, Las Vegas, NV, 89119",
      "location" : {
        "x" : -12819812.309700001,
        "y" : 4309994.186999999
      },
      "score" : 79,
      "attributes" : { }
    }
  ]
}
Http URL The dynamic portion of the location of the server receiving the data.
For example, when there is a production server and a test server,
the value of the Http URL field changes to identify the server to be
used and the value of the Http URI field does not change.
Http URI The static portion of the location of the server receiving the data.
For example, when there is a production server and a test server,
the value of the Http URI field does not change; however, the value
of the Http URL field changes to identify which server is to be used.
Content-Type The format of the request header used to specify data in the body of
an entity.
Omit Request Entity If selected, the body of the request is excluded from the request.
Send As Batch If selected, the records identified in the query are sent in a group
instead of one at a time.
Headers You can add custom headers to be sent with the request in the
format {name}:{value}. For example, to send userID with the
value 12345 in the header, set the value of this field to
userID:12345.
You use the other fields in the Http Post section to define the security.
Field Description
UserName Parameter This option only affects the Http Post scheme when the UserName
field has a value.
If your post does not require user name and password parameters,
the UserName Parameter field must be blank.
UserName If your post does not require a user name and password, the
UserName field must be blank.
Add To Header This option only affects the Http Post scheme when the UserName
field has a value.
Use Auth Basic This option only affects the Http Post scheme when the UserName
field has a value.
If selected, the UserName field and the Password field must have
values. When selected, an Authorization Basic encoding entry is
added into the header of the request going out.
The user name and password are concatenated with a colon : and
then Base64 encoded. This is a standard security protocol.
Use MaxAuth This option affects the Http Post scheme only when the UserName
field has a value. If selected, the UserName field and the Password
field must have values. When selected, a MAXAUTH entry is added to
the header of the request that is going out. The UserName and
Password are concatenated with a colon : and then Base64 encoded.
This is the security protocol for IBM Maximo.
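Both Use Auth Basic and Use MaxAuth describe the same encoding step: the user name and password are joined with a colon and Base64 encoded, and the result is placed in the outgoing request header. A minimal sketch:

```python
import base64

# Build the header value that Use Auth Basic adds to the request:
# "username:password" joined with a colon, then Base64 encoded.
def basic_auth_header(username, password):
    token = base64.b64encode(f"{username}:{password}".encode("utf-8"))
    return "Basic " + token.decode("ascii")

# basic_auth_header("system", "admin") -> "Basic c3lzdGVtOmFkbWlu"
```

Use MaxAuth performs the same concatenate-and-encode step but labels the header entry MAXAUTH for IBM Maximo.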
If the UserName field has a value and both the Use Auth Basic check box and the Add
To Header check box are selected, the scheme uses Use Auth Basic.
If the UserName field has a value and the Add To Header, Use Auth Basic, and Use
MaxAuth check boxes are all selected, the Use MaxAuth check box takes precedence.
If the UserName field has a value and neither the Use Auth Basic check box nor the
Add To Header check box is selected, the values in the UserName Parameter field and
the Password Parameter field are added as post variables of the request going out.
The values of each are the corresponding UserName and Password fields. No encoding
is provided.
The Response Map tab maps the response parameters from the request to an existing
IBM TRIRIGA record. The Response Map tab is organized exactly like the Data Map
tab but is used differently, and response mapping is allowed only for simple field
values.
Outbound Formats
JSON You can select JSON as the Post Type for the Http Post scheme or as
the Export Type for the File scheme. When you select JSON, the
Query for Outbound section displays the Include Record ID check
box.
The format of the JSON object has two objects: data and header.
The data object contains an array of objects containing the label
name and value for the columns from the outbound query and also
includes the boId and recId if specified.
XML You can select XML as the Post Type for the Http Post scheme or as
the Export Type for the File scheme. When you select XML, the
Query for Outbound section contains the Include Record ID check
box, the XSLT binary field to contain the XSLT for any customized
export formats, and the Use Query Label As Element check box,
which determines whether the export uses the label from the query
as the XML node.
The default XML structure includes three nodes for each column:
field, label, and value. The following example shows this default XML
structure:
<query>
<continueToken/>
<results total="13">
<result recordId="11430080" associatedRecordId="null"
boId="106402">
<columns>
<column>
<field>triIdTX</field>
<label>HR_ID</label>
<value>1000000</value>
</column>
...
</columns>
</result>
...
</results>
</query>
If the Use Query Label As Element check box is selected, the default
XML structure changes: the label and value nodes are merged, with the
label used as the node name. For dates, the XML query results include
both the raw value (the value stored in the database) and the display
value (the formatted value displayed to the user), as the following
examples show.
The first example shows the XML structure when the Use Query
Label As Element check box is not selected. displayValue is added
to the XML results.
<column>
<field>Date</field>
<label><![CDATA[Date]]></label>
<value><![CDATA[1359964800000]]></value>
<displayValue><![CDATA[02/04/2014]]></displayValue>
</column>
<column>
<field>DateTime</field>
<label><![CDATA[Date_Time]]></label>
<value><![CDATA[1360009800000]]></value>
<displayValue><![CDATA[02/04/2014
12:30:00]]></displayValue>
</column>
The following example shows the XML structure when the Use Query
Label As Element check box is selected. [name]_display is added
to the XML results.
<column>
<field>Date</field>
<Date><![CDATA[1359964800000]]></Date>
<Date_display><![CDATA[02/04/2014]]></Date_display>
</column>
<column>
<field>DateTime</field>
<Date_Time><![CDATA[1360009800000]]></Date_Time>
<Date_Time_display><![CDATA[02/04/2014
12:30:00]]></Date_Time_display>
</column>
Either – In the XSD Location field, type the URL to a publicly hosted XSD file.
If schema errors are found during the validation, the errors are
logged in the Integration instance record. If the XSD Location field is
empty, no schema validation occurs.
When the Validate Only check box is selected, the records are not
submitted. Instead, the information is collected, formatted, and
validated against the schema. If no errors are found, the Integration
Object record returns to the Ready state and the Execute History
section shows the process and that the record count is zero. If errors
are found, the Integration Object record is in the Failed state and
the Execute History section shows the process and a count of the
errors found. The error messages are captured in the instance
record.
The Http Post scheme can call out to a URL and retrieve (and
process) multiple records in the response. This feature is only
available when the value of Response Type is XML. No query is
needed; however, if you do submit a query, it should be limited to
one record only as multiple records will make multiple calls. If you
do not specify a query, you must set the Request Type to empty.
In the Response Map external field column, you must represent the
section of the XML that is repeatable with the token [i]. If you set a
key, it will update the record based on that. In the following
example, to map to the Name node in the XML, you would set
//result[i]/columns/column/Name, and to get the recordId
attribute you would set //result[i]/@recordId.
<?xml version="1.0" encoding="UTF-8"?>
<query>
<continueToken/>
<results total="6">
<result recordId="11464082" associatedRecordId="null"
boId="10025526">
<columns>
<column>
<field>triNameTX</field>
<Name><![CDATA[Default Map]]></Name>
<Name_display><![CDATA[Default Map]]></Name_display>
</column>
<column>
<field>triIdTX</field>
<ID><![CDATA[001]]></ID>
<ID_display><![CDATA[001]]></ID_display>
</column>
</columns>
</result>
<result recordId="11464531" associatedRecordId="null"
boId="10025526">
<columns>
<column>
<field>triNameTX</field>
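The repeatable-section walk that the [i] token stands for in //result[i]/columns/column/Name can be illustrated with Python's ElementTree, which stands in here for the product's XPath evaluator. The XML is a cut-down version of the sample above, with a hypothetical second name:

```python
import xml.etree.ElementTree as ET

# One lookup per repeating <result> element: the part the [i] token
# represents in the Response Map external field expressions.
doc = ET.fromstring(
    '<query><results total="2">'
    '<result recordId="11464082"><columns><column>'
    '<Name>Default Map</Name></column></columns></result>'
    '<result recordId="11464531"><columns><column>'
    '<Name>Other Map</Name></column></columns></result>'
    '</results></query>')

names = [r.findtext("columns/column/Name") for r in doc.iter("result")]
record_ids = [r.get("recordId") for r in doc.iter("result")]
```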
XSLT The XSLT field is a binary field that can be used to house the style
sheet to convert the default XML format from the outbound query to
the format that your interface requires. This field is not required and
if left blank exports the values from the query with the default XML
structures. If the XSLT field is populated, you can use either the
default or query label as element options to pass to your XSLT to
further process or transform your data. Selecting the Use Query
Label As Element check box helps to identify the columns in your
XSLT file for processing.
You can find a utility for testing XSLT in real time at the following
link:
http://www.w3schools.com/xml/tryxslt.asp?xmlfile=simple&xsltfile=simple
For example, suppose that you changed a subset of records but want to send a
record only when a user clicks an action. You can use a special Custom task object built
into the Integration ClassLoader that uses workflow variables to set data. By using this
feature, you can build out the subset of records in the workflow using the common
methods and then pass the subset by reference to the Custom task and assign the
Integration Object to trigger to a workflow variable named IntegrationObject.
The workflow below demonstrates the absolute minimum required for this functionality
to work.
The params arguments are the assigned IntegrationObject variable so that the process
has the instructions it needs to continue the integration. The records argument is the
Records section in the Custom task where you assign the records to use for the
workflow process.
Instead of triggering the event on one object, you are passing two sets of objects to
an event.
The Parameter looks for the Integration Object from the IntegrationObject variable
and processes the results in the records argument passed in by running the query
specified in the Integration Object Query for Outbound section and filtering by the
record IDs of the records passed in. By doing this, you are exporting with the common
utilities, but you are no longer bound by all or nothing queries or by triggering the
event directly from the Integration Object Execute action.
In the Query task named Query For Integration Object in the example, you query for
the Integration Object that you want to trigger and then filter the results of that
query by name. You must have only one result.
In the Variable Definition task named Define IO as Variable in the example, you define
the Integration Object as a variable. The result of the query in the previous task is
now assigned to this variable.
In the Variable Definition task named IntegrationInstance in the example, you define
the Integration Instance object as a variable. This variable is used for the return value
from the Custom task at the end of the example.
In the Query task named Query for subset of people in the example, you get the
filtered set of records to be processed for the integration. The task calls a query
named triEmployee – Find and filters for records where the value of the
triFirstNameTX field contains Rodrigo.
For any business object that is used in the Query that is passed to the Integration
Object, you must include the triRecordIdSY field in the business object definition.
At runtime, the Integration Object uses the triRecordIdSY field to retrieve the
remaining fields in the object defined in the Data Map.
In the Custom task named Custom Task in the example, you pass the results of the
Query for subset of people task as the list of records. The Class Name is
Integration:com.tririga.custom.integration.Parameter to define the
ClassLoader object named “Integration” and the path to the implementation class
named “Parameter” that understands how to receive and process this information.
Triggering an event externally from a URL that includes the credentials can only be
performed on IBM TRIRIGA Application Platform version 3.3 and later.
You can add the credentials to the Http Request Header in the form of Basic
Authorization.
You can add USERNAME and PASSWORD parameters with their values to the
header of the Http Request in plain text.
You can add the USERNAME and PASSWORD parameters with their values to the
query string of the Http Request or as Post parameters, in plain text.
To trigger an Integration Object externally, you set the credentials and then pass the
added parameter of ioName in the query string with the name of the Integration
Object. For example, to trigger a Geocode Address Integration Object, you would call
the following URL:
http://localhost:8001/html/en/default/rest/Integration?USERNAME=system&PASSWORD=admin&ioName=Geocode+Address
This returns Successful if it was able to trigger the action on that record, or it
returns the error message if there was an error.
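Composing that trigger URL can be sketched with the standard library; urlencode turns the space in "Geocode Address" into '+', which matches the example. The host and credentials are the documentation's own sample values, and note that credentials in the query string travel as plain text:

```python
from urllib.parse import urlencode

# Build the external trigger URL: credentials plus the ioName parameter
# naming the Integration Object to run. Sample values from the guide.
base = "http://localhost:8001/html/en/default/rest/Integration"
params = {"USERNAME": "system", "PASSWORD": "admin", "ioName": "Geocode Address"}
trigger_url = base + "?" + urlencode(params)
# -> ...&ioName=Geocode+Address
```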
Any query defined in the Report Manager can be executed from a web address
originating outside of IBM TRIRIGA. To specify the query, you use the following
parameters:
module  Required if no continue token is used  The name of the module of the query
you are calling.
bo  Optional  The business object for the query. If there is more than one,
omit this parameter.
Error Handling
If there are failures during an integration, the record is not saved. The integration
summary displays the errors that occurred, and each record that failed is represented
as a Failure record. A Failure record contains an instance record representation that
you can manually edit and resubmit.
The Resubmit Record field in an Instance Failure record is a note field that contains
the key value pairs that represent the business object that you were trying to create
or update. You can manually edit the data in the Resubmit Record field and then click
the ReSubmit form action to resubmit the record.
When a resubmitted record completes successfully, the name of the record changes
from Failure to Successful and the text in the Error Message field and the Resubmit
Record field is cleared. The Integration Instance record counts are updated to reflect
the correct values, that is, the number in the Records Successful field is increased by
one and the number in the Records Failed field is decreased by one. When the Record
Failed count is equal to zero, click the Complete action on the Integration Object
form.
Geocode Example
You can set up an Integration Object record that uses the Http Post scheme to export
information to an Esri server and map data from the response back to the Location
records to update the geocodes.
Note that the Http URL in this example provides Esri geocoding services through their
REST API. The Esri geocoding service expects the request to contain parameters in the
string to know what addresses to geocode. The response is in JSON format. The web
address in this example is
http://geocode.arcgis.com/arcgis/rest/services/World/GeocodeServ
er/findAddressCandidates
You use the Data Map tab to define the fields to be extracted from IBM TRIRIGA. The
values in the External column are the names used for the parameters added to the
query string. In the example, there are two additional parameters, outSR and F, that
specify the WKID and format that the Esri service expects. The other fields are
dynamically pulled from the Location records in IBM TRIRIGA.
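As an illustration of what the Data Map produces, the following sketch assembles such a query string in plain Java. The SingleLine parameter name follows the Esri REST API; the WKID value 102100 used in the usage example and the class and method names are illustrative assumptions, not part of the shipped Integration Object.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class GeocodeQuery {
    // Builds the query string for the Esri findAddressCandidates request.
    // SingleLine carries the address pulled from the Location record; outSR
    // and f are the two additional parameters described above.
    public static String build(String address, String wkid) {
        Map<String, String> params = new LinkedHashMap<String, String>();
        params.put("SingleLine", address);
        params.put("outSR", wkid);
        params.put("f", "json");
        StringBuilder query = new StringBuilder();
        for (Map.Entry<String, String> entry : params.entrySet()) {
            if (query.length() > 0) {
                query.append('&');
            }
            try {
                query.append(entry.getKey()).append('=')
                     .append(URLEncoder.encode(entry.getValue(), "UTF-8"));
            } catch (UnsupportedEncodingException e) {
                throw new IllegalStateException(e); // UTF-8 is always available
            }
        }
        return query.toString();
    }

    public static void main(String[] args) {
        System.out.println(build("6720 Via Austi Pky, Las Vegas, NV, 89119", "102100"));
    }
}
```

For the example address, build returns SingleLine=6720+Via+Austi+Pky%2C+Las+Vegas%2C+NV%2C+89119&outSR=102100&f=json.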
The Response Map tab handles the response from Esri and maps the latitude and
longitude to the Location record. Note that since the response is in JSON format, the
syntax in the External columns is in JSONPath.
{
   "candidates": [
      {
         "address": "6720 Via Austi Pky, Las Vegas, NV, 89119",
         "location": {
            "x": -12819744.862808136,
            "y": 4309924.3334144857
         },
         "score": 100,
         "attributes": {}
      },
      {
         "address": "6721 Via Austi Pky, Las Vegas, NV, 89119",
         ...
      },
      {
         "address": "Via Austi Pky, Las Vegas, NV, 89119",
         "location": {
            "x": -12819804.948472099,
            "y": 4309871.0261052754
         },
         "score": 100,
         "attributes": {}
      }
   ]
}
4. The values in the Response Map tab specify that the integration needs to extract
candidates[0].location.y and candidates[0].location.x from the JSON object
and map them to the triGisLatitude and triGisLongitude fields in IBM TRIRIGA.
5. The Location record is updated with the new data.
6. Since no action was specified for the data being sent out of IBM TRIRIGA in the Query for
Outbound section, no actions are triggered on the Location data.
7. The final tally of the integration process is collected and an Execute History object is created
with the information and any errors that may have occurred.
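The extraction in step 4 is handled by the platform's own JSONPath processing. As a dependency-free illustration of what the Response Map pulls out, the following sketch reads the first x and y values from a response string; the class and method names are hypothetical, and a production integration would use a real JSON parser.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GeocodeResponse {
    // Returns the first "x" or "y" coordinate in an Esri JSON response,
    // equivalent to the JSONPath expressions candidates[0].location.x and
    // candidates[0].location.y when the first match belongs to the first
    // candidate.
    public static double firstCoordinate(String json, String axis) {
        Matcher m = Pattern
                .compile("\"" + axis + "\"\\s*:\\s*(-?[0-9]+(?:\\.[0-9]+)?)")
                .matcher(json);
        if (!m.find()) {
            throw new IllegalArgumentException("no \"" + axis + "\" in response");
        }
        return Double.parseDouble(m.group(1));
    }
}
```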
Additional Resources
Upgrading the TRIRIGA Integration Object
When a new installer is available for the IBM TRIRIGA Application Platform, you can
control whether the platform installer updates the TRIRIGA Integration Object when it
runs. To signal to the platform installer that it should not upgrade the TRIRIGA
Integration Object, create an Integration Object record named IGNORE_UPGRADE before
you start the platform installer. Do not execute the record; it only has to be
present in the system. When the platform installer runs, it then does not update the
Integration Object even if the platform installer build is newer than the currently
installed platform version.
If you do not create an Integration Object record named IGNORE_UPGRADE, when the
platform installer runs, it updates the Integration Object if the platform installer
build is newer than the currently installed platform version.
Integration ClassLoader
The Integration Object requires a TRIRIGA ClassLoader instance object named
Integration. The Integration ClassLoader object is made up of three elements in the
Resource Files query section: TRIRIGAIntegration.jar,
TRIRIGAIntegration_Assets.zip, and jtds-1.2.5.jar.
Standard Workflows
triIntegration - Asynchronous Execute
This workflow is triggered when the user selects the Execute action. It is the
primary workflow. The following diagram shows the standard workflow:
The workflow controls the status displayed to the user. The Trigger Integration task
is a Custom workflow task that calls
Integration:com.tririga.custom.integration.Integration and is the primary entry
point for all Integration Objects. The information about what to do during the
integration is defined in the Integration Object and is passed by record ID to the
Custom task.
triIntegration - Synchronous Generate SQL for Table
This workflow is triggered from the Database section action to generate the SQL for
use with your staging tables.
triIntegration - Synchronous Generate Test Data
This workflow is triggered from the Database section action to generate randomized
data that populates your staging tables for testing, such as functional testing or
load testing.
triIntegration - Synchronous HideShow Data Sections
This workflow is triggered when a new Integration Object is first loaded, and as an
OnChange workflow from various elements on the form. It shows and hides the form
elements as needed for the integration you are defining.
triIntegration - Synchronous PreLoad
This workflow is called when a new Integration Object is opened. It calls the
triIntegration - ResetMetaData and triIntegration - HideShow Data Sections
workflows.
triIntegration - Asynchronous Resubmit
This workflow is triggered when you resubmit a failed record from the
triIntegrationInstanceFailure record.
Standard Queries
Manager Default - Integration Objects
triIntegration - get Instances
triIntegration - getIntegrationObject
triDatasource - getIntegrationObject datasource
triIntegrationFailures - Get all failures
User Guides
See the following IBM TRIRIGA user guides for more information about the IBM TRIRIGA
Application Platform, such as workflows, the Data Modeler, business objects, forms,
queries, state transitions, the IBM TRIRIGA Connector for Business Applications,
ClassLoaders, and Servlet Proxy.
Application Building for the IBM TRIRIGA Application Platform 3
Application Building for the IBM TRIRIGA Application Platform 3: Data
Management
IBM TRIRIGA Application Platform 3 Connector User Guide (this book)
IBM TRIRIGA Connector for Business Applications 3 Technical Specification
With IBM TRIRIGA connectors, you can write extended functionality and distribute
that functionality in an object migration package. Connectors use the ClassLoader
business object and resource files, and custom workflow components such as
CustomTask, CustomParameters, and CustomTransitions. The servlet proxy is an
extension of class loaders. The servlet proxy gives you a handle to the Java IBM
TRIRIGA Connector for Business Applications (CBA) API. CBA uses Java servlet-style
programming to integrate with external systems through custom form components.
Before you create IBM TRIRIGA connectors, you must be familiar with the IBM TRIRIGA
Application Platform builder tools, the IBM TRIRIGA Connector for Business
Applications web interface, and the Java programming language. Connectors can be
implemented only in the Java programming language.
To access your classes from a Custom task that is loaded through the ClassLoader,
make sure that the following elements are in place.
Step 1 Start your class packages with one of the following three structures. Any other
structure is blocked.
com.tririga.ps
com.tririga.appdev
com.tririga.custom
Step 2 Specify the ClassLoader name followed by a colon in the ClassName field in the
workflow Custom task.
For example, if you have a ClassLoader instance named MyClassLoader and your
entry class is com.tririga.custom.myclassloader.Hello, the value in your
ClassName field is:
MyClassLoader:com.tririga.custom.myclassloader.Hello
When you use this naming convention, the workflow engine can locate your class within
the context of the specified class loader.
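The two parts of the ClassName value can be illustrated with a small parser; this helper class is hypothetical and only demonstrates how the ClassLoader name and the entry class relate in the convention.

```java
public class CustomTaskClassName {
    // Splits a workflow Custom task ClassName value of the form
    // "<ClassLoaderName>:<fully.qualified.entry.Class>" into its two parts.
    public static String[] parse(String value) {
        int colon = value.indexOf(':');
        if (colon < 0) {
            throw new IllegalArgumentException("expected <ClassLoaderName>:<class>");
        }
        return new String[] { value.substring(0, colon), value.substring(colon + 1) };
    }
}
```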
To research how a Custom task is implemented and what it offers you, read the
“Custom Task” section of the “Workflow” chapter in the Application Building for the
IBM TRIRIGA Application Platform 3 book. The ClassLoader object only provides an
easy handle to hot deploy and safely manage your Custom task implementations. It
does not add to or change the functionality of a Custom task.
The Application Building for the IBM TRIRIGA Application Platform 3 book instructs
you to put your files into the application server lib directory. You can forgo that
step, which becomes complicated when you have multiple servers. Instead, add the class
loader to the database and have the container intelligently extract and use the
classes.
Step 1 Go to Tools > System Setup > System > Class Loader.
Provide a unique name and the class loader type. In the Resource Files section, add
your classes and form assets, such as HTML, Flash, js, image, and property files.
In each resource file record, upload the file to be used for this class loader in the
Resource File field. A resource file can be used in more than one class loader.
A good rule of thumb is to prefix library names with an abbreviation of the class
loader and the real name of the library.
If you are uploading a .jar file, only the .class files are loaded into the class path. If
you have many assets (for example, html files, js files, and image files), you can
collect them into a compressed file and upload them as a single file. You can also
upload a file individually, such as a configuration file, so that you can modify it
more easily.
Development Mode
In development mode, you can change files and see your changes with a refresh of the
page without uploading them to the ClassLoader object.
The file types that you can change include HTML, JS, flash, and images.
Attention – If you clear the Development Mode check box, the system pulls the latest files
from the class loader and can overwrite your work.
Servlet Proxy
The following diagram shows how the Servlet Proxy works on the server:
Setting up your Servlet Proxy to render correctly and pass through your code
Step 1 Create a Java class in the package com.tririga.custom and implement the
com.tririga.pub.adapter.IConnect Java Interface.
The com.tririga.custom package is the only package that you can use to create
an implementation class. It must be unique.
This example has a handle to a TririgaWS interface class. This class is the Java
interface for the IBM TRIRIGA Connector for Business Applications API. The example
also shows a basic request and response that you would normally have in a Java
servlet.
Step 2 Continuing with the example, add the following code to the execute method where
it says your code goes here.
PrintWriter out = null;
try {
    response.setContentType("text/html");
    out = response.getWriter();
    out.println("<html><head></head><body marginwidth='0' marginheight='0' "
            + "style='margin:0;padding:0;border:0;'>");
    out.print("Hello World");
    out.println("</body></html>");
    out.flush();
} finally {
    if (out != null) {
        out.close();
    }
}
Step 3 Compile this class and add it to a .jar file named MyFirstConnector.jar.
Step 4 Go to Tools > System Setup > System > Class Loader and click Add.
The ClassLoader name and the Java class that implements IConnect must have the
same name. You can have only one IConnect implementation class per ClassLoader
object. In the example, this class is named MyFirstConnector, and that is what you
must name your ClassLoader instance.
Step 5 Add a new Resource File record and upload your MyFirstConnector.jar file.
When you modify, add, or remove a resource file from a ClassLoader record, a
workflow runs that increments the revision number. A change to this revision number
tells the IBM TRIRIGA Application Platform to reload this ClassLoader record.
Accessing a Connector
If you have configured your example MyFirstConnector Servlet Proxy correctly, it is
available at the following URL:
http://<yourserver>/html/en/default/rest/MyFirstConnector.
You must have a valid login to access this URL. The easiest way to provide access is
to add this URL to an external link section in a portal section, or to an external
link on a custom tab within any form.
All access to your Servlet Proxy is from this base URL. When you run the base URL in
this example, your screen displays the words, “Hello World.”
To access the files within your ClassLoader resource files, append the word resource
to the base URL, followed by the path to the resource that you want to load. For
example, if you have an image resource file named helloWorld.jpg, you can load this
image dynamically with this URL:
http://<yourserver>/html/en/default/rest/MyFirstConnector/resource/helloWorld.jpg
Alternatively, you can access the ClassLoader resource files directly from the Java
class by using this.getClass().getClassLoader().getResource(),
this.getClass().getClassLoader().getResources(), or
this.getClass().getClassLoader().getResourceAsStream(). For example,
you might use this method for reading a properties file.
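For example, the following sketch loads a properties file through the class loader in this way; the helper class name is hypothetical, and the method returns an empty Properties object when the resource is not present.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ResourceProperties {
    // Loads a properties file from the class path through the class loader,
    // as described above.
    public static Properties load(Class<?> anchor, String name) {
        Properties props = new Properties();
        try (InputStream in = anchor.getClassLoader().getResourceAsStream(name)) {
            if (in != null) {
                props.load(in);
            }
        } catch (IOException e) {
            throw new IllegalStateException("cannot read " + name, e);
        }
        return props;
    }
}
```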
The server checks to see whether this resource is loaded. If not, it pulls the file from
the binary Resource File field on the Resource File record and places it into the
<IBM_TRIRIGA_INSTALL_FOLDER>/userfiles/<ClassLoaderName> folder.
When a request is made, the server checks the cache and matches it to the revision
number in the class loader. If the revision numbers are different, it reloads all the
files that are not part of a .jar file into this directory. Then, it refers to this location
for each subsequent request.
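The revision check described above can be sketched as follows; the class and field names are illustrative, not the platform's actual implementation.

```java
public class ResourceCache {
    // Compares the cached revision number with the class loader's current
    // revision and reports whether the non-jar resource files need to be
    // extracted again.
    private int cachedRevision = -1;

    public boolean needsReload(int currentRevision) {
        if (currentRevision != cachedRevision) {
            cachedRevision = currentRevision;
            return true; // pull the files from the Resource File records again
        }
        return false;
    }
}
```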
Your files are available on any application server and are automatically refreshed
each time that a change is made. You do not need to bounce the server to refresh the
class loader.
Concern: How to start platform logging
Remedy: In the Administrator Console, in the Platform Logging managed object, turn on
debugging for the Class Loader and the Servlet Proxy objects. These logs are verbose
and give you a good understanding of what the server is doing. For information about
how to access and use the Administrator Console, see the IBM TRIRIGA Application
Platform 3 Administrator Console User Guide book.
Concern: Simplify debugging of class loaders and servlet proxies
Remedy: To add a custom category to the Platform Logging managed object in the
Administrator Console, add the custom category to the CustomLogCategories.xml file
and restart the server. The CustomLogCategories.xml file is in the
<IBM_TRIRIGA_INSTALL_FOLDER>/config folder. This method is preferred because you set
it up one time. If the server is restarted, you can turn DEBUG back on by selecting
the check box for your custom category.
Concern: .jar files do not deploy as expected
Remedy: Do not add multiple instances of the same .jar file to a Class Loader record,
for example, one added directly and one contained within a compressed file. When this
occurs, the instance of the .jar file that is loaded is not predictable.
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other
countries. Consult your local IBM representative for information about the products and
services currently available in your area. Any reference to an IBM product, program, or
service is not intended to state or imply that only that IBM product, program, or service
may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or
service.
IBM may have patents or pending patent applications covering subject matter described
in this document. The furnishing of this document does not grant you any license to
these patents. You can send license inquiries, in writing, to:
For license inquiries regarding double-byte character set (DBCS) information, contact the
IBM Intellectual Property Department in your country or send inquiries, in writing, to:
Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan, Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan
The following paragraph does not apply to the United Kingdom or any other country
where such provisions are inconsistent with local law:
Any references in this information to non-IBM Web sites are provided for convenience
only and do not in any manner serve as an endorsement of those Web sites. The
materials at those Web sites are not part of the materials for this IBM product and use of
those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes
appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of
enabling: (i) the exchange of information between independently created programs and
other programs (including this one) and (ii) the mutual use of the information which has
been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758
U.S.A.
The licensed program described in this document and all licensed material available for
it are provided by IBM under terms of the IBM Customer Agreement, IBM International
Program License Agreement or any equivalent agreement between us.
Information concerning non-IBM products was obtained from the suppliers of those
products, their published announcements or other publicly available sources. IBM has not
tested those products and cannot confirm the accuracy of performance, compatibility or
any other claims related to non-IBM products. Questions on the capabilities of non-IBM
products should be addressed to the suppliers of those products.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business operations.
To illustrate them as completely as possible, the examples include the names of
individuals, companies, brands, and products. All of these names are fictitious and any
similarity to the names and addresses used by an actual business enterprise is entirely
coincidental.
Privacy Policy Considerations
IBM Software products, including software as service solutions, (“Software Offerings”)
may use cookies or other technologies to collect product usage information, to help
improve the end user experience, to tailor interactions with the end user or for other
purposes. In many cases no personally identifiable information is collected by the
Software Offerings. Some of our Software Offerings can help enable you to collect
personally identifiable information. If this Software Offering uses cookies to collect
personally identifiable information, specific information about this offering’s use of
cookies is set forth below.
This Software Offering does not use cookies or other technologies to collect personally
identifiable information.
If the configurations deployed for this Software Offering provide you as customer the
ability to collect personally identifiable information from end users via cookies and other
technologies, you should seek your own legal advice about any laws applicable to such
data collection, including any requirements for notice and consent.
For more information about the use of various technologies, including cookies, for these
purposes, see IBM’s Privacy Policy at www.ibm.com/privacy and IBM's Online Privacy
Statement at www.ibm.com/privacy/details in the section entitled “Cookies, Web
Beacons and Other Technologies” and the "IBM Software Products and Software-as-a-
Service Privacy Statement" at www.ibm.com/software/info/product-privacy/.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International
Business Machines Corp., registered in many jurisdictions worldwide. Other product and
service names might be trademarks of IBM or other companies. A current list of IBM
trademarks is available on the Web at “Copyright and trademark information” at
www.ibm.com/legal/copytrade.shtml.