
IMPORTANT INFORMATION
=====================

Enterprise Edition on Windows
-----------------------------
This is the first release of DataStage Enterprise Edition on Windows.

Please refer to the document winee.pdf for installation pre-requisites and
requirements, and for differences between the Windows and Unix Enterprise
Editions. This document can be found on the Windows CD image in the following
location:

    \DataStage Client\Docs\Enu\Books\winee.pdf

This file is also linked to the html release notes on Client installs and on
the Windows CD. In summary, the requirements for Enterprise Edition on Windows
Server are:

- Microsoft Windows Server 2003
- Microsoft .Net 2003

Release 7.5x2 is supported on the Windows platform only. Please ignore all
references to Unix platforms in this document and the manual set. Future
releases will be supported on all platforms.

Oracle OCI 9 Plug-in
--------------------
The Oracle OCI 9 plug-in has been renamed to Oracle OCI since it is now also
compatible with Oracle 10g. Because the plug-in's name is used to determine
whether it has been previously installed in an earlier version of DataStage,
the DataStage installer will not recognize that the plug-in is being upgraded
and thus will not preselect the plug-in for installation. To upgrade your
Oracle OCI 9 plug-in you should select the Oracle OCI option when prompted to
choose plug-ins to install.

Using Oracle 9i or 10g with DS EE on HPUX
-----------------------------------------
When using Oracle 9i or 10g with DS EE on HPUX 11.00 or 11.11, you must add
the following paths to your SHLIB_PATH before ORACLE_HOME:

    $DSHOME/java/jre/lib/PA_RISC2.0
    $DSHOME/java/jre/lib/PA_RISC2.0/hotspot

Compiler configuration for Red Hat LINUX AS 3.0 Users
-----------------------------------------------------
On Red Hat LINUX AS 3.0, the DataStage server requires version 2.96 of the g++
compiler, as specified in the DataStage Install & Upgrade Guide. To determine
the version of the compiler that is currently configured as the default, run
the following command:

    g++ --version

If the g++ version is not 2.96, you will need to determine whether gcc/g++
2.96 is installed on your system.
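The version check above can also be scripted, for example in an install
pre-check. The sketch below is illustrative only: the sample version string is
a stand-in so the sketch is self-contained; on a real system assign
`ver=$(g++ --version | head -n 1)` instead.

```shell
# In real use: ver=$(g++ --version | head -n 1)
# A sample 2.96 version string stands in here so the sketch runs anywhere.
ver="g++ (GCC) 2.96 20000731"

# Test whether the default compiler reports version 2.96.
case "$ver" in
  *2.96*) result="default g++ is 2.96" ;;
  *)      result="default g++ is NOT 2.96: $ver" ;;
esac
echo "$result"
```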
To do this, run the Red Hat package manager "redhat-config-packages" and check
whether the "Legacy Software Development" package has been installed. If it
has not been installed, you will need to install this package, as it contains
the gcc/g++ 2.96 compilers.

Next, you will need to decide whether the gcc/g++ 2.96 compilers will become
the default compilers for all Red Hat LINUX AS 3.0 users or whether their use
will be specific to DataStage.

To set the g++ 2.96 compiler and linker for DataStage use only, perform the
following steps:

1. Start the DataStage Administrator client.
2. Select a project, click Properties and then click Environment.
3. In the Categories window, select Parallel/Compiler.
4. In the Details window, select APT_COMPILER and double click the value
   field. Enter the fully qualified path to the g++ 2.96 compiler, e.g.
   /usr/bin/g++296. Click on another field to make sure the path entered is
   accepted.
5. In the Details window, select APT_LINKER and double click the value field.
   Enter the fully qualified path to the g++ 2.96 compiler, e.g.
   /usr/bin/g++296. Click on another field to make sure the path entered is
   accepted.
6. Click OK to close the Environment dialog, then click OK to close the
   Properties dialog.
7. Repeat steps 2 through 6 for each project.

To set the 2.96 gcc/g++ compilers as the default for all Red Hat LINUX AS 3.0
users, perform the following steps:

1. Change to the /usr/bin directory (cd /usr/bin).
2. Move gcc to gcc3, or whatever name makes sense (mv gcc gcc3).
3. Move g++ to g++3, or whatever name makes sense (mv g++ g++3).
4. Soft link gcc to gcc296 (ln -s gcc296 gcc).
5. Soft link g++ to g++296 (ln -s g++296 g++).

UniVerse Hash File Write Errors (ecase 52243)
---------------------------------------------
At DataStage 7.5, failures that occur when writing to files with the Hashed
File stage are logged. In previous releases, write failures were ignored and
not logged.

Change to Final delimiter behavior for Parallel jobs
----------------------------------------------------
This affects the Final delimiter format property on the Format tab. If you
have a subrecord or tagged column that is the last column of the record at
that level (it won't be the very last column, since its last sub-column will
be), then the previous behavior was that even if Final delimiter was set to
"none", a Delimiter (defined in the property of that name) was inserted at
the end of the subrecord/tagged column. This behavior was incorrect.

The new default behavior is for this not to happen unless the subrecord or
tagged column is also a vector, in which case the previous behavior applies.
The new default behavior can be reverted to the previous behavior by setting
the APT_PREVIOUS_FINAL_DELIM_COMPATIBLE environment variable to 1 in the
Administrator client. Note that this will affect all job runs. To set this
environment variable for a specific job only, first define the environment
variable in the Administrator client with a default of empty-string. This
ensures that the environment variable is defined for the project but not set,
and will therefore not affect the normal behavior. A job can then be set to
use the previous behavior by using a job parameter tied to this environment
variable, setting the default value to 1.

Optimized variable length string handling - filesets need re-generating
-----------------------------------------------------------------------
Optimizations have been made to the variable length string handling in the
Parallel engine. This change is not backward compatible, in that lookup
filesets created with version 7.0 or earlier of Enterprise Edition (or
Parallel Extender) will have to be re-generated. Filesets created with DS
7.0.1 or later will not need to be re-generated with this release.

Parallel Jobs must be re-compiled after upgrade (from pre-7.0 releases)
-----------------------------------------------------------------------
If you are installing DataStage Enterprise Edition (previously DataStage XE
Parallel Extender), some parallel jobs must be re-compiled after the upgrade
is complete. Due to changes in DataStage Enterprise Edition, all parallel
jobs which use a Transformer must be re-compiled after the upgrade is
complete. Compilation must be done from a DataStage client. A new feature
exists in this release to aid in multi-job compilation: the Batch Job
Compilation Wizard, available from the "Tools -> Run Multiple Job Compile
Tool" menu option. This wizard allows batch compilation of DataStage jobs.
There is also a command-line (client-side) interface. See the Compiler Wizard
section of the DataStage Designer Guide for additional information.

New Installation & Configuration Guide for Plug-ins
---------------------------------------------------
This release sees the introduction of the DataStage Plug-in Installation and
Configuration Guide. This new guide is located in the documentation installed
with the DataStage Client, or on the installation CD in the DataStage
Packages\Plugin Stages\Docs directory (Windows) or the packages/doc directory
(Unix).

XML Pack 2.0
------------
"Ascential Packaged Application Connectivity Kit for XML (Ascential PACK for
XML)" provides XML manipulation capabilities to Ascential DataStage jobs.
Ascential PACK for XML allows DataStage jobs to incorporate XML-specific
Stages, including the XML Input and Output Stages, for reading and writing
XML 1.0 compliant documents with XML Schema validation, and the XML
Transformer Stage, for applying Extensible Stylesheet Language
Transformations (XSLT) to XML data. Ascential PACK for XML also includes an
XML Meta Data Importer for parsing XML data structures into easy-to-use Meta
data definitions. As an add-on component to DataStage, Ascential PACK for XML
includes everything needed to incorporate XML Stages into DataStage jobs,
including XML parsers, and XSL and XPATH engines.

Future XML Development with DataStage:

Existing DataStage jobs using the XML Pack 1.0 will continue to work with the
previous XML Pack. However, to take advantage of the new capabilities of XML
Pack 2.0, existing applications should be re-designed using the stages in the
new XML Pack. All new development going forward should use XML Pack 2.0.

Backward Compatibility:

XML Pack 2.0 and higher is not backward compatible with the previous XML Pack
1.0. Consequently, table definitions created by the previous version of the
XML Pack cannot be reused with the new one. New table definitions must be
created using the new XML Meta Data Importer (installed by default). No
automatic migration is provided between the XML Pack 1.0 and XML Pack 2.0, as
each pack supports a different type of XML meta data: the 1.0 Pack supports
XML DTDs, which are strictly a legacy XML definition, while the 2.0 Pack
supports XSDs. Consequently, we recommend you use a third party tool, e.g.
XMLSpy, to generate an XSD compliant definition from your old DTD.

Both XML Pack 1.0 and 2.0 can be used simultaneously in the same job.
Existing jobs built with the XML Pack 1.0 can still be used without any
modifications, but require the installation of the XML Pack 1.0, which is not
installed by default.

Installation:

XML Pack 2.0 is the default XML interface installed with this release of
DataStage (Meta Data Importer and XML Stages). XML Pack 1.0 is not installed
by default but can easily be added - it is located in the XMLPack1.0 folder
in the root directory of the DataStage Installation CD. Note that the
DataStage clients are installed with a new XML Meta Data Importer (targeted
at XML Pack 2.0) which is not compatible with XML Pack 1.0 stages. To import
XML table definitions which are compatible with XML Pack 1.0, use the
XMLImporter.exe program located in the installed client directory. Note that
the new XML Pack 2.0 is not currently available on HP/Compaq TRU64.
Time Based Job Monitoring - Intermittent Problems
-------------------------------------------------
Intermittent problems have been observed while running jobs on the Parallel
canvas when time based job monitoring is enabled (the default). Time based
job monitoring can be disabled in favor of size based job monitoring. This is
done by unsetting the APT_MONITOR_TIME environment variable and setting the
APT_MONITOR_SIZE variable to a suitable number, e.g. 1000000. This will cause
the job to update row count information every 1000000 rows. The environment
variables can be set in the Project Properties (in the Administrator) - this
will affect all jobs. Alternatively, they can be set for an individual job
using the Job Properties screen in the Designer.

New UniData 6 stage available for Server jobs (not Linux or Tru64)
------------------------------------------------------------------
The new UniData 6 stage allows access to IBM UniData 6 databases. The
existing UniData stage is still supported for access to pre-UniData 6
databases. In order to use this stage, a patch release of UniData 6 is
required (for all platforms), as follows:

    Platform    UniData Server    UniData Client
    --------    --------------    --------------
    Windows     6.0.7             6.0B
    AIX         6.0.8             6.0B
    HP          6.0.9             6.0B
    Solaris     6.0.9             6.0B

This stage is not currently supported on Linux or Tru64; it may be made
available on these platforms at a later date. Details of configuration can be
found in the DataStage manual "Accessing UniVerse and UniData Databases from
DataStage".

Change to default port number for UniData 5.2 (HP-UX only)
----------------------------------------------------------
The default port number used for connection to UniData 5.2 has been changed
to 4343 on HPUX (this is the value currently used by all other platforms). In
the event that customer installations need to use the old port number (4344),
it can be changed by setting the UDTINI environment variable. The value of
UDTINI uses the same format as if the user had set up a server entry in the
uniapi.ini file, i.e. it consists of three parts separated by commas:

    <PROTOCOL>, <internetAddress>, <portNumber>

e.g.:

    13,127.0.0.1,4344

Add this environment variable into the dsenv file in the DSEngine directory,
next to the other UDT variables, and restart the DataStage services. e.g.:

    if [ -z "$UDTHOME" ]
    then
        UDTHOME=/Ascential/DataStage/ud41; export UDTHOME
        UDTBIN=/Ascential/DataStage/ud41/bin; export UDTBIN
        UDTINI=13,127.0.0.1,4344; export UDTINI
    fi

Change to operation of dsjob on Unix
------------------------------------
dsjob on Unix now outputs the return code to the standard error stream in all
circumstances (this is true of the Windows version as well). On Unix the
return code will be 255 if the real return code was negative or greater than
254. If the return code is 255, then to access the 'real' return code users
must capture the stderr stream and process the text output.

DataStage Reporting Assistant
-----------------------------
No further enhancements are planned to the DataStage Reporting Assistant, as
this functionality is now provided with MetaStage.

SyncSort option removed from Parallel Sort Stage
------------------------------------------------

The Parallel Sort stage no longer supports the SyncSort sorting utility via
the "Sort Utility" property. Either the DataStage or Unix sort utility must
be used instead.

Oracle 8.1.6 on AIX no longer supported by Oracle OCI8i plug-in
---------------------------------------------------------------
Oracle 8.1.6 on AIX is no longer supported by the Oracle OCI8i plug-in at
this release. This is due to a conflict of library symbols between the Oracle
client library and the Java runtime library used by DataStage. In this
situation we recommend that you upgrade to the Oracle 8.1.7 client.

Disabling Use of Shared Memory on Tru64 5.1A (Enterprise Edition)
-----------------------------------------------------------------
In order to improve performance of the Parallel Engine, it is recommended
that use of shared memory be disabled on Tru64 5.1A. This can be done by
setting the APT_PM_NO_SHARED_MEMORY environment variable to 1 using the
DataStage Administrator client.

Sybase 12.5: Sybase BCP Load and Sybase OC stages
-------------------------------------------------
At DataStage release 7.0 the Sybase OC stage was rebuilt against version 12.5
of the Sybase Open Client. At DataStage release 7.0.1 the BCPLoad stage was
also rebuilt against version 12.5 of the Sybase Open Client. If you are using
an earlier version of the Sybase Open Client you will need to upgrade. Please
note that this version of the Sybase Open Client is compatible with earlier
versions of the Sybase ASE server.

Environment configuration for Informix CLI stage
------------------------------------------------
Testing has shown that it is not possible to configure a Unix environment to
connect to both Sybase ASE and Informix IDS databases at the same time. This
means that you cannot construct a job that contains both the Sybase OC stage
and the Informix CLI stage. In addition, some specific configuration of the
environment needs to be done for the Informix CLI stage to connect to an
Informix IDS database.
You must ensure that the following files are configured correctly:

    /etc/hosts
    /etc/services
    $INFORMIXDIR/etc/sqlhosts
    $DSHOME/.odbc.ini
    $DSHOME/../Projects/<Project Name>/uvodbc.config

Also, in the $DSHOME/dsenv file, you must verify that the following
environment variables are set appropriately: INFORMIXDIR, INFORMIXSERVER,
INFORMIXBIN, INFORMIXC, THREADLIB. The PATH environment variable should have
$INFORMIXDIR/bin appended to it. The environment variable LIBPATH
(LD_LIBRARY_PATH on some platforms) should have $INFORMIXDIR/lib,
$INFORMIXDIR/lib/cli and $INFORMIXDIR/lib/esql appended to it. Ensure that
these three are after the DataStage directories in LIBPATH (or
LD_LIBRARY_PATH).

AIX
***
Here is an example of the settings in the $DSHOME/dsenv file for AIX:

    #Informix sdk 2.8 UC2-1
    # INFORMIXSERVER=<server name>; export INFORMIXSERVER
    # INFORMIXDIR=<Informix directory path>; export INFORMIXDIR
    LIBPATH=`cat /.dshome`/lib:$LIBPATH:$INFORMIXDIR/lib:$INFORMIXDIR/lib/cli:$INFORMIXDIR/lib/esql; export LIBPATH
    INFORMIXC=CC; export INFORMIXC
    THREADLIB=POSIX; export THREADLIB
    PATH=$PATH:$INFORMIXDIR/bin; export PATH

NOTE: The LIBPATH definition is a single line, even if it wraps in this
display. The protocol onsoctcp is used.

Solaris
*******
On Solaris, when Parallel Extender is installed, some additional
configuration needs to be performed to avoid conflicts. Below is an example
$DSHOME/dsenv file. Note the sequence of entries in the LD_LIBRARY_PATH:

    #Informix sdk 2.8 UC1
    # INFORMIXSERVER=<server name>; export INFORMIXSERVER
    # INFORMIXDIR=<Informix directory path>; export INFORMIXDIR
    INFORMIXBIN=$INFORMIXDIR/bin; export INFORMIXBIN
    LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$INFORMIXDIR/lib:$INFORMIXDIR/lib/cli:$INFORMIXDIR/lib/esql:$APT_ORCHHOME/lib:$APT_ORCHHOME/DSCAPIOp:$APT_ORCHHOME/osh_wrappers:$APT_ORCHHOME/usr_osh_wrappers:$APT_ORCHHOME/etc; export LD_LIBRARY_PATH
    INFORMIXC=CC; export INFORMIXC
    THREADLIB=POSIX; export THREADLIB
    PATH=$PATH:$INFORMIXDIR/bin; export PATH

NOTE: The LD_LIBRARY_PATH definition is a single line, even if it wraps in
this display. The protocol should be set to ontlitcp.

Environment configuration for the Oracle OCI Load stage
-------------------------------------------------------
On HP-UX, the order of the DataStage libraries in the SHLIB_PATH environment
variable needs to be changed for the Oracle OCI Load stage as follows:

    SHLIB_PATH=$DSHOME/lib:$SHLIB_PATH

The DataStage server libraries need to be loaded first so that the stage uses
the Oracle client library (libclntsh.sl), which has to be generated according
to the instructions in the DataStage Plug-in Installation and Configuration
Guide. A similar library path order is needed on Tru64.
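As a minimal runnable illustration of the SHLIB_PATH ordering above - the
DSHOME and initial SHLIB_PATH values below are placeholders, not your real
install paths:

```shell
# Placeholder values standing in for a real HP-UX environment.
DSHOME=/opt/Ascential/DataStage/DSEngine
SHLIB_PATH=/usr/lib:/opt/oracle/lib

# Prepend the DataStage server libraries so they are searched first,
# ahead of anything else already on the path.
SHLIB_PATH=$DSHOME/lib:$SHLIB_PATH; export SHLIB_PATH
echo "$SHLIB_PATH"
```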
Solaris Installation
--------------------
A problem has been encountered on multi-processor Solaris systems which
causes sporadic failures at DataStage installation time and when running
jobs. This issue was fixed by Sun for bug 4335956. The fix can be found in
the following Solaris patches: 106980-19 (Solaris 2.7), 108827-26 (Solaris
2.8). It is recommended that you install the patch relevant to your system
before installing or upgrading DataStage.

Solaris 2.7/2.8 Installation
----------------------------
The following patches are required for Parallel Extender installations. They
fix an error about "Crun::exit_register()" being missing.

Solaris 8: Patch 108434/12 or later
http://sunsolve.sun.com/pub-cgi/findPatch.pl?patchId=108434&rev=12

Solaris 7: Patch 106327/12 or later
http://sunsolve.sun.com/pub-cgi/findPatch.pl?patchId=106327&rev=12

Solaris 2.9 Installation
------------------------
In order to install this release on Solaris 2.9, a patched version of cpio is
required. A T-PATCH is available from Sun for this and is referenced as
T113280-03. Without this patch, the install fails before any prompts are
displayed, and the log details are as follows:

    ============================================================
    Executing setup to obtain installation requirements
    Thursday May 8 18:03:46 BST 2003
    ============================================================
    cpio: Cannot open/create object/repos/APM.CHECK.FILE.B, errno 13, Permission denied
    cpio: Cannot open/create object/repos/APM.GET.JOB.DEPENDENTS.B, errno 13, Permission denied
    cpio: Cannot open/create object/repos/DSRMapTbl.B, errno 13, Permission denied

AIX Installation
----------------
If you do not have IBM VisualAge C++ Compiler Version 6.0 installed, you must
install the appropriate runtime library filesets before installing DataStage.
If you are using Enterprise Edition, and therefore using the VisualAge C++
compiler, you must make sure the appropriate runtime libraries for the
specific version of the compiler are installed on all systems to which
Enterprise Edition jobs will be deployed.
The runtime library filesets can be found as follows: the VisualAge C++ for
AIX V6.0 Runtime Libraries can be downloaded from the IBM website at the
following address:

http://www-1.ibm.com/support/docview.wss?rs=0&uid=swg24001467

TRU64 Installation
------------------
After mounting the CDROM, if the file names do not appear in uppercase
followed by a ;1, you should re-mount the CDROM device using the -o ISO9660
option. For example:

    mount -r -t cdfs -o ISO9660 /dev/disk/cdrom0c /cdrom

If you are installing this release of DataStage on an HP/Compaq Tru64 system
you MUST install the Compaq C++ Run-Time Library first. The library is
provided as part of a redistribution kit, details of which can be found on
the Compaq web site at the following location:

ftp://ftp.compaq.com/pub/products/C-CXX/Tru64/cxx/cxxredist.htm

The redistribution kit is supplied in the form of a tar file and can be
downloaded from the following address:

ftp://ftp.compaq.com/pub/products/C-CXX/tru64/cxx/

It is recommended that you always download the latest redistribution kit. At
the present time this is CXXREDIST632V20.tar. Note that DataStage has been
built with version 6.3-014 of the library, so this is the minimum revision
required.

HP-UX Installation
------------------
From 7.0 onwards, DataStage Enterprise Edition (Parallel Extender) requires
the aCC 3.50 compiler on HP-UX with the patch PHSS_29483. This patch can be
downloaded from the HP-UX website. Below is information we received directly
from HP on how to obtain this patch:

    The patch PHSS_29483 is available externally now. This patch must be
    applied to version 3.50 only. This patch will change the version of the
    compiler from 3.50 to 3.52. The patch is available at: http://itrc.hp.com
    You may have to register and get an account before you get access to the
    patches from this site. Registration at itrc is free.

Clean Up Project functionality removed
--------------------------------------
The Clean Up Project functionality has been removed from the Administrator.

********************* End of IMPORTANT INFORMATION ****************
