Dell Oracle/Linux
Services
By Mahesh Pakala
Global Infrastructure Consulting Services
July 2009
1 Executive Summary
This paper explores different migration strategies, including third-party software products such as GoldenGate's (GG) Transactional Data Management (TDM) software and Quest's SharePlex, for use as transactional data replication solutions, and evaluates their suitability for improving availability during dynamic 9i, 10g, and 11g database migrations from any combination of Solaris, HP-UX, AIX, and Windows platforms to x86 Dell Linux platforms. This technical reference aims to help readers understand the general mechanics of platform migration processes, how to apply these solutions to database migration, and how to determine the metrics for measuring a product's capabilities and scalability under a transaction load. This paper is not intended to be a full assessment of the GoldenGate or SharePlex products, or a comparison with any other similar product.
Many customers cannot tolerate the length of downtime (an offline database that is unavailable to application users and transactions) required for a totally static (offline) database migration. This paper addresses the essential need to capture real-time transactional data on a production Solaris/HP-UX/AIX/Windows platform and replicate it to new Dell Linux servers where the interruption of service must be kept near zero. The knowledge gained from this paper will assist in formulating strategies and methodologies to achieve near-zero-downtime dynamic database migrations to Oracle10g or Oracle11g.
This paper explains how to use various Oracle tools, as well as third-party tools such as GoldenGate's TDM and Quest's SharePlex migration solutions, for migrating to Oracle Database 10g Release 2 and Oracle Database 11g on a new Linux platform. It complements the other Maximum Availability Architecture (MAA) best practice papers that can be found on the Oracle Technology Network.
The complexity of moving data between disparate platforms has been a barrier to exploiting data
assets in this way. Different platforms have incompatible volume (virtual storage device) metadata
formats, file system metadata formats and data formats in application files.
To transfer data between platforms, IT departments have had to choose between network (FTP) copies and copying data to tape on the source platform with restoration from tape on the target platform.
Both options consume significant time and resources. As a result, many IT departments continue
to run applications on less-than-optimal platforms as migration to more suitable environments is
believed to be too resource-intensive. Others forego the business benefits of off-host backup, data
mining, and testing with live data because they believe that copying large data sets will result in
unacceptable application downtime.
In effect, each data set becomes captive to the server platform that processes it. A great deal of enterprise data is stored in relational databases, so an ability to move databases between disparate platforms would be especially beneficial. Consider, for example, copying an Oracle database on a Solaris platform to a Linux platform. The Linux platform cannot interpret Solaris volume metadata. The two platforms' file system formats are also incompatible. Finally, the platforms' endian formats (the way in which multi-byte data items are interpreted) differ, so Oracle instances on the two platforms cannot interpret each other's data formats.
To migrate a database from a Solaris host to a Linux one, an administrator would have to:
• Stop application processing on the Solaris platform and shut down the database (so that a
business-consistent database image is available).
• Export data from the Oracle database into disk or tape files.
• Copy the exported data from Solaris to Linux, either by FTP or by tape exchange.
• Create an empty “target” database on the Linux platform.
• Import the exported data into the receiver database on the Linux platform.
Creating a business-consistent (unchanging) copy of a large database can mean hours of
application idle time. It’s not surprising that IT departments are reluctant to adopt operational
procedures that include moving databases between disparate platforms, despite potential business
benefits.
The migration process involves several challenges which need to be addressed effectively; a few of these are:
• Cultural Training.
• Technical Training.
• Application Migration.
• Database Migration.
• Infrastructure Requirements.
o High Availability.
o Disaster recovery.
o Backup & Recovery.
o Monitoring.
Fortunately, Linux is similar enough to the other flavors of UNIX to allow for experienced System
Administrators to make the move to Linux with minimal disruption. There are numerous sources of
training (including Dell) available to assist with the transition.
There are a few products available in the market to help migrate both your database and application to the Oracle platform: the Oracle Migration Workbench (Migration Workbench), the Application Migration Assistant, and the Oracle Platform Migration Assistant.
Migration Workbench
The Migration Workbench is an Oracle tool that simplifies the process of migrating third-party database systems to the Oracle platform (Oracle10g and Oracle11g Database). The Migration Workbench migrates your entire Microsoft SQL Server database schema, including triggers and stored procedures, in an integrated environment.
As an example, the migration of an Oracle E-Business Suite database tier to Linux can be achieved using the process discussed below in the database migration section. If the endian formats are the same, additional information can be found in the relevant Oracle MetaLink documents.
Java/JSP/Servlet Applications
Migration of Java-based server applications to Linux is a straightforward process. The JDK required to run Java on Linux is readily available, and needs to be downloaded and installed on the target Linux application servers. The platform-independent nature of compiled Java class files allows them to be compiled on one platform and deployed to another, so the migration of Java application code across platforms will not generally have a large impact.
In situations where Java-based Web applications are moving from non-Oracle systems to Oracle on Linux, connectivity to the Oracle database needs to be considered. Oracle supplies a set of JDBC drivers and documentation for the development of Java-based Oracle applications.
IIS/ASP Applications
ASP applications can be migrated to Java with the Oracle Migration Kit for ASP. This free utility migrates proprietary ASP applications to industry-standard Java code that is compatible with the Oracle Application Server and can be run on any platform with JDK support, such as Linux.
Having the right tool to assist with testing, tuning, and optimizing code early in development is essential to delivering high-quality, high-performing Java application code to production. Quest's Java performance tuning solution, JProbe, will not only help development stay on schedule, remain on budget, and satisfy all project requirements, but will also promote best practices for code quality.
By default, features and functionality provided by the Oracle database are platform generic, so there is no loss of functionality when migrating platforms. Data can be migrated using any of the following options:
Starting with Oracle Database 10g Release 2, you can use Transportable Database (TDB). Note that if the endian format differs between the source and target platforms, you cannot use Transportable Database; instead, use a different method, such as cross-platform Transportable Tablespaces. If migrating to a platform with the same endian format as the source platform (e.g., moving from one little-endian platform to another little-endian platform), the recommended migration method is Transportable Database, which can simplify the platform migration process.
To check whether a tablespace can be transported and whether it is self-contained, run:
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('<ts_name_1>,...,<ts_name_n>', TRUE);
This populates a table called TRANSPORT_SET_VIOLATIONS, owned by the user SYS. To query the table, either prefix the table name with 'sys.' or create a synonym. The contents of TRANSPORT_SET_VIOLATIONS are retained only for the session; connecting as another user or disconnecting from the SQL*Plus session loses the contents.
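As a minimal sketch, assuming two hypothetical tablespaces named USERS and USER_DATA, the check and the follow-up query would look like this:

EXEC DBMS_TTS.TRANSPORT_SET_CHECK('USERS,USER_DATA', TRUE);
SELECT * FROM sys.transport_set_violations;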
Remove all violations. Some of the violations may include materialized views. These should
be dropped and recreated manually after the tablespaces are transported.
RMAN is used in combination with transportable tablespaces for migrating to a target server.
If converting the datafiles at the source and then transferring the files via FTP to the target, the Oracle datafiles must be converted using RMAN prior to transporting them to the new target Dell platform. The converted datafile is staged in the /tmp/stage directory until it is copied to the target Dell server.
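A minimal sketch of the source-side conversion, assuming a hypothetical tablespace named USERS (the exact platform name should be taken from V$TRANSPORTABLE_PLATFORM on the target):

RMAN> CONVERT TABLESPACE users
      TO PLATFORM 'Linux x86 64-bit'
      FORMAT '/tmp/stage/%U';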
If converting using RMAN at the target using NFS-mounted drives, then:

run {
  allocate channel d1 device type disk;
  allocate channel d2 device type disk;
  allocate channel d3 device type disk;
  convert datafile
    '/database/db101/V101/datafile'
    from platform 'HP Tru64 UNIX'
    db_file_name_convert
    '/database/db101/V101/datafile', '+DATA_DG';
}
The biggest advantage of a design based on NFS is that it eliminates the need for temporary space on the target side equal to the amount of data being moved. If an FTP approach were taken, the terabytes of data would first have to be moved via FTP to an equally large holding area on the target, and then endian-converted into the permanent location. RMAN cannot do endian conversion "in place." Most environments simply do not have spare terabytes ready to use as temporary space, so FTP to temporary space is often not feasible. The other advantage is that NFS makes the process a single step: if a mechanism such as FTP were used, moving the files would be one step and the conversion a second.
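As a sketch, assuming hypothetical host names and paths (the NFS mount options required for Oracle datafiles are platform-specific and should be confirmed against Oracle's documentation), the source filesystem might be mounted on the Linux target as follows:

# mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600 \
    sourcehost:/database/db101 /mnt/source_db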
For additional details, see the MAA best practice white paper Oracle Database 10g Release 2 Best
Practices: Platform Migration using Transportable Database.
Oracle Streams enables the propagation and management of data, transactions, and events in a data stream, either within a database or from one database to another. The source and target databases can be on different operating systems, but to enable this type of data migration the Oracle versions must be 10.2.0.4 or later.
Oracle Streams consists of three components: capture, propagation, and apply. Each of these components is made up of several operating system processes, as shown in the figure below.
Ensure that all Streams databases are running Oracle Database 10g Release 2 (release 10.2.0.4 is recommended) and apply any required critical patch sets.
[Figure: Oracle Streams architecture. At the source database, Reader, Preparer, and Builder processes feed the Capture process, which turns redo data (not yet grouped into transactions) into Logical Change Records (LCRs) staged in the Streams pool; capture can run at the source, a downstream database, or the target. Propagation moves LCRs to the target database, where Reader, Coordinator, and Applier processes apply committed transactions, grouped and sorted in dependency order, with conflict detection, error handling, and custom code hooks.]
This approach to database migration using Streams is very complex: it has extensive environment setup challenges, a learning curve to understand replication/AQ/Streams functionality, and, most importantly, it will need extensive manual intervention.
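For reference, much of a schema-level Streams configuration can be driven through a single, heavily parameterized procedure call. The following is a hedged sketch only, with hypothetical names and an abbreviated parameter list; consult the Streams documentation before use:

BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'APP',      -- hypothetical schema
    source_directory_object      => 'SRC_DIR',  -- hypothetical directory objects
    destination_directory_object => 'DEST_DIR',
    source_database              => 'SRCDB',    -- hypothetical database names
    destination_database         => 'TGTDB',
    include_ddl                  => TRUE);
END;
/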
If the time required for database migration using transportable tablespaces does not fall within the recommended RTO/RPO requirements, it is advisable to evaluate third-party software products.
For more information, refer to the Oracle Database Administrator's Guide and the Oracle MAA site.
Third-party tools such as GoldenGate and Quest's SharePlex software can provide faster and easier Oracle data migration with minimal downtime. The key advantage of these tools is that the source database need not be Oracle and can run on any operating system. This method is explained at a high level here and in depth in Section Four.
GoldenGate's software has modular components which move transactional data between heterogeneous databases with sub-second latency. The GoldenGate Capture process reads transaction logs to identify changed data, and only committed transactions are moved. GoldenGate Trail Files store queued data in a platform-independent format, and the GoldenGate Delivery process applies the transactions to the target database using native SQL commands. In addition to moving changed data, GoldenGate can perform full initial loads to instantiate a database. The software can also be set up for bi-directional data movement.
GoldenGate Manager (not shown in the diagram) provides a command line interface with which to
perform a variety of administrative, housekeeping, and reporting activities, such as the
establishment of parameters to configure and fine-tune GoldenGate processes as well as starting,
stopping, and monitoring GoldenGate Capture and GoldenGate delivery modules.
Figure 1
For a near zero-downtime migration of Oracle databases, the process with GoldenGate is as
follows:
• The GoldenGate Capture process is initiated to capture data changes on the production system (or on a system used for remote capture). These captured changes (stored in Trail Files) are buffered until the target environment is instantiated.
• The database is then exported from production using the CONSISTENT=Y flag and then imported into the target migration server/database (see the sketch after this list). Note that there is no need to use CONSISTENT=Y if keys are defined on the tables.
• Once the instantiation occurs at the target, GoldenGate delivery applies buffered changes
from Trail Files to the target LINUX environment.
• GoldenGate configures failback replication prior to user cutover. This provides customers
with a contingency plan against unforeseen problems in the new environment.
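As a minimal sketch of the export/import step using the classic utilities (credentials, paths, and file names are hypothetical):

exp system/password FULL=Y CONSISTENT=Y FILE=/stage/mig.dmp LOG=/stage/exp_mig.log
imp system/password FULL=Y FILE=/stage/mig.dmp LOG=/stage/imp_mig.log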
The other solution is Quest's SharePlex, which is a database replication solution that supports high availability, reporting, data movement, and application integration on Oracle databases. With SharePlex, you can similarly:
• Ensure high availability and disaster recovery
• Eliminate risks associated with migrations
• Improve the performance of OLTP systems
• Optimize business intelligence applications
SharePlex employs a streaming process outside of the database instance. This ensures a very small
footprint and minimal impact to database performance and network capacity.
SharePlex also works across multiple operating environments and different versions of Oracle. It
offers 24x7 unattended monitoring of your enterprise environment for dramatically reduced
downtime.
Migration of a database platform from UNIX to Linux requires the consideration of a few additional
aspects such as:
• Performance Assessment and Testing
• Infrastructure: functional and technical aspects
• Migrating to a new release: Taking advantage of new features
• Tuning on Linux
• Real Application Clusters
In order to assess and test the performance of a database system after migration to Linux, it is important to understand and quantify the workload and performance characteristics of the existing (source) platform. If these factors are not well understood and quantifiable, it is difficult to size hardware and manage the migrated system on Linux. With a realistic understanding of what is to be evaluated, an analysis of the performance of the Linux-based solution can be undertaken.
A variety of commercial applications, in addition to Oracle's Real Application Testing (RAT), are available that can generate a database workload and record throughput for tests of concurrency.
Benchmark Factory (BF), a tool provided by Quest Software, allows us to load an Oracle database with either industry-standard benchmarks or real transactions drawn from our applications or Oracle trace files. BF allows the simulation of load from multiple end users and can ramp up the volume of active users in distinct steps. This helps us tune the system after migration is complete and understand production load profiles. In addition, Oracle tools such as AWR and ASH reports can be used to observe various internal Oracle metrics (wait events from ASH, AWR reports) and external metrics (CPU, RAM, disk, and network enqueues).
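For example, an AWR report can be generated from SQL*Plus with the standard script shipped in the Oracle home (the script prompts for the snapshot range):

SQL> @?/rdbms/admin/awrrpt.sql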
It is important to assess the infrastructure requirements of the current UNIX platform prior to
migrating to Linux by covering functional areas such as:
• High Availability
• Disaster Recovery
• Backup and Recovery
• Monitoring
Tools and applications used to accomplish these tasks on UNIX hardware need to be evaluated on the current platform. A pathway should then be determined whereby equivalent functionality can be delivered on Linux. In some cases, the existing tools supplied by ISVs (e.g., Oracle Applications, WebLogic, Siebel) on RISC-based UNIX hardware will be available on Linux and suitable for use with minimal disruption. In other cases, the move to Linux will require a different approach to achieving business requirements, such as high availability.
A platform migration to Linux requires a different model to ensure high availability. Fortunately,
Oracle products such as Oracle Database with Real Application Clusters, and Oracle Application
Server provide the functionality to failover connections, and route requests to available machines.
This functionality can allow the deployment of clusters of smaller Dell servers running Linux. This
can provide insulation against an outage should any one node fail. Hardware infrastructure is more
fully utilized in this model, as the requirement for an entire monolithic “hot standby” system is
removed.
3.1.5.2.2 Backup and Recovery
Prior to migrating a database environment, a thorough assessment of the business requirements (legal and operational), Recovery Point Objectives (RPO), and Recovery Time Objectives (RTO) should be performed. The assessment should also include the technologies to be used (native tools, third-party tools, etc.) and determine the components requiring protection (database, application software, custom code, etc.). For backups of the Oracle database, RMAN is platform independent and most third-party media managers support its API.
3.1.5.2.4 Monitoring
If Oracle Enterprise Manager is used for management and performance monitoring of Oracle products, there is no impact from the migration to Linux, as the monitoring tool has all the same functionality and is certified for use on Linux out of the box. If third-party tools are being used for systems monitoring and management, their availability on Linux will need to be determined. There are also minor differences in the standard diagnostic utilities on Linux relative to those on other RISC-based UNIX platforms.
Dell servers include OpenManage™, Dell's lifecycle systems management software, which provides seamless management of Dell servers, storage systems, network switches, desktops, and notebooks. OpenManage enables enterprises to deploy, maintain, and monitor hardware using an open framework that allows for easy integration with enterprise management software.
The figure below illustrates the process used to manage Dell servers by using a combination of
OEM and OpenManage. At every layer, this combination can provide monitoring for CPU, memory,
disk and table utilization in addition to many other environment metrics of the running system.
[Figure: patch management with Oracle Enterprise Manager and Dell OpenManage. (1) The reference catalog is retrieved from the Dell FTP site; (2) all systems requiring patches are identified and reports are generated in Enterprise Manager; (3) patches and bundles are retrieved from the Dell FTP site and staged in the repository.]
Figure 2
3.1.5.3 Migrating to a New Release: Advantage of New Features
As a database platform migration project introduces change to the existing infrastructure and some level of testing activity, those migrating often take the opportunity to upgrade to the most recent Oracle Database as part of the migration project. Combining the platform migration with a database upgrade can save considerable time and testing effort versus separate serial efforts, as regression and performance testing need only be performed once. In addition, the impact to database clients is minimized, as there is only one production cutover rather than two (a migration followed by an upgrade).
There may also be new database features that can easily be taken advantage of within the scope of the migration to provide immediate benefits for users and administrators. For example, migrating from an earlier Oracle release to Oracle11g Release 2 would allow you to take advantage, with little effort, of improvements in the areas of Advanced Compression, additional partitioning options, Real Application Testing (DB Replay), simplified snapshot standby, performance features (query result caching, Adaptive Cursor Sharing, SQL Plan Baselines, SQL Plan Management), ADDM for RAC, enhancements to RAC Cache Fusion, Active Data Guard, Automatic Workload Management, and others.
Figure 3 below illustrates three aspects of the logical architecture relevant to our dynamic data
migration methodology. The first data flow path exemplifies the process of “initial load”
instantiation of the source data into the target database. GoldenGate may be used for this initial
load; alternatively, Oracle export/import utilities or other methods of bulk data transfer can be
substituted as they may provide higher data transfer rates. The essential requirement is to
instantiate target database tables to a consistent point-in-time from which in-flight and
subsequently new transactions continue to be captured and applied by GoldenGate. The second
data path shown in this figure illustrates the transactional data management architecture used for
capturing and delivering in-flight transactions, fundamental to achieving near-zero-downtime
database migrations. Finally, the fail-back data flow path depicts keeping the new target database and the old legacy source database in sync as a contingency. GoldenGate also provides capabilities to verify data consistency between active Oracle databases, as shown at the bottom of the diagram.
GoldenGate supports log-based extraction (from Oracle redo logs and if necessary, archive logs).
For certain objects it offers fetch functionality to directly retrieve changes from the tables. For the
purposes of this evaluation, log-based extraction was used. Log-based extraction enables it to
obtain specific point-in-time transactions from the Oracle system change number (SCN).
The GoldenGate Capture component reads committed data transactions and writes these to
GoldenGate Trail Files, which are stored on disk in a proprietary universal format, allowing the
ability to accumulate continuous activity for sub-second delivery and the application of this activity
to the target database. The universal format of Trail Files enables transactional data management
across various operating systems and different database vendors.
Original source transaction ordering is preserved on the target as the transactions are captured
from the redo logs in SCN order. Only committed transactions are captured and sent to the target
database, reducing network transfers and eliminating the need for the apply process to rollback
uncommitted transactions. GoldenGate checkpointing helps to ensure that in the event of a failure
during migration, the migration process can be automatically restarted from the failure point. The
GoldenGate Capture component can access Oracle’s archive logs to maintain point-in-time data
accuracy to handle any process failures. Additionally, GoldenGate is equipped to automatically handle collisions of duplicate or missing records when applying changes on the target, as may occur during the window in which ongoing transactions are extracted and held for later replication while a bulk data transfer is used to populate the target database.
The following describes a simple, unidirectional GoldenGate setup. It does not include advanced features such as parallelism, bi-directional replication, and failback options.
Preparations:
1: Verify Initial Load Method
• There are a number of options for the initial load method, using either GoldenGate or Oracle's database utilities. We recommend using the Oracle Export/Import utilities to instantiate the target database.
We recommend creating a database user that is dedicated to the GoldenGate processes. It can be the same user for all Capture and Delivery processes. A configuration sketch follows below.
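The following is a hedged sketch of a minimal unidirectional configuration; the process names, trail name, target host, schema, and credentials are all hypothetical, and a Replicat checkpoint table would normally also be configured:

-- From GGSCI on the source, register the Capture (Extract) process and remote trail:
ADD EXTRACT EXT1, TRANLOG, BEGIN NOW
ADD RMTTRAIL ./dirdat/aa, EXTRACT EXT1

-- Extract parameter file (dirprm/ext1.prm) on the source:
EXTRACT EXT1
USERID ggadmin, PASSWORD ggadmin
RMTHOST linuxtarget, MGRPORT 7809
RMTTRAIL ./dirdat/aa
TABLE APP.*;

-- From GGSCI on the target, register the Delivery (Replicat) process:
ADD REPLICAT REP1, EXTTRAIL ./dirdat/aa

-- Replicat parameter file (dirprm/rep1.prm) on the target:
REPLICAT REP1
USERID ggadmin, PASSWORD ggadmin
ASSUMETARGETDEFS
MAP APP.*, TARGET APP.*;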
4.1.2 Quest SharePlex
The other reliable option for data migration is to use Quest's SharePlex solution. SharePlex can capture a modification to selected objects immediately, as soon as it is written to the Oracle log and even before the transaction is committed. SharePlex fully complies with the Oracle read consistency model, so that target instances are accurate representations of the source database.
[Figure: SharePlex architecture. On the source (Oracle9i), a Capture process reads the online redo logs and archive logs into a capture queue; a Reader process and an Export process move the changes through an export queue and across the Quest transport network layer to the target (Oracle11g), where an Import process and a Post process apply the SQL from a post queue.]
4.1.2.1 Capture
The SharePlex capture process gathers changes from the production database. In the capture process:
• The capture process reads from the Oracle redo logs
• The network transports only changes to replicated objects, (not the rest of the database
housekeeping information contained in the redo logs)
• Database resources are not required to capture and move the data. A small repository
does reside in Oracle to record information such as when replication started/stopped, etc.
• The capture process within SharePlex can read the online redo logs, go back through archive logs, and even prompt for archive logs that have been taken offline to secondary storage. It is this capability that adds to the fault tolerance: for example, if the capture process is terminated for some reason, it can go back in time.
The capture process resides on the source system, reading the online redo logs automatically generated by Oracle. These reads of the redo log are completed as file system reads, not through the database. By using the redo logs as the source of change information, Quest is able to replicate changes to the production database without incurring additional overhead on that instance. Oracle uses the redo logs for database recoverability; as a result, the redo logs are a reliable source of change information whose format is fairly stable. Quest has reverse engineered the format of the redo logs starting with the Oracle 6 release (that version is now retired, but SharePlex supports replication of subsequent Oracle versions starting with the 7.3.4 release), and SharePlex is able to handle the formats of the Oracle 9i, 10g, and 11g redo logs.
The capture process monitors the redo logs for changes continually. When a record appears in the
redo logs, SharePlex determines if it is for an object to be replicated. If it is, SharePlex adds
addressing information such as to which hosts the change should be sent and then puts the
change information into its queues. The queues reside outside the database. The data is
immediately processed and transported to the target systems without waiting for a commit or a
rollback, which would introduce unnecessary latency. (When a commit or rollback appears in the
redo logs, it is subsequently sent, and the transaction is appropriately completed on the target
systems.)
4.1.2.2 Transport
SharePlex uses its own network protocol combined with TCP/IP to transport data between the
source and target systems. The process confirms the receipt and the appropriate order of the
network packets, providing fault tolerance for network interruptions while ensuring data integrity
and completeness. No additional middleware is required to transport the data.
4.1.2.3 Posting
The SharePlex post process converts the change information into SQL statements. The SQL
statements are then applied to the open target instance using standard SQL*Plus connections.
Because SharePlex updates the target instances using standard SQL like any other application, concerns about its supportability from Oracle are unwarranted.
Part of the key to SharePlex's accurate replication is its ability to maintain Oracle's read consistency from source to target, not only replicating the order of the transactions but also their context.
SharePlex replicates the combination of updating transactions to the target as they occurred on the
source, so that the resulting replica is reliable for disaster recovery. To accomplish this, SharePlex
creates connections to the target database to mirror the updating connections on the source
system, so that transactions can be applied to the target instance in parallel, as they occurred on
the source.
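As a hedged illustration of how little configuration basic SharePlex replication requires (instance names, schema, and host are hypothetical, and the exact syntax should be confirmed against the SharePlex documentation), a configuration file names the datasource and routes replicated objects to the target, and is then activated from sp_ctrl:

datasource:o.PRODDB

# source object    target object    routing (target host and instance)
APP.%              APP.%            linuxtarget@o.TGTDB

sp_ctrl> activate config mig_config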
5 Summary
The proven Oracle utilities and tools such as Oracle Data Pump, export/import, and cross-platform transportable tablespaces are effective methods of migrating data across platforms with different endian formats. But in cases where a large enterprise has near-zero-downtime SLAs for its Oracle database applications and needs a very effective failback solution, it is recommended to use third-party tools such as GoldenGate or Quest software for the migration.
It is also recommended to take advantage of the migration process by implementing newer Oracle technologies and features. Key candidates include higher-availability architectures such as Oracle RAC, an upgrade to the latest Oracle software (such as Clusterware) and Linux OS version, and the implementation of ASM.
Audience:
Technical Architects, Lead Designers, Business Analysts, Lead Developers, Lead DBAs.
Contact Information:
Name ___________________________
Title ___________________________
email ___________________________
Phone ___________________________
Application:
Name of the Application (abbreviation):
What types of users access your application? (back office, front office, self service, general public)
How much customization does your application require for each new implementation?
How do you handle application and infrastructure version changes for your customers?
Does the Application support databases from multiple vendors? Please specify.
What is the primary development and deployment platform for the application server?
Does the Application support application servers from multiple vendors? Please specify.
What development tools and methodologies do you use to develop your application?
What operating systems does your application support? Please specify.
What other vendors' products does your application integrate / interface with?
Are there specific considerations within your Application to support Performance, Availability, Scalability and Serviceability?
Application:
What major changes are you planning for your application in future releases?
What new technologies are you planning to adopt? (RAC, Web Services, wireless device support, portal, etc.)
What additional Oracle technologies do you plan to adopt?
What is the Integration Strategy for the Application? (Enterprise Application Integration, B2B)
CASE tools used? If so, what specific artifacts are produced? List by application tier (database, app server, client, etc.).
Modeling languages used? (UML, etc.) If so, what specific artifacts are modeled?
Messaging System?
Workflow System?
Security Considerations
Standard Outside Authentication (LDAP, JNDI)
Data Access features being used in your current data interface layer.
Data Access method being used. (ODBC, JDBC, SQLJ, ADO, DAO, OCI, PRO*, PL/SQL, etc)
Do you use large objects (e.g. BLOB/CLOB/BFILE) or Object data types or other user-defined data types?
7 References
• Various papers on Oracle (http://otn.oracle.com)
• Various papers on Linux (http://www.redhat.com)
• Migrating from Unix to Oracle on Linux (http://www.oracle.com/linux)
• Platform Migration Using Transportable Tablespaces: Oracle Database 10g Release 2 (http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_PlatformMigrationTTS.pdf)
• Platform Migration Using Transportable Database: Oracle Database 11g and 10g Release 2 (http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_PlatformMigrationTDB.pdf)
• GoldenGate Solutions and Technology white paper (http://www.goldengate.com)
• Quest: http://www.quest.com
8 Authors
Mahesh Pakala works in the Global Infrastructure Consulting Services (GICS) group of Dell Inc., assisting large enterprise customers with high-availability architectures and solutions. He has extensive work experience in the areas of engineering, media, and technology with companies such as Oracle Corporation (System Performance Group & RDBMS Kernel Escalations), Ingres (Computer Associates), Fujitsu, and startups such as eLance and Grand Central Communications. He has been a speaker on High Availability and System Architecture at various conferences.
Contact for more Information – Mahesh_Pakala@dell.com
Reviewed By: Irem Radzik, Alok Pareek, Sunil Shenoy, Puneet Arora, Jeremy Greening, Ron Piwetz
This document is intended to address migrating a database only. Regardless of the method chosen to migrate to a new platform, there are additional areas that must be considered to ensure a successful transition, such as understanding platform-specific features and changes in the Oracle, Quest, and GoldenGate software. Refer to the platform-specific installation guides, release notes, and READMEs for details.