
OTECH MAGAZINE
#3 MAY 2014

THE COST OF TECHNOLOGY - Sten Vesterli
ABSOLUTELY TYPICAL - Patrick Barel
ENTERPRISE DEPLOYMENT OF ORACLE FUSION MIDDLEWARE PRODUCTS PART 2 - Simon Haslam
ROLLERCOASTER
FOREWORD

The clicking sounds the rail makes when hauling the carriage up. The moment of silence just before the screaming of the passengers starts. The wind in your hair during that first, breathtaking descent into the unknown.
OTech Magazine is going like a rollercoaster,
just like my personal life. In the past few
months a lot has happened.
In my personal life I got married to the most wonderful woman in the world (who makes my life the best place in the world; sorry, guys).
OTech Magazine has some new and exciting
partners as well. Thanks to Roland (who does
wonders with the graphics of the magazine)
and Rob (who makes sure the commercial part
of the magazine runs) OTech Magazine is
becoming more and more professional.
In my personal life we bought and renovated a new house. It's large and comfortable, and has enough space for expansion. OTech Magazine,
as you might have noticed by now, also had
quite the makeover. The basis has completely changed and it offers us enough room to expand in the coming months: the magazine will change and improve a bit every issue.
In my personal life I broke my foot. Besides the pain, there's the agony of the best timing in the world. But not everything in life goes the way we plan. This magazine was supposed to be released a few weeks ago, but because of the small bumps in the road that I mentioned above, we didn't make it.
But maybe it was for the better; this issue of OTech Magazine certainly turned out mighty fine.
I would like to take this opportunity to thank my beautiful wife Simone for her patience with me, Roland for this exciting new magazine look and feel, Rob for the commercial groundwork, our partners AMIS and More Than Code, our sponsors, and most of all the contributors (keep the good stuff coming) - and all our readers. Enjoy the ride!
Douwe Pieter van den Bos
douwepieter@otechmag.com
twitter.com/omebos
www.facebook.com/pages/OTech-Magazine/381818991937657
nl.linkedin.com/in/douwepietervandenbos/
CONTENT
THE COST OF TECHNOLOGY 6
Sten Vesterli
WHY AND HOW TO USE ORACLE
DATABASE REAL APPLICATION TESTING? 13
Talip Hakan Ozturk
ENTERPRISE DEPLOYMENT OF ORACLE
FUSION MIDDLEWARE PRODUCTS PART 2 26
Simon Haslam
ABSOLUTELY TYPICAL 32
Patrick Barel
STEP BY STEP INSTALL ORACLE GRID 11.2.0.3 ON SOLARIS 11.1 53
Osama Mustafa
THE RELEVANCE OF THE USER EXPERIENCE 63
Lucas Jellema
ORACLE NOSQL PART 2 89
James Anthony
UTILITY USE CASES ASM_METRICS.PL 102
Bertrand Drouvot
BUILD A RAC DATABASE FOR FREE
WITH VIRTUALBOX 109
Christopher Ostrowski
DINOSAURS IN SPACE -
MOBILIZING ORACLE FORMS
APPLICATIONS 132
Mia Urman
PROVISIONING FUSION MIDDLEWARE
USING CHEF AND PUPPET PART I 137
Ronald van Luttikhuizen & Simon Haslam
MOBILITY FOR
WEBCENTER CONTENT 144
Troy Allen
ANALYTIC
WAREHOUSE PICKING 150
Kim Berg Hansen
WHAT DOES ADAPTIVE IN
ORACLE ACM MEAN? 163
Lonneke Dikmans
ORACLE ACCESS MANAGER:
CLUSTERS, CONNECTION RESILIENCE
AND COHERENCE 174
Robert Honeyman
QUMU & WEBCENTER -
BRINGING VIDEO TO THE ENTERPRISE 182
Jon Chartrand
ORACLE DATA GUARD 12C:
NEW FEATURES 189
Mahir M Quluzade
INTRODUCTION TO ORACLE
TECHNOLOGY LICENSE AUDITING 204
Peter Lorenzen
THE COST OF TECHNOLOGY
Sten Vesterli
www.vesterli.com
twitter.com/stenvesterli
www.facebook.com/
it.more.than.code
dk.linkedin.com/in/stenvesterli
In the United Kingdom, the National Health Service (NHS) has just
given Microsoft more than 5 million pounds (equivalent to 9 million
U.S. dollars). This is money that could have paid for 7,000 eye operations
or 700 heart bypass operations, but now it goes to Microsoft to pay for
extended support for Windows XP. The reason for this cost is that the
NHS is a technological laggard, still running thousands of obsolete
Windows XP installations.
The Technology Adoption Lifecycle
The adoption of new technology follows a classical path called the technology adoption lifecycle. This is a widely applicable model that shows how new practices spread through a population; it was originally based on studies of new farm practices in the U.S. in the 1950s.
This model divides the population of individuals or organizations into five
groups:
Innovators
Early adopters
Early majority
Late majority
Laggards
The distribution of these types follows a normal distribution
(bell curve) as shown below.
The innovators are the risk-takers. They are financially strong, in order to be able to make the necessary investments in unproven technology and survive any bets that do not pay off.
The early adopters are open to innovation, focusing on using existing technology to achieve improved business outcomes. They are open to redesigning their processes and interactions when new technology makes it attractive.

The early majority is more careful and prefers smaller, incremental improvements to existing processes. They like to see others lead the way before they invest in technology or process improvements.
The late majority is conservative and will only implement new technology when it is cheaper than the status quo. Sometimes, financially weak organizations end up in this category even if they have the mindset of one of the earlier groups.

The laggards are extremely risk-averse and typically not well-informed about technological trends. They will stick with tried-and-true solutions even when everybody else has moved on.

The cost of being a laggard
Laggards incur a serious cost. They are stuck with unsupported software versions from which there is no upgrade path, and the necessary skills are rare and expensive. Do you know what a COBOL programmer costs these days?

Often, the main cost is financial, but sometimes laggards place the entire organization at risk.

Early in my career, I worked at an organization that was still running its business on a very old mainframe computer. It had a disk drive the size of a washing machine, with 14-inch removable stacks of magnetic disks to store information. Occasionally, some component would fail, which would be evident from actual smoke and flames in the server room. We were paying a very expensive yearly support fee to the vendor, and the vendor actually kept a stock of spare parts specifically for us - parts they had collected as others had retired these ancient systems. Shortly before I left, the second-to-last disk controller in the world for these systems went up in flames, leaving us running a system for which the last spare part had run out.

Considered as a working museum, it was interesting. Considered as a professional IT organization, it was indefensibly irresponsible.

The cost of being an innovator
Innovators spend a lot of money on their technology, betting that they will recoup their investments later. They are risk-takers, and sometimes their bets do not pay off: during the dot-com bubble of the late 1990s, companies like Webvan spent literally billions but ended up going out of business.

Many years ago, when Microsoft had just released the first version of Microsoft Windows, the young Bill Gates went to the most prominent software companies of the day to try to persuade them to build a Windows version of their software. But leading word processor WordPerfect and leading spreadsheet Lotus 1-2-3 were happy with their dominance of the character-based world and rejected Bill Gates. So he decided to build his own applications to showcase what a modern Windows application would look like. Microsoft invested heavily and built Word and Excel, which have come to completely dominate the market for office software and have repaid the investment many, many times.

Companies like Amazon and Tesla have yet to show a significant profit, but have stratospheric stock valuations. Why? Because investors appreciate that they are innovators and have a chance of reaping outsize rewards from dominating new markets.

Cost and Benefit
Every organization tries to achieve the best balance between cost and benefit based on its understanding of the world. The cost is fairly straightforward to calculate, but the benefit is relative: it depends on what your competitors are doing, and the benefit suffers from attrition - it automatically becomes smaller over time as your competitors catch up.

This attrition explains why the laggards end up with the toxic combination of low benefit and high cost. Their application might have provided a significant benefit when it was created 15 years ago, but now it offers no competitive advantage and is very expensive to maintain.

The late majority does not get much benefit because of benefit attrition, but at least they are using well-known software where skills and support resources are available and can easily be purchased offshore.

The early majority is getting a middling benefit from their systems at a cost comparable to the late majority.

The early adopters are getting a significant business benefit from their systems because they continually renew them. Because they keep stepping forward, they do not drift to the left as the majority does. Their cost tends to be slightly higher than the majority's because they pick up new tools and technologies early, before all the kinks have been worked out.

Finally, the innovators are furthest to the right on the benefit scale, but they are also incurring a high cost because they tend to use cutting-edge technology in radically new ways.

Improving yourself
You should make this kind of cost/benefit analysis continually for all of your major IT systems. The important part is not to classify yourself into one of the five groups, but to follow where your systems are moving over time. To track this, you need to gather some data on both costs and benefits.
Cost is easiest to calculate. You can read your software, hardware and support costs directly from your financial system, and you can allocate your total personnel cost to the various systems you are running. Because many people will be supporting different systems, you need to allocate their cost across the systems they support. A simple ratio (25% of this person's time on that system) is enough. If you already have more detailed time tracking implemented, you can also use it for additional precision.

The benefit is harder to calculate. Some organizations can calculate a direct financial benefit from a system (profit from sales on the web site), but most will have to use other metrics. If your organization is already measuring Key Performance Indicators (KPIs), you can use these as a starting point. There will be other non-financial benefits that you need to figure out a way to measure consistently (customer satisfaction, churn, service request resolution time, etc.).

Plot this measurement for each major system at regular intervals, for example quarterly. This will make you aware of benefit attrition as it happens, and will prevent you from ending up with low benefits and high costs.
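One way to make this tracking concrete is a small scorecard table you refresh each quarter; a minimal sketch (the table and column names here are hypothetical, not from the article):

CREATE TABLE system_scorecard
( system_name VARCHAR2(100)
, quarter     VARCHAR2(6)    -- e.g. '2014Q2'
, total_cost  NUMBER         -- licences, support and allocated personnel cost
, benefit_kpi NUMBER         -- the KPI you chose for this system
);

-- Follow one system's drift over time, quarter by quarter
SELECT quarter, total_cost, benefit_kpi
FROM   system_scorecard
WHERE  system_name = 'HR self-service'
ORDER  BY quarter;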
Sten Vesterli
www.vesterli.com
More Than Code
Copenhagen, Denmark
+45 26 81 86 87
info@more-than-code.com
@stenvesterli
www.facebook.com/it.more.than.code
OTECH PARTNER:
Success in IT depends on three things:
Good people, good processes, and good
technology. At More Than Code, we work
with all three, in any combination.
We help IT organizations build applications their users really love, we help you choose the right technology, we help you operate at maximum efficiency, and we help individual IT developers become as happy and productive as possible. And if you have chosen Oracle ADF as your technology, we have one of the world's leading experts to help you build amazing ADF applications as fast as possible.
Client Case:
The users did not use the functionality of the new HR system, relying instead on their own shadow systems based on spreadsheets or paper. After interviewing the users, we determined that the new system was too complex for casual users. We suggested and designed a simple, user-friendly front-end to the system, focusing only on the tasks relevant for these users, and achieved almost complete data coverage in the central HR system and the elimination of the shadow systems.
WHY AND HOW
TO USE ORACLE
DATABASE REAL
APPLICATION
TESTING?
Talip Hakan Ozturk
www.bankasya.com.tr
www.twitter.com/taliphaknozturk
www.facebook.com/thozturk
www.linkedin.com/in/taliphakanozturk
The importance of competition is increasing day by day. Today, enterprises have to offer service quality to their customers to stay at the forefront. Improvements and investments in IT infrastructure are the foundation of service quality, and the databases at the center of the IT infrastructure have an important place in the quality of services. Any change made to our databases is reflected directly to our customers, so it must be considered well before it is made, and the necessary testing process must be followed.
Oracle Database Real Application Testing option addresses this testing process. It offers a cost-effective and easy-to-use change assurance solution that enables you to perform real-world testing of your production database. With this comprehensive solution, we can make a change in a test environment, measure the performance degradation or improvement, take any corrective action, and then introduce the change to production systems. Real Application Testing offers two complementary solutions: SQL Performance Analyzer (SPA) and Database Replay.

Figure 1: Lifecycle of change management

A. SQL Performance Analyzer (SPA)
System changes that affect the execution plans of SQL statements - such as upgrading the database, changing a parameter, adding a new index, database consolidation testing, configuration changes to the OS, or adding/removing hardware - can significantly impact SQL statement performance. These system changes may improve or regress SQL/application performance. For example, we may encounter a surprise after upgrading the production database: a single SQL statement is enough to disrupt the functioning of our database. In such situations, database administrators spend enormous amounts of time identifying and fixing regressed SQL statements.

We can predict SQL execution performance problems and prevent service outages using SQL Performance Analyzer. Using it, we can compare the performance of SQL statements by running them serially before and after the changes. SPA is integrated with the SQL Tuning Set, SQL Tuning Advisor, and SQL Plan Management components. You can capture SQL statements into a SQL Tuning Set, compare the performance of the SQL statements before and after the change by executing SPA on the SQL Tuning Set, and finally tune any regressed SQL statements using the SQL Tuning Advisor component.

There are 5 steps to evaluate system changes:

1. Capturing the SQL statement workload. There are several methods to capture the SQL workload, such as AWR and the cursor cache. Captured SQL statements are loaded into a SQL Tuning Set. A SQL Tuning Set (STS) is a database object that contains many SQL statements together with their execution statistics and context. The more SQL statements the workload contains, the better it simulates the state of the application, so we must capture as many SQL statements as possible. It is possible to move an STS from the production system to a test system using the export/import method. You should install and configure the test database environment (platform, hardware, data, etc.) to match the database environment of the production system as closely as possible.

2. Creating a pre-change SQL trial. It is possible to generate the performance data needed for a SQL trial with SQL Performance Analyzer using the following methods:

Test Execute - This method executes the SQL statements through SQL Performance Analyzer. This can be done on the database running SQL Performance Analyzer or on a remote database using a database link.

Explain Plan Only - This method generates execution plans only for the SQL statements through SQL Performance Analyzer. This can be done on the database running SQL Performance Analyzer or on a remote database using a database link.

Convert SQL tuning set - This method converts the execution statistics and plans already stored in a SQL tuning set.

3. Making the system change (upgrading the database, changing a parameter, adding a new index, database consolidation testing, configuration changes to the OS, adding/removing hardware, etc.).

4. Creating a post-change SQL trial. It is recommended that you create the post-change SQL trial using the same method as the pre-change SQL trial. After this step, a new SQL trial is created, storing the new execution statistics and plans.

5. Comparing the SQL statement performance. SQL Performance Analyzer compares the pre-change and post-change SQL trials using metrics like CPU time, user I/O time, buffer gets, physical I/O, optimizer cost, and I/O interconnect bytes. It produces a report identifying any changes in execution plans or performance metrics of the SQL statements.
Figure 2: SQL Performance Analyzer Workflow

You can use SQL Performance Analyzer through the DBMS_SQLPA API and through the Oracle Enterprise Manager interface. The DBMS_SQLPA package is a command line interface that can be used to test the impact of system changes on SQL performance. How do you use SPA through the DBMS_SQLPA API? The following step-by-step example illustrates the impact of creating a new index.

1. Create a SQL Tuning Set (STS) on the production database:

BEGIN
  DBMS_SQLTUNE.create_sqlset (sqlset_name  => 'STS_TALIP',
                              sqlset_owner => 'TALIP');
END;
/

Load SQL statements into the created STS from the cursor cache as below:

DECLARE
  sqlset_cur DBMS_SQLTUNE.sqlset_cursor;
BEGIN
  OPEN sqlset_cur FOR
    SELECT VALUE (p)
      FROM table(DBMS_SQLTUNE.select_cursor_cache (NULL,
                                                   NULL,
                                                   NULL,
                                                   NULL,
                                                   NULL,
                                                   1,
                                                   NULL,
                                                   'TYPICAL')) p;
  DBMS_SQLTUNE.load_sqlset (sqlset_name     => 'STS_TALIP',
                            populate_cursor => sqlset_cur,
                            load_option     => 'MERGE',
                            update_option   => 'ACCUMULATE',
                            sqlset_owner    => 'TALIP');
END;
/
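Before moving the STS to the test system, it can be worth checking what was actually captured; a quick sanity check (a sketch using the standard DBA_SQLSET_STATEMENTS dictionary view, with the set name created above):

SQL> SELECT COUNT(*)
     FROM   dba_sqlset_statements
     WHERE  sqlset_name = 'STS_TALIP';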
2. Move the created STS to the test database.

a. To move the STS, we must create a staging table:

SQL> BEGIN
  DBMS_SQLTUNE.create_stgtab_sqlset (table_name      => 'STG_TABLE',
                                     schema_name     => 'TALIP',
                                     tablespace_name => 'TALIP_TS');
END;
/

b. Pack the STS into the staging table:

SQL> BEGIN
  DBMS_SQLTUNE.pack_stgtab_sqlset (sqlset_name          => 'STS_TALIP',
                                   sqlset_owner         => 'TALIP',
                                   staging_table_name   => 'STG_TABLE',
                                   staging_schema_owner => 'TALIP');
END;
/

c. Export the staging table and copy the export file to the test system:

# expdp talip@dbtalip directory=export dumpfile=stg_table.dmp logfile=stg_table.log tables=talip.stg_table

d. Unpack the staging table into an STS on the test system:

BEGIN
  DBMS_SQLTUNE.unpack_stgtab_sqlset (sqlset_name          => 'STS_TALIP',
                                     sqlset_owner         => 'TALIP',
                                     replace              => TRUE,
                                     staging_table_name   => 'STG_TABLE',
                                     staging_schema_owner => 'TALIP');
END;
/

Now the SQL Tuning Set (STS) is ready for analysis.

3. Create an analysis task by calling the create_analysis_task procedure of the DBMS_SQLPA package. This procedure creates an advisor task and sets its corresponding parameters according to the user-provided input arguments:

BEGIN
  dbms_sqlpa.create_analysis_task
  (
    sqlset_name => 'STS_TALIP',
    task_name   => 'talip_spa_task',
    description => 'index_creation_test'
  );
END;
/

You can verify the analysis task as follows:

SQL> select task_name, advisor_name, created, status
     from dba_advisor_tasks
     where task_name = 'talip_spa_task';
4. When the analysis task is successfully created, it is in an initial state. Now it is time to build the SQL performance data before making the change:

BEGIN
  dbms_sqlpa.execute_analysis_task
  (
    task_name      => 'talip_spa_task',
    execution_type => 'explain plan',
    execution_name => 'first_trial'
  );
END;
/

Invoking the execute_analysis_task procedure with the execution_type argument set to 'explain plan' makes the analyzer produce execution plans only. If we invoke it with execution_type set to 'test execute', SQL Performance Analyzer executes all SQL statements in the SQL tuning set in order to generate their execution statistics as well as their execution plans.

You can check the task execution status as follows:

SQL> select task_name, execution_name, execution_start, execution_end, status
     from dba_advisor_executions
     where task_name = 'talip_spa_task'
     order by execution_end;

5. Now we can create the new index on the test database and call the execute_analysis_task procedure again with the same arguments:

BEGIN
  dbms_sqlpa.execute_analysis_task
  (
    task_name      => 'talip_spa_task',
    execution_type => 'explain plan',
    execution_name => 'second_trial'
  );
END;
/

6. We can compare the results of the first and second trials using the same procedure:

BEGIN
  dbms_sqlpa.execute_analysis_task
  (
    task_name        => 'talip_spa_task',
    execution_type   => 'compare performance',
    execution_name   => 'analyze_result',
    execution_params => dbms_advisor.arglist(
                          'execution_name1', 'first_trial',
                          'execution_name2', 'second_trial',
                          'comparison_metric', 'optimizer_cost')
  );
END;
/

7. When the analysis task execution is completed, the comparison results can be generated in HTML/TEXT format by calling the report_analysis_task function as follows:

SQL> set heading off long 1000000000 longchunksize 10000 echo off;
set linesize 1000 trimspool on;
spool report.html
select xmltype(dbms_sqlpa.report_analysis_task('talip_spa_task', 'html', top_sql => 500)).getclobval(0,0)
from dual;
spool off

You can see the comparison report summary generated in HTML format in figures 3 and 4. In these reports there are SQL statements that improved and SQL statements that regressed after the change. You must analyze the changed execution plans of the regressed SQL statements.

Figure 3: The comparison report summary generated in HTML format.

Figure 4: Regressed SQL statements in the comparison report.

It is also possible to see the changed new plans using the dba_advisor_sqlplans view.

B. Database Replay
The Database Replay solution enables real-world testing of production system changes such as database upgrades, patches, configuration changes (single instance to RAC, or back again), data storage changes (ASM failgroups, storage SRDF configuration, etc.), file system changes (OCFS2 to ASM), operating system changes (Windows, Linux, Solaris), and database consolidation testing projects using Oracle 12c Multitenant Databases, etc. It captures the whole production database workload, including all concurrency, dependencies and timing. After capturing the real-world production workload, it replays that workload on the test database.

There are 4 steps to evaluate system changes:

1. Capturing the production workload. After the capture process starts, all client activities, including SQL queries, DML statements, DDL statements, PL/SQL blocks and Remote Procedure Calls, are stored in binary files with names like wcr_rec*, wcr_capture.wmd and wcr_*.rec. These binary files contain all information about client requests, such as SQL statements, bind variables, etc. Captured files are stored in an Oracle directory that we create in the first step with the CREATE DIRECTORY statement.

2. Preprocessing the workload. After capturing the production workload, you must copy the captured files to the test system. In this step, the captured data stored in the binary files is prepared for the replay process; this creates the metadata needed for replaying the workload.

3. Replaying the workload. The data in the production database and the test database must be the same, so you must restore the backup taken before the capture process to the test database. You must also make the necessary system change on the test system. A client program called WRC (Workload Replay Client) replays the preprocessed capture files on the test database. Using the WRC tool in calibration mode, you can determine the number of WRC clients needed for the replay process.

4. Analyzing the result. You can perform detailed analysis of the workload during both the capture and replay processes. You can also take AWR (Automatic Workload Repository) reports to compare performance statistics between capture and replay. I think the AWR report is the best method for detailed analysis.

Figure 5: Database Replay Workflow
Database Replay can be used via both the command line interface and Oracle Enterprise Manager. Note that Oracle Database releases 10.2.0.4 and above support Enterprise Manager functionality for capture/replay of workloads. The replay process can only be performed on Oracle Database 11g and higher versions.

The following step-by-step example illustrates using Database Replay from the command line interface (CLI).

1. Before starting the capture process, it is recommended that you restart the database to ensure that ongoing and dependent transactions are allowed to complete or roll back before the capture begins. It is recommended but not required, because you cannot simply restart a business-critical production database running on a 24x7 basis. First we need to create a directory object in the database where the capture files will be stored:

# mkdir /data1/dbreplay
SQL> CREATE DIRECTORY capturedir AS '/data1/dbreplay';

2. Take an RMAN level 0 backup of the production database (to restore to the target test system later) and start the capture job.

Note that to capture pre-11g databases you must set the PRE_11G_ENABLE_CAPTURE initialization parameter to TRUE. This parameter can only be used with Oracle Database 10g Release 2 (10.2); it is not valid in subsequent releases. After upgrading the database, you must remove the parameter from the parameter file, otherwise the database will fail when it starts up.

BEGIN
  dbms_workload_capture.start_capture
  (
    name           => 'CAPTURE',
    dir            => 'CAPTUREDIR',
    default_action => 'INCLUDE'
  );
END;
/

3. You can filter out activities based on instance, program, user, module, and so on. Conversely, you can also record only specific types of activities.

SQL> exec dbms_workload_capture.ADD_FILTER (fname => 'INSTANCE_NUMBER', fattribute => 'INSTANCE_NUMBER', fvalue => 1);
SQL> exec dbms_workload_capture.ADD_FILTER (fname => 'USERNAME', fattribute => 'USER', fvalue => 'TALIP');
SQL> exec dbms_workload_capture.DELETE_FILTER (fname => 'USERNAME');
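While the capture is running (and after it finishes), you can check its status from the data dictionary; a small sketch using the DBA_WORKLOAD_CAPTURES view, the same view the example later uses to find the capture id:

SQL> SELECT id, name, status, start_time
     FROM   dba_workload_captures
     ORDER  BY start_time;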
4. Let the capture process run long enough. For example, for a core banking database, you can capture the workload in peak hours through branches and channels, during batch job execution and during end-of-day operations. After running the capture process long enough, you can stop it as below:

SQL> exec dbms_workload_capture.finish_capture;

5. Navigate to the /data1/dbreplay directory. The captured workload files are located in this directory. Copy these files to the target test system (via ftp, scp, etc.). Before the capture process, an AWR snapshot is taken by the database. After the capture process, another is taken and exported to the /data1/dbreplay directory automatically.

You can also export the AWR data of the capture process when needed. Select the capture_id using the DBA_WORKLOAD_CAPTURES view and export the necessary AWR snapshots as below (27 is the capture id):

SQL> exec dbms_workload_capture.export_awr(27);

6. Restore the backup of the production database, taken before the capture process, on the target test system. You must recover it up to the minute the capture started. You can use the RMAN set until time clause for this, as below:

RMAN> run {
  set until time '2014-02-19 10:08:46';
  restore database;
  recover database;
}

Now the test database is the same as the production database was when the workload capture started.

7. First we need to create a directory object in the test database pointing to where the copied capture files are stored:

SQL> CREATE DIRECTORY replaydir AS '/data1/dbreplay';

8. We need to preprocess the captured files on the test system. This creates the metadata necessary for replay. Preprocessing is required only once per capture; after preprocessing the captured files, you can replay them many times.

BEGIN
  dbms_workload_replay.process_capture
  (
    capture_dir => 'REPLAYDIR'
  );
END;
/

9. The test database is now ready for the replay process. Initialize the replay as below (pointing at the directory object created on the test system):

BEGIN
  dbms_workload_replay.initialize_replay
  (
    replay_name => 'REPLAY',
    replay_dir  => 'REPLAYDIR'
  );
END;
/

10. You can specify the following replay parameters:

synchronization: whether or not commit order is preserved.
connect_time_scale: scales the time elapsed between the start of the replay and the start of each session.
think_time_scale: scales the time elapsed between two successive user calls from the same session.
think_time_auto_correct: auto-corrects the think time between calls when user calls take longer during the replay than during the capture.

BEGIN
  dbms_workload_replay.prepare_replay
  (
    synchronization         => TRUE,
    connect_time_scale      => 100,
    think_time_scale        => 100,
    think_time_auto_correct => FALSE
  );
END;
/

We can also set the scale_up_multiplier parameter, which defines the number of times the workload is scaled up during replay. Each captured session will be replayed concurrently as many times as specified by this parameter. However, only one session in each set of identical replay sessions will execute both queries and updates; the rest of the sessions will only execute queries. For example:

BEGIN
  dbms_workload_replay.prepare_replay
  (
    scale_up_multiplier => 10
  );
END;
/

11. Start a replay client from the command line, using the wrc command:

# $ORACLE_HOME/bin/wrc userid=system password=***** replaydir=/data1/dbreplay

It gives a message like the one below:

Wait for the replay to start (10:11:26)

Note that the number of wrc clients that need to be started depends on the captured workload. To find the number of wrc clients needed, execute the wrc utility in calibrate mode as below:

# $ORACLE_HOME/bin/wrc userid=system password=***** mode=calibrate replaydir=/data1/dbreplay

12. Start the replay process:

SQL> exec dbms_workload_replay.start_replay;

When the replay process starts, the wrc replay client displays a message like the one below:

Replay started (10:12:09)

When the replay process finishes, the wrc replay client displays a message like the one below:

Replay finished (04:53:03)

During the replay process we can pause, resume or cancel the replay:

SQL> exec DBMS_WORKLOAD_REPLAY.PAUSE_REPLAY();
SQL> exec DBMS_WORKLOAD_REPLAY.RESUME_REPLAY();
SQL> exec DBMS_WORKLOAD_REPLAY.CANCEL_REPLAY();

13. The last step is analyzing and reporting. We can get the replay report as below:

SQL> SET SERVEROUTPUT ON TRIMSPOOL ON LONG 500000 LINESIZE 200
VAR v_rep_rpt CLOB;
DECLARE
  l_cap_id NUMBER;
  l_rep_id NUMBER;
BEGIN
  l_cap_id := dbms_workload_replay.get_replay_info (dir => 'REPLAYDIR');
  SELECT MAX (id)
    INTO l_rep_id
    FROM dba_workload_replays
   WHERE capture_id = l_cap_id;
  :v_rep_rpt := dbms_workload_replay.report (replay_id => l_rep_id,
                                             format    => dbms_workload_capture.type_html);
END;
/
PRINT :v_rep_rpt

You can also get reports in Oracle Enterprise Manager even if you used the CLI during the replay process.

Figure 6: Database Replay comparison reports via Oracle Enterprise Manager.
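If you replay the same capture several times, a quick way to line up the attempts before diving into the HTML reports is the DBA_WORKLOAD_REPLAYS view used in step 13; a minimal sketch (id and capture_id are the columns the step 13 query uses; start_time and end_time are assumed here):

SQL> SELECT id, capture_id, start_time, end_time
     FROM   dba_workload_replays
     ORDER  BY start_time;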
Please check the MetaLink document 463263.1 for common Database Capture and Replay errors and their causes. That is all :-) Test it and enjoy it.

Summary:
Changes made to our systems are reflected directly to our customers, so they must be considered well before they are made. Using the Oracle Database Real Application Testing option, you can easily manage system changes with confidence. Real Application Testing helps you lower change risk, reduce system outages, and improve quality of service. You can adopt new technologies with peace of mind.
ENTERPRISE
DEPLOYMENT OF
ORACLE FUSION
MIDDLEWARE
PRODUCTS
PART 2
Simon Haslam
www.veriton.com
Consultant at Veriton Ltd
Technical Director, O-box Products Ltd
twitter.com/simon_haslam
www.facebook.com/thozturk
uk.linkedin.com/in/simonhaslam
Welcome to the second in a series of articles about building production-grade Fusion Middleware platforms, focussing on the Enterprise Deployment Guides (EDGs). Hopefully you have already read Part 1 in the last issue of OTech Magazine, where I introduced the EDGs and why you might want to use them.

So, to catch up from where we left off: an EDG offers a recipe for building a secure and highly available system using one of the layered, or "upper stack", product sets, such as SOA Suite or Identity Management. As an Oracle-supplied blueprint it offers a number of well thought out practices, though, in my experience, you rarely implement a 100% EDG-compliant system, for reasons which will hopefully become apparent.
There are some areas where I think you may consider deviating from an EDG; in this issue I am going to cover:

Physical versus Virtual Implementations
Licensing Considerations
Failover Approaches
Component Workloads
Lifecycle Requirements

Let's drill into each of these in a bit more depth and, to make it a little easier to follow, here is one of the EDG diagrams I have stuck on the wall in front of my desk!
Diagram 1: SOA EDG diagram of MySOACompany Topology with Oracle Service Bus
The above diagram is taken from the SOA 11g EDG. This EDG actually considers 4 product combinations: SOA (by which we primarily mean BPEL) alone, plus BAM, plus BPM, and finally SOA with OSB. Please forgive my excessive use of acronyms; hopefully, if you are working with Fusion Middleware, they will be familiar to you (though the meaning of these abbreviations isn't too important for the purposes of this article). The diagram shows the software components used, the hosts they run on, and the communication channels between both themselves and the outside world. I have zoomed in on the SOA with OSB combination as it's a very common use case and illustrates several possible EDG deviations.

Physical versus Virtual Implementations
It is now several years since I designed or built a production Fusion Middleware environment using operating systems running on bare metal; instead, most middleware administrators have to deal with virtual machines (VMs). With modern servers having tens of cores it is hard to imagine many situations, certainly for the sort of mid-sized organisations where I work, where you would use all of that compute power for a single function. The EDGs talk about "physical hosts"; where they discuss "virtual" they are usually talking about virtual hostnames, i.e. a means of abstracting the service's hostname away from the underlying host. Even though, for middleware, I think you can mostly treat physical hosts and virtual machines the same, if an EDG described both virtual hostnames and the hostnames of virtual machines it could get quite confusing!

There are two reasons, though, that introducing virtual machines alters some design decisions:

1) When you are using virtual machines you have a lot of flexibility in terms of VM sizing, plus you can have as many of them as you like (within reason). This encourages you to have one machine per function, so where the EDG suggests multiple managed servers per host, you may instead choose to have one managed server per VM, as this can be beneficial for administration and tuning.

2) Virtual machines give you a degree of location neutrality, above the physical hardware. This may negate the need to use networking abstractions, such as virtual hostnames and virtual IP addresses (VIPs), which can then simplify the configuration within the VMs. For example, if you put an Admin Server on its own VM you could save having to configure and manage a VIP for this purpose, instead leaving it up to the hypervisor to ensure that the Admin Server VM is always running somewhere. Incidentally, this is the approach that Oracle has taken for their Admin Servers in the WebLogic implementation on the Oracle Database Appliance.

So separating out software components onto different (virtual) machines, and reducing the use of virtual (network) hosts, are two areas where you may want to diverge from the EDGs' suggestions.
Licensing Considerations
Another consideration for most organisations is Oracle licensing. Products within the technology area described by a single EDG may have different prices. A good example is OSB and SOA Suite ($23,000 and $57,500 per Oracle Processor respectively): whilst SOA Suite includes OSB, it is cheaper to license the cores you need for OSB separately with the cheaper OSB-only licence. For example, at these list prices a dedicated 4-processor OSB tier would cost 4 x $23,000 = $92,000, versus 4 x $57,500 = $230,000 if the same cores were licensed as SOA Suite. If we are not running on bare metal, the options to partition your licences vary according to the underlying hardware and software, but in some cases licensing will be a good reason to decompose the products on your physical hosts or VMs and deviate from the EDG.

Failover Approaches
Failover design, to handle the loss (planned or otherwise) of a single hardware or software component, is very important for most production systems. The EDGs suggest a middleware-led approach to failover. This is usually by means of VIPs and Whole Server Migration for WebLogic, or may involve a cluster manager of some sort (e.g. Oracle Clusterware or software supplied by the hardware vendor). However, depending on your requirements, some services may need to be more highly available than others, mostly depending on whether the service is transactional and customer-facing in nature. To avoid the relative complexity of configuring failover in middleware, you could choose a hybrid approach where some services, such as JMS, are failed over by WebLogic and others, like the Admin Server, are failed over by a virtualization feature (like Oracle VM's Live Migration or VMware's HA/vMotion).

Furthermore, when you start considering Disaster Recovery (DR), this is an area that the EDGs don't cover; instead they refer you to the Fusion Middleware Disaster Recovery Guide. There are numerous DR alternatives these days, especially when using virtualization; your ultimate approach will depend on the network connectivity between sites, your RTO and RPO, and how much you want to use an Oracle-specific method as compared to something provided by the underlying infrastructure. So it's very important to consider DR from the start of your project as this will probably influence your architecture.

Component Workloads
A more subtle topic where you might want to take a different approach from the EDG concerns the relative sizes/locations of the various components. A particular example of this is the Web Services Manager Policy Manager (WSM-PM), which is given its own managed server by the SOA EDG; you might decide that this is oversized for your environment and co-locate it in managed servers alongside other products. By and large the EDGs appear to have made carefully considered decisions in this area though, so if you do choose to ignore Oracle's advice, make sure you understand the ramifications.
Lifecycle Requirements
The patching lifecycle is another factor which could influence how you decide to split out your software components. For example, do you want to patch all components at the same time? If you look at Diagram 1 you will see that SOA (BPEL) and OSB, whilst having their own managed servers, share the same domain; you might decide that the patching timeframes and frequency, as well as the availability requirements, of these services are different and so you'd like to patch them independently, and thus have them in 2 separate domains. This is a trade-off though, between flexibility and complexity; in fact Oracle SOA professionals seem undecided on this, with, as far as I can tell, a fairly even mix of both approaches used in production environments.

So, hopefully this has given you some food for thought. In the next article in this series we will cover a few more areas where you may need to diverge from the EDG approach, including security, network topology and the occasional documentation error. In the meantime, if you're not too familiar with the EDG for your chosen product set, I encourage you to dive in, spin up a few virtual machines, and try out an EDG configuration for yourself!
ABSOLUTELY TYPICAL
Patrick Barel
www.amis.nl
twitter.com/patch72
nl.linkedin.com/in/patrickbarel
This article will convince database developers that types in the Oracle Database are worth their salt - and more. With the recent improvements in 11gR2, the pieces are available to complete the puzzle of structured and modern programming with a touch of OO and, more importantly, to create a decoupled, reusable API that exposes services based on tables and views to clients that speak SQL, AQ, PL/SQL, Types, XML, JSON or RESTful interfaces, through SQL*Net, JDBC or HTTP. We will show through many examples how types and collections are defined, how they are used between SQL and PL/SQL, and how they can be converted to and from XML and JSON. Everyone doing PL/SQL programming will benefit immediately from this article.

Every database developer should be aware of types and collections: for structured programming, for optimal SQL to PL/SQL integration, and for interoperability with client applications. This article introduces types and collections, their OO capabilities, and the conversion to XML and JSON.
Introduction
Types have been available in the Oracle database since day one. Every column in a table is of a specified type. For instance, the SAL column of the EMP table is a numeric type, whereas the column ENAME is a character-based type. They are only partially interchangeable: you can put a numeric value into a character-based type, but you cannot put a character value into a numeric type. These are so-called scalar data types; they can hold exactly one value of a specified type. There are also composite types, like SDO_GEOMETRY, which holds a combination of some scalar types and some collection types. You can even add behavior to the type that will run on the values of an instance of the type.

As you can see, types are implemented the way object-oriented languages do this. This has been available since Oracle version 8.0 (which is eight-oh, not eight-zero). User Defined Types (or UDTs) can be helpful for doing structured programming in PL/SQL.

We will show you how to create a simple User Defined Type (UDT), then a type which holds references to other types, and how to add behavior to the UDT. We will also see how we can convert these types to XML and JSON files. UDTs, in their many forms, can be used for the interaction between SQL and PL/SQL, but also for interaction with the outside world, for example through Java programs. They can be used to do OO development in a PL/SQL environment.
Definition
To create a UDT which can be used in the SQL layer of the database, you create an object using the CREATE (OR REPLACE) TYPE statement. To create a UDT which can only be used in the PL/SQL layer, you create a type in a program. An example of creating an object:

CREATE OR REPLACE TYPE person_t AS OBJECT
( first_name VARCHAR2(30)
, last_name  VARCHAR2(30)
, birthdate  DATE
, gender     VARCHAR2(1)
)

It looks a lot like the creation of a database table:

CREATE TABLE t_person
( first_name VARCHAR2(30)
, last_name  VARCHAR2(30)
, birthdate  DATE
, gender     VARCHAR2(1)
)

but the data in a UDT is not persisted unless you use a database table to store it. The UDT can be used as the data type in the creation of a table, just as you would use a scalar datatype:

CREATE TABLE t_person
( person person_t
)

Using a UDT doesn't make much sense here, but if we extend the type it will make more sense.

Using a UDT in your PL/SQL code is a bit different from using a scalar data type. Instead of just declaring the variable and then using it, you should instantiate (initialize) it before you can use it. To instantiate the variable you call the constructor of the UDT. When you create a UDT, a default constructor is automatically created; it is called by sending in all the values for the properties to a function which has the same name as the UDT. After instantiating the variable, it can be used like any other. To address the different fields in the UDT you use the dot notation (<variable_name>.<field>).

DECLARE
  l_person person_t := person_t
                       ( 'John'
                       , 'Doe'
                       , to_date('12-29-1972','MM-DD-YYYY')
                       , 'M'
                       );
BEGIN
  l_person.first_name := 'Jane';
  l_person.gender     := 'F';
END;

Functions
You can use a UDT as a parameter in, for instance, a function. This way you can send in a complete set of values as a single parameter instead of as separate parameters. This can make your code not only more readable, but also a bit more self-documenting (provided you use a logical name for the UDT). Note that a UDT can be both an IN and an OUT parameter.

So, instead of creating a function like this:

CREATE OR REPLACE FUNCTION display_label ( first_name_in IN VARCHAR2
                                         , last_name_in  IN VARCHAR2
                                         , birthdate_in  IN DATE
                                         , gender_in     IN VARCHAR2) RETURN VARCHAR2
IS
BEGIN
  RETURN CASE gender_in
           WHEN 'M' THEN 'Mr.'
           WHEN 'F' THEN 'Mrs.'
         END ||
         ' ' || first_name_in ||
         ' ' || last_name_in ||
         ' (' || EXTRACT (YEAR FROM birthdate_in) || ')';
END;

we can create a function like this:

CREATE OR REPLACE FUNCTION display_label (person_in IN person_t) RETURN VARCHAR2
IS
BEGIN
  RETURN CASE person_in.gender
           WHEN 'M' THEN 'Mr.'
           WHEN 'F' THEN 'Mrs.'
         END ||
         ' ' || person_in.first_name ||
         ' ' || person_in.last_name ||
         ' (' || EXTRACT (YEAR FROM person_in.birthdate) || ')';
END;

We can call this function from SQL, like this:

SELECT display_label(person_t
                     ( 'John'
                     , 'Doe'
                     , to_date('12-29-1972','MM-DD-YYYY')
                     , 'M'
                     )
                    )
FROM dual

But it can also be used in PL/SQL, like this:

DECLARE
  l_person person_t := person_t
                       ( 'John'
                       , 'Doe'
                       , to_date('12-29-1972','MM-DD-YYYY')
                       , 'M'
                       );
BEGIN
  l_person.first_name := 'Jane';
  l_person.gender     := 'F';
  dbms_output.put_line(display_label(l_person));
END;

Complex types
Types can consist of other types. Suppose we have a UDT with the information for a social profile. It might look something like this:

CREATE OR REPLACE TYPE social_profile_t AS OBJECT
( linkedin_account VARCHAR2(100)
, twitter_account  VARCHAR2(100)
, facebook_account VARCHAR2(100)
, personal_blog    VARCHAR2(100)
)

We can now extend our person type to include this social profile as an attribute:

CREATE OR REPLACE TYPE person_t AS OBJECT
( first_name     VARCHAR2(30)
, last_name      VARCHAR2(30)
, birthdate      DATE
, gender         VARCHAR2(1)
, social_profile social_profile_t
)

The social profile is now nested inside the person type. Creating an instance of the person object gets a bit more complicated, because the social profile has to be created as an instance itself:

DECLARE
  l_person person_t := person_t
                       ( 'John'
                       , 'Doe'
                       , to_date('12-29-1972','MM-DD-YYYY')
                       , 'M'
                       , social_profile_t( 'JohnDoe'
                                         , 'JohnTweets'
                                         , 'JohnOnFacebook'
                                         , 'http://johndoe.blogspot.com'
                                         )
                       );
BEGIN
  dbms_output.put_line(display_label(l_person));
  dbms_output.put_line(l_person.social_profile.personal_blog);
END;

As you can see in the example above, the function created earlier still works. To access the values of the nested UDT you use the chained dot notation: first you point to the social profile attribute in the person variable, and then within that social profile you point to the attribute you want to access.

Collections
Besides record-type UDTs you can also create collections of instances of scalar or other, e.g. user defined, types. Collections are either sparse or dense arrays of homogeneous elements. You can think of them as tables (that's why the Associative Array used to be called a PL/SQL Table). There are three types of collections available:

Associative Array
Nested Table
VArray

These collection types are similar, though there are some differences, as you can see in this table:

Feature          Associative Array   Nested Table                         VArray
SQL or PL/SQL    PL/SQL only         SQL and PL/SQL                       SQL and PL/SQL
Dense or sparse  Sparse              Initially dense; can become sparse   Dense
Size             Unlimited           Unlimited                            Limited
Order            Unordered           Ordered                              Ordered
Usage            Any set of data     Any set of data                      Small sets of data
Use in table     No                  Yes                                  Yes

The most important difference is that the Associative Array can only be used in PL/SQL, whereas the Nested Table and the VArray can be used in both SQL and PL/SQL. Because these collections can be used in SQL, they can also be stored in tables. This goes a bit against the normalization principle, but it can make sense in some cases.

An example of this could be a list of phone numbers. You create a type phone_t like this:

CREATE OR REPLACE TYPE phone_t AS OBJECT
( phone_type VARCHAR2(30)
, phone_nr   VARCHAR2(30)
)

Then you create a nested table based on this UDT:

CREATE OR REPLACE TYPE phone_ntt AS TABLE OF phone_t
Now that all is in place, the nested table can be used as a column in a normal table:

CREATE TABLE persons
( first_name VARCHAR2(30)
, last_name  VARCHAR2(30)
, phone_nrs  phone_ntt
)
NESTED TABLE phone_nrs STORE AS phone_nrs_ntt_tab

Since we are using a nested table as a column and there is no way of telling how big it is going to become, you have to tell Oracle where to store the data. If you were using a VArray, Oracle would know upfront how big it could become at maximum. To use a VArray instead of a nested table you would use this to create the VArray:

CREATE OR REPLACE TYPE phone_vat AS VARRAY(10) OF phone_t

And this for the table:

CREATE TABLE persons
( first_name VARCHAR2(30)
, last_name  VARCHAR2(30)
, phone_nrs  phone_vat
)

Storing Nested Tables or VArrays in a table feels a bit strange, especially when you are always normalizing your schema. It does make sense, though, when you are building a data warehouse or some sort of historical data storage. You could for instance create a database table to hold both the invoice header information and all its invoice lines in a single record. In that case the invoice lines could be a Nested Table in the relational table.

INSERT INTO persons ( first_name
                    , last_name
                    , phone_nrs
                    )
VALUES ( 'John'
       , 'Doe'
       , phone_ntt
         ( phone_t
           ( 'business'
           , '555-12345'
           )
         , phone_t
           ( 'private'
           , '555-67890'
           )
         )
       )

Complex types
Types can be as complex as you want them to be. They can consist of scalars, other UDTs, Nested Tables and VArrays, which in turn (except for the scalars) can consist of everything mentioned before. Consider the person UDT we created earlier, including the social profile. One of the properties can be a list of phone numbers, so we can add the Nested Table to the UDT. The nested table of phone numbers itself consists of UDTs with the phone type and the phone number as properties.

CREATE OR REPLACE TYPE person_t AS OBJECT
( first_name     VARCHAR2(30)
, last_name      VARCHAR2(30)
, birthdate      DATE
, gender         VARCHAR2(1)
, social_profile social_profile_t
, phone_numbers  phone_ntt
)
The code gets a bit more complicated, but all the data is kept together.

DECLARE
  l_person person_t := person_t( 'John'
                               , 'Doe'
                               , to_date('12-29-1972','MM-DD-YYYY')
                               , 'M'
                               , social_profile_t( 'JohnDoe'
                                                 , 'JohnTweets'
                                                 , 'JohnOnFacebook'
                                                 , 'http://johndoe.blogspot.com'
                                                 )
                               , phone_ntt
                                 ( phone_t
                                   ( 'business'
                                   , '555-12345'
                                   )
                                 , phone_t
                                   ( 'private'
                                   , '555-67890'
                                   )
                                 )
                               );
BEGIN
  dbms_output.put_line(display_label(l_person));
  dbms_output.put_line(l_person.social_profile.personal_blog);
  dbms_output.put_line(l_person.phone_numbers(1).phone_nr);
END;

The last line displays the phone number that is stored in the first entry of the nested table.

Type hierarchy
Types can be created as children of other types. For instance, a person can be just a person, or more specifically an employee or a customer. They share some of the properties, but some properties are very specific. A customer, for instance, may have a credit limit. For an employee we want to know what job he or she is in. We could of course create entirely different types based on their different usage, but that would mean we would have to create code that does basically the same thing at least two times. The object-oriented way of approaching this is to create a UDT with the common properties and then create children of this UDT with the specific properties added. We create the UDT like we did before, but we add the keywords NOT FINAL, indicating there can be children defined under this type.

CREATE OR REPLACE TYPE person_t AS OBJECT
( first_name     VARCHAR2(30)
, last_name      VARCHAR2(30)
, birthdate      DATE
, gender         VARCHAR2(1)
, social_profile social_profile_t
, phone_numbers  phone_ntt
) NOT FINAL

Now we create the other two UDTs under this type:

CREATE OR REPLACE TYPE employee_t UNDER person_t
( id            NUMBER(10)
, name          VARCHAR2(30)
, job           VARCHAR2(30)
, department_id NUMBER(10)
, hiredate      DATE
, salary        NUMBER(10,2)
)

And:

CREATE OR REPLACE TYPE customer_t UNDER person_t
( company_name     VARCHAR2(100)
, telephone_number VARCHAR2(15)
)
39 OTech Magazine #3 May 2014
the specifed type and not one of its subtypes. A customer is a person,
but a person is not necessarily a customer.
Using the TREAT (AS type) operator you cast the instance to a specifc
subtype. This way you can access the specifc attributes that are only
available in this subtype.
CREATE OR REPLACE FUNCTION display_label (person_in IN person_t) RETURN
VARCHAR2
IS
l_label VARCHAR2(32767);
l_customer customer_t;
l_employee employee_t;
BEGIN
l_label := CASE person_in.gender
WHEN M THEN Mr.
WHEN F THEN Mrs.
END ||
|| person_in.frst_name ||
|| person_in.last_name ||
|| ( || EXTRACT ( YEAR FROM person_in.birthdate) || );
-- check what the actual type is of the parameter sent in
CASE
-- when it is a person_t and not one of the subtypes
WHEN person_in IS OF (ONLY person_t) THEN
NULL;
-- when it is actually a customer_t
WHEN person_in IS OF (customer_t) THEN
l_customer := TREAT(person_in AS customer_t);
l_label := l_label || of company ||l_customer.company_name;
-- when it is actually an employee_t
WHEN person_in IS OF (employee_t) THEN
l_employee := TREAT(person_in AS employee_t);
l_label := l_label || function: ||l_employee.job||
in department ||l_employee.department_id;
END CASE;
RETURN l_label;
END;
Member functions
Besides creating a function that takes a UDT as a parameter, we can also define the function as part of the UDT. It is very OO to combine the data and the behavior of that data. The behavior is defined in the specification of the type and implemented in the body of the type. There are two main types of member functions:
- Constructor functions
- Normal member functions
Constructor functions
When you instantiate a variable based on a UDT you call the constructor function. A default constructor function is always created for you, but you can add your own, overloaded, constructor functions to the type. The default constructor expects you to send in values for every property in the type. By creating (overloaded) constructors you can control what properties need to be set when initiating an instance. A good practice is to create a constructor without any parameters and to instantiate the variable with NULL values for all properties. But you can also create a constructor that takes just a couple of arguments and instantiates the rest of the properties with NULL values.
Member functions
We created a function that accepts a UDT as a parameter. We can also implement this function as a member function with the type. Instead of accepting a parameter with the instance of the type, the code has access to the values of the properties of this instance. The instance is referenced using the SELF keyword, so a property is referenced as SELF.<propertyname>
CREATE OR REPLACE TYPE person_t AS OBJECT
( first_name     VARCHAR2(30)
, last_name      VARCHAR2(30)
, birthdate      DATE
, gender         VARCHAR2(1)
, social_profile social_profile_t
, phone_numbers  phone_ntt
, CONSTRUCTOR FUNCTION person_t RETURN SELF AS RESULT
, CONSTRUCTOR FUNCTION person_t( first_name_in IN VARCHAR2
                               , last_name_in  IN VARCHAR2
                               , birthdate_in  IN DATE
                               , gender_in     IN VARCHAR2) RETURN SELF AS RESULT
, MEMBER FUNCTION display_label RETURN VARCHAR2
) NOT FINAL;
This UDT has both constructors and a member function defined. We still have to provide the implementation for these functions, which is done in the BODY of the UDT:
CREATE OR REPLACE TYPE BODY person_t AS
  CONSTRUCTOR FUNCTION person_t RETURN SELF AS RESULT
  IS
  BEGIN
    self.first_name     := NULL;
    self.last_name      := NULL;
    self.birthdate      := NULL;
    self.gender         := NULL;
    self.social_profile := NULL;
    self.phone_numbers  := NULL;
    RETURN;
  END;
  CONSTRUCTOR FUNCTION person_t( first_name_in IN VARCHAR2
                               , last_name_in  IN VARCHAR2
                               , birthdate_in  IN DATE
                               , gender_in     IN VARCHAR2) RETURN SELF AS
  RESULT IS
  BEGIN
    self.first_name     := first_name_in;
    self.last_name      := last_name_in;
    self.birthdate      := birthdate_in;
    self.gender         := gender_in;
    self.social_profile := NULL;
    self.phone_numbers  := NULL;
    RETURN;
  END;
  MEMBER FUNCTION display_label RETURN VARCHAR2 IS
  BEGIN
    RETURN CASE self.gender
             WHEN 'M' THEN 'Mr.'
             WHEN 'F' THEN 'Mrs.'
           END ||
           ' ' || self.first_name ||
           ' ' || self.last_name ||
           ' (' || EXTRACT(YEAR FROM self.birthdate) || ')';
  END;
END;
If this UDT is created we can use it pretty much the same way we did earlier, but now we can call the member function display_label and we do not need a stand-alone function anymore.
DECLARE
  l_person person_t := person_t( 'John'
                               , 'Doe'
                               , to_date('12-29-1972','MM-DD-YYYY')
                               , 'M'
                               );
BEGIN
  dbms_output.put_line(l_person.display_label());
END;
If we create a UDT under this base type, then the new UDT automatically inherits the member function. We can also override the behavior of the member function by OVERRIDING the member function. In this overridden function we can reference the function defined in the super type by casting the UDT to its supertype:
CREATE OR REPLACE TYPE customer_t UNDER person_t
( company_name     VARCHAR2(100)
, telephone_number VARCHAR2(15)
, OVERRIDING MEMBER FUNCTION display_label RETURN VARCHAR2
);
And then the implementation:
CREATE OR REPLACE TYPE BODY customer_t AS
  OVERRIDING MEMBER FUNCTION display_label RETURN VARCHAR2 IS
  BEGIN
    RETURN (self AS person_t).display_label() -- display label of the parent type
           || ' of company ' || self.company_name;
  END;
END;
In this example we use the display_label function as defined on the super type and add some extra info to it.
Map and Order functions
By creating Map or Order functions we can use the UDT in the order by clause of a SQL statement. You can define either a Map or Order function, not both. In the order function you define the outcome of a comparison with another instance of the UDT. The function takes the other instance as an argument and returns -1 (when this instance comes first), 1 (when this instance comes last) or 0 (when they draw).
If you can come up with a scalar value based on the properties that can be used to order the instances, then you can also create a map function,
instead of the order function. This function is more efficient, because the order function has to be called repeatedly since it only compares two objects at a time, where the map function maps the object into a scalar value which is then used in the sort algorithm.
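As an illustration of a map function, here is a minimal sketch with a small, hypothetical money_t type (not one of the types used elsewhere in this article):
CREATE OR REPLACE TYPE money_t AS OBJECT
( currency VARCHAR2(3)
, amount   NUMBER(10,2)
, MAP MEMBER FUNCTION to_scalar RETURN NUMBER
);
CREATE OR REPLACE TYPE BODY money_t AS
  MAP MEMBER FUNCTION to_scalar RETURN NUMBER IS
  BEGIN
    -- map the instance to a single scalar value; Oracle uses this value
    -- when sorting or comparing instances (assumes a single currency)
    RETURN self.amount;
  END;
END;
With this map function in place, a column or variable of type money_t can be used directly in an ORDER BY clause or compared with the relational operators.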
Bulk Processing
Besides creating UDTs that hold data for a specified object, you can also create sets of data. These sets can consist of scalars or even UDTs you created. This way you can work with the data as if it were relational tables, but actually they are in memory variables or values of the record that is stored in the database. The most important use for collections is probably the bulk processing. In traditional programming a cursor is opened, a record is fetched from it, the data is being processed and then it is on to the next record, just as long as there are records available in the cursor. Every time a record is being fetched, there are two context switches: one from the PL/SQL engine to the SQL engine and then one back. This Row-By-Row approach is also referred to as Slow-By-Slow. Using collections you can minimize the number of context switches because multiple rows are collected and then returned in a single pass. This means all the data you selected in your cursor is available right away in the program you are running. This can have a major impact on the memory usage; that is why you can limit the number of rows returned in one roundtrip. This means a little more coding, but the performance benefits are enormous.
Traditional approach:
DECLARE
  CURSOR c_emp IS
    SELECT ename
    FROM   emp;
  r_emp c_emp%ROWTYPE;
BEGIN
  OPEN c_emp;
  FETCH c_emp INTO r_emp;
  WHILE c_emp%FOUND LOOP
    dbms_output.put_line(r_emp.ename);
    FETCH c_emp INTO r_emp;
  END LOOP;
  CLOSE c_emp;
END;
Bulk processing approach:
CREATE OR REPLACE TYPE enames_ntt AS TABLE OF VARCHAR2(10);
DECLARE
  CURSOR c_emp IS
    SELECT ename
    FROM   emp;
  l_emps enames_ntt;
BEGIN
  OPEN c_emp;
  FETCH c_emp BULK COLLECT INTO l_emps;
  CLOSE c_emp;
  FOR indx IN l_emps.first .. l_emps.last LOOP
    dbms_output.put_line(l_emps(indx));
  END LOOP;
END;
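To cap the memory usage just mentioned, the fetch can be limited per round trip with the LIMIT clause; a minimal sketch (reusing the enames_ntt type and emp table from the example above):
DECLARE
  CURSOR c_emp IS
    SELECT ename
    FROM   emp;
  l_emps enames_ntt;
BEGIN
  OPEN c_emp;
  LOOP
    FETCH c_emp BULK COLLECT INTO l_emps LIMIT 100; -- at most 100 rows per round trip
    EXIT WHEN l_emps.COUNT = 0;
    FOR indx IN 1 .. l_emps.COUNT LOOP
      dbms_output.put_line(l_emps(indx));
    END LOOP;
  END LOOP;
  CLOSE c_emp;
END;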
Table Functions
Another application of the collections is the creation of table functions. These are functions that return a collection (instead of a single value) and that can be queried from a SQL statement using the TABLE() operator. Using this approach you can leverage all the possibilities of the PL/SQL engine in the SQL engine. Be advised that there are still context switches going on, so if you can solve your issue in plain SQL then that is the preferred way.
First you create a collection type in the database. Notice that you can only use VArrays and Nested Tables for this, since these are the only ones available in the SQL layer:
CREATE OR REPLACE TYPE enames_ntt AS TABLE OF VARCHAR2(10);
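The VArray variant mentioned here would look very similar; a minimal sketch (hypothetical name, not used further in this article):
CREATE OR REPLACE TYPE enames_va AS VARRAY(10) OF VARCHAR2(10);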
Then you create the code that returns the collection:
CREATE OR REPLACE FUNCTION scrambled_enames RETURN enames_ntt
IS
  CURSOR c_emp IS
    SELECT ename
    FROM   emp;
  l_returnvalue enames_ntt;
BEGIN
  OPEN c_emp;
  FETCH c_emp BULK COLLECT INTO l_returnvalue;
  CLOSE c_emp;
  FOR indx IN l_returnvalue.first .. l_returnvalue.last LOOP
    l_returnvalue(indx) := translate( l_returnvalue(indx)
                                    , 'abcdefghijklmnopqrs'
                                    , 'srqponmlkjihgfedcba'
                                    );
  END LOOP;
  RETURN l_returnvalue;
END;
Then you query this function as if it were a relational table:
SELECT *
FROM   TABLE(scrambled_enames);
XML
XML is also stored in a specific type in the Oracle Database. Even though XML is just a plain text/ASCII file, which could be stored in a varchar2 type or (if it gets too big) in a clob type, Oracle now provides us with the XMLType. This is a specialized type for handling XML. Besides storing the XML content it also provides us with a lot of functions to manipulate the XML data. There are functions to retrieve an element at a specific path in the document but also functions to extract a portion of the XML document.
There are numerous ways to construct XML in a database application. It can for instance be loaded from a file or created from an SQL statement using XML specific functions like XMLAgg, XMLElement, XMLForest and others. But it can also be instantiated based on another UDT instance.
DECLARE
  l_person person_t := person_t( 'John'
                               , 'Doe'
                               , to_date('12-29-1972','MM-DD-YYYY')
                               , 'M'
                               );
  l_xml XMLTYPE;
BEGIN
  l_xml := XMLTYPE(l_person);
  dbms_output.put_line(l_xml.getstringval());
END;
The output would be:
<PERSON_T>
<FIRST_NAME>John</FIRST_NAME>
<LAST_NAME>Doe</LAST_NAME>
<BIRTHDATE>29-12-72</BIRTHDATE>
<GENDER>M</GENDER>
</PERSON_T>
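The path-based retrieval functions mentioned above work on such a document as well; a minimal sketch (an extra line inside the block above, reusing its l_xml variable):
dbms_output.put_line(l_xml.extract('/PERSON_T/FIRST_NAME/text()').getstringval());
which would print just John.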
If you would want to convert a nested table to XML you will need a wrapper object. You cannot convert a Nested Table to XML directly. You can however convert a UDT which holds a nested table to XML.
CREATE OR REPLACE TYPE persons_ntt IS TABLE OF person_t;
Create a wrapper UDT to convert the Nested Table to XML:
CREATE OR REPLACE TYPE persons_wrap AS OBJECT
( persons persons_ntt );
DECLARE
  l_persons persons_ntt := persons_ntt( person_t( 'John'
                                                , 'Doe'
                                                , to_date('12-29-1972','MM-DD-YYYY')
                                                , 'M')
                                      , person_t( 'Jane'
                                                , 'Doe'
                                                , to_date('03-06-1976','MM-DD-YYYY')
                                                , 'F')
                                      );
  l_xml XMLTYPE;
BEGIN
  l_xml := XMLTYPE(persons_wrap(l_persons));
  dbms_output.put_line(l_xml.getstringval());
END;
The output would be:
<PERSONS_WRAP>
  <PERSONS>
    <PERSON_T>
      <FIRST_NAME>John</FIRST_NAME>
      <LAST_NAME>Doe</LAST_NAME>
      <BIRTHDATE>29-12-72</BIRTHDATE>
      <GENDER>M</GENDER>
    </PERSON_T>
    <PERSON_T>
      <FIRST_NAME>Jane</FIRST_NAME>
      <LAST_NAME>Doe</LAST_NAME>
      <BIRTHDATE>06-03-76</BIRTHDATE>
      <GENDER>F</GENDER>
    </PERSON_T>
  </PERSONS>
</PERSONS_WRAP>
As you can convert a UDT to XML, it can also be done vice versa.
DECLARE
  l_xml    XMLTYPE;
  l_person person_t;
BEGIN
  l_xml := XMLTYPE('<PERSON>
    <FIRST_NAME>John</FIRST_NAME>
    <LAST_NAME>Doe</LAST_NAME>
    <BIRTHDATE>29-12-72</BIRTHDATE>
    <GENDER>M</GENDER>
    <SOCIAL_PROFILE></SOCIAL_PROFILE>
    <PHONE_NUMBERS></PHONE_NUMBERS>
  </PERSON>');
  l_xml.toobject(l_person);
END;
Note that the tags should be in uppercase otherwise the conversion will fail. Not all properties have to be present in the XML. If a tag doesn't exist in the XML, the corresponding property will be NULL.
JSON
Sometimes XML is a bit heavy. It is quite a verbose method to store the data. Every value in the document is surrounded by tags which tell us which field it is. This is where JSON may help. JSON consists of name-value pairs. Where XML is written like this: <FIRST_NAME>John</FIRST_NAME>, the JSON equivalent is: { "FIRST_NAME" : "John" }. Unfortunately there is no support for JSON like there is for XML in the database yet. There is however an opensource library available that implements JSON functionality. There is no implementation (yet) to convert a UDT to JSON
directly, but PL/JSON implements functionality to convert XML to JSON. Using XML as an intermediate step we can convert a UDT to JSON.
DECLARE
  l_json json_list;
BEGIN
  l_json :=
    -- convert
    json_ml.xmlstr2json(
      -- a converted XML instance
      XMLTYPE(
        -- of a UDT instance
        person_t( 'John'
                , 'Doe'
                , to_date('12-29-1972','MM-DD-YYYY')
                , 'M'
                )
      ).getstringval()
    );
  l_json.print;
END;
["PERSON_T", ["FIRST_NAME", "John"], ["LAST_NAME", "Doe"], ["BIRTHDATE", "29-12-72"], ["GENDER", "M"]]
Publishing APIs
Exposing the functionality and data in our database to external consumers is a frequent challenge.
Traditionally, many applications and application developers will use SQL to access the database. However, in applying some of the core concepts from Service Oriented Architecture and basic good programming practice we quickly realize that it may not be such a good idea to expose our data model so directly. Any change to the data model may directly impact many users of our database. Yet we do not want to be held back from creating improvements by such external consumers. Additionally, having 3rd parties fire off SQL statements to our database may result in pretty lousy SQL being executed, which may lead to serious performance issues. When it comes to data manipulation there are even more reasons why direct access to our tables is undesirable. Enforcing complex data constraints and coordinating transaction logic are two important ones.
So instead of allowing direct access to our tables, we should be thinking about publishing an API that encapsulates our data model and associated business logic and presents a third party friendly API. Using views on top of the data model is one way of implementing such an API, and if we use Instead Of triggers along with those views we can route any DML to PL/SQL packages that take care of business logic, as the sketch below shows. Other options for implementing an API include the native database web service option that was introduced in Oracle Database 11g, which allows us to publish SOAP Web Services from the database, or use of the Embedded PL/SQL Gateway to expose simple HTTP services, to be discussed a little bit later on. Note that APEX 4.x provides a lot of help for creating such RESTful services, as they are called.
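A minimal sketch of the view-plus-Instead Of-trigger option (the persons table, persons_vw view and persons_api package are hypothetical):
CREATE OR REPLACE VIEW persons_vw AS
  SELECT prs.first_name
  ,      prs.last_name
  FROM   persons prs;
CREATE OR REPLACE TRIGGER persons_vw_ioi
  INSTEAD OF INSERT ON persons_vw
  FOR EACH ROW
BEGIN
  -- route the DML to a package that enforces the data constraints
  -- and coordinates the transaction logic
  persons_api.create_person( first_name_in => :new.first_name
                           , last_name_in  => :new.last_name
                           );
END;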
Somewhere between the View approach and the HTTP based service way of thinking is the option of publishing a PL/SQL API. In this case, we use PL/SQL packages that define the public interface in their Specification and contain the firmly encapsulated implementation in their Body. Note that the Web Services will typically be just another layer on top of such a PL/SQL API.
When the operations supported in the interface need to leverage complex, nested data structures, such as an Order with Order Lines or a Hotel Booking with all guests sharing a room, UDTs are the perfect vehicle to use. Using a single parameter, a complex data set can be transferred. Because UDTs support, if not enforce, a structured programming style inside the package body, the case for UDTs is even stronger. And the database adapter that is frequently used in Oracle SOA Suite and Service Bus to integrate with the Oracle Database knows very well how to interact with PL/SQL APIs based on UDTs. In fact, many organizations have adopted the use of UDT based PL/SQL APIs as their best practice for making SOA Suite & Service Bus interact with the Database.
Let's take a look at an example of such a PL/SQL API. The API exposes a search operation through which consumers can lookup CDs. This search can be based on a number of search criteria, currently title, artist and year range. The result of the search is a collection of CDs with for each CD data such as title and year of release and a listing of all songs. Per song, the title and the duration are included.
Despite all the data involved, the API itself can be very simple:
PACKAGE music_api
...
  PROCEDURE search_for_cds
  ( p_cd_query      IN  cd_query_t
  , p_cd_collection OUT cd_collection_t
  );
The complexity is hidden away in the definition of the UDTs involved:
TYPE song_t AS OBJECT
( title    VARCHAR2(40)
, duration NUMBER(4,2)
);
TYPE song_list_t AS TABLE OF song_t;
TYPE cd_t AS OBJECT
( title      VARCHAR2(40)
, year       NUMBER(4)
, artist     VARCHAR2(40)
, track_list song_list_t
);
TYPE cd_collection_t AS TABLE OF cd_t;
TYPE cd_query_t AS OBJECT
( title     VARCHAR2(40)
, from_year NUMBER(4)
, to_year   NUMBER(4)
, artist    VARCHAR2(40)
);
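As an impression, a minimal sketch of what the package body behind this specification could look like (the cds and songs tables are hypothetical; the real implementation is available through the link further on):
CREATE OR REPLACE PACKAGE BODY music_api
AS
  PROCEDURE search_for_cds
  ( p_cd_query      IN  cd_query_t
  , p_cd_collection OUT cd_collection_t
  ) IS
  BEGIN
    SELECT cd_t( cd.title
               , cd.year
               , cd.artist
               , CAST( MULTISET( SELECT song_t(sng.title, sng.duration)
                                 FROM   songs sng
                                 WHERE  sng.cd_id = cd.id
                               ) AS song_list_t )
               )
    BULK COLLECT INTO p_cd_collection
    FROM   cds cd
    WHERE  (p_cd_query.title  IS NULL OR cd.title  LIKE p_cd_query.title)
    AND    (p_cd_query.artist IS NULL OR cd.artist = p_cd_query.artist)
    AND    cd.year BETWEEN NVL(p_cd_query.from_year, cd.year)
                       AND NVL(p_cd_query.to_year  , cd.year);
  END search_for_cds;
END music_api;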
Implementing this API should be fairly straightforward for seasoned PL/SQL programmers. An example implementation is available on this link: http://bit.ly/1dCNDnV.
Interacting with such an API is also straightforward, from a number of environments at least. PL/SQL programs can of course invoke the Music API and process the results returned from it, as the sketch below shows.
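A minimal sketch of such a PL/SQL consumer (the query values are made up):
DECLARE
  l_query cd_query_t := cd_query_t( '%'  -- title
                                  , 1990 -- from_year
                                  , 1999 -- to_year
                                  , NULL -- artist
                                  );
  l_cds   cd_collection_t;
BEGIN
  music_api.search_for_cds( p_cd_query      => l_query
                          , p_cd_collection => l_cds
                          );
  FOR indx IN 1 .. l_cds.COUNT LOOP
    dbms_output.put_line(l_cds(indx).title || ' (' || l_cds(indx).year || ')');
  END LOOP;
END;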
The Database Adapter can also invoke the API and process results returned from it, taking care of the conversion from and to XML that is the lingua franca inside the SOA Suite and Service Bus.
Other technology settings may be able to interact with stored procedures, but may have a problem in dealing with UDTs. For example, Java programs can call stored procedures through most JDBC drivers. However, working with UDTs is somewhat cumbersome in most cases. They are usually pretty good at XML processing though. One approach then is to add a wrapper around the UDT based API. This wrapper interacts in terms of XML and converts to and from the UDT based API.
Such a wrapper could be as simple as the following (jukebox_t is assumed to be a wrapper UDT around cd_collection_t, analogous to the persons_wrap type shown earlier):
PROCEDURE search_for_cds
( p_cd_query IN XMLTYPE
, p_cd_collection OUT XMLTYPE
) IS
l_cd_query cd_query_t;
l_cd_collection cd_collection_t;
BEGIN
p_cd_query.toobject(l_cd_query);
search_for_cds
( p_cd_query => l_cd_query
, p_cd_collection => l_cd_collection
);
p_cd_collection :=
XMLTYPE(jukebox_t(l_cd_collection));
END search_for_cds;
Some technologies have a hard time dealing with XMLType structures and prefer to have their XML served in strings. That would call for another wrapper layer, that converts XMLType to and from VARCHAR2. Again, a simple feat to accomplish.
PROCEDURE search_for_cds
( p_cd_query      IN  CLOB
, p_cd_collection OUT CLOB
) IS
  l_cd_query      XMLTYPE := XMLTYPE(p_cd_query);
  l_cd_collection XMLTYPE;
BEGIN
  search_for_cds
  ( p_cd_query      => l_cd_query
  , p_cd_collection => l_cd_collection
  );
  p_cd_collection :=
    l_cd_collection.getClobVal();
END search_for_cds;
Of course through the use of PL/JSON, it is quite easy to also expose a JSON based API. Converting from UDT through XMLType to JSON and vice versa is an out of the box operation with PL/JSON after all.
RESTful Services
There are many definitions in use for what RESTful services exactly are. We will not go into the intricacies of that theoretical debate. The essence is that a RESTful service exploits the core features of HTTP and can be accessed over HTTP using simple HTTP calls (plain old GET and POST, with PUT and DELETE for more advanced interaction). Messages exchanged with a RESTful service can be in any format, although XML and especially JSON are most common. RESTful services are stateless (they do not remember a conversation, only the current question).
RESTful services called for retrieving information are the simplest and by far the most common. When the provider of the service tries to maintain a semblance of true RESTful-ness, such services are typically defined around resources and suggest a simple drill down navigation style. A sequence of RESTful calls in our world of Employees and Departments could look like this:
HTTP GET request and its RESTful meaning:
- http://HRM_SERVER/hrmapi/rest/departments : List of all Department resources
- http://HRM_SERVER/hrmapi/rest/departments/10 : Details for resource Department with identifier 10
- http://HRM_SERVER/hrmapi/rest/departments/10/employees : List of all detail resources of type employee under Department resource with identifier 10
- http://HRM_SERVER/hrmapi/rest/departments/10/employees/4512 (could perhaps also be accessed as http://HRM_SERVER/hrmapi/rest/employees/4512) : Details for employee resource with identifier 4512
The response to these calls will typically be a text message in either XML or JSON format. Such service calls can be made from virtually any technology environment, even from within a browser in JavaScript. All modern day programming languages have ample support for making HTTP calls. That is one of the key reasons for their popularity.
The question then becomes: how to make such services with this style of URL composition available from PL/SQL. We need an underlying package - HRM_API - that contains the following pseudo code (see the sketch after this list):
- gather the requested data using SQL
- collect the data into an UDT
- convert the UDT to XML and perhaps onward to JSON
- write the converted result to the HTP buffer
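A minimal sketch of that pseudo code in PL/SQL (hrm_rest_api, department_t, department_list_t and departments_wrap are hypothetical names; only the list-all-departments path is shown):
CREATE OR REPLACE PACKAGE BODY hrm_rest_api
AS
  PROCEDURE handle_request (p_path IN VARCHAR2)
  IS
    l_departments department_list_t; -- a nested table of department_t
    l_xml         XMLTYPE;
  BEGIN
    IF p_path = '/departments' THEN
      -- gather the requested data using SQL, collecting it into a UDT
      SELECT department_t(dpt.deptno, dpt.dname)
      BULK COLLECT INTO l_departments
      FROM   dept dpt;
      -- convert the UDT to XML (via a wrapper UDT, as shown earlier)
      l_xml := XMLTYPE(departments_wrap(l_departments));
      -- write the converted result to the HTP buffer
      htp.p(l_xml.getstringval());
    END IF;
  END handle_request;
END hrm_rest_api;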
HTTP GET requests in the format shown in the table above are received by the Embedded PL/SQL Gateway (EPG) and have to be interpreted in order to result in a call to the HRM_API package with the appropriate parameters. Typically, an HTTP request handled by the Embedded PL/SQL Gateway looks something like:
http://database_host:http_port/path/package.procedure?parameter1=value&parameter2=value
To make the EPG work with the REST-style URL requires the use of a little known feature in the dbms_epg package. This is the same package used for creating the DAD (database access descriptor) that links a URL path to a database schema.
The statement
BEGIN
  DBMS_EPG.create_dad
  ( dad_name => 'hrmrestapi'
  , path     => '/hrmapi/*'
  );
END;
creates a DAD called hrmrestapi that is associated with the path hrmapi. Subsequently, this DAD is authorized to a specific database schema, say SCOTT or HR.
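A minimal sketch of that authorization step (the schema name is just an example):
BEGIN
  dbms_epg.authorize_dad('hrmrestapi', 'HR'); -- dad_name, database user
END;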
That then means that any HTTP request starting with http://database_host:http_port/hrmapi is routed to that database schema and expects to be handled by a package in that schema. The additional step we need to take is to configure a special handler package that interprets the REST-style URL for us. We do so with code like this:
BEGIN
  dbms_epg.set_dad_attribute
  ( dad_name   => 'hrmrestapi'
  , attr_name  => 'path-alias'
  , attr_value => 'rest');
  dbms_epg.set_dad_attribute
  ( dad_name   => 'hrmrestapi'
  , attr_name  => 'path-alias-procedure'
  , attr_value => 'hr.hrm_rest_api.handle_request');
END;
Here we instruct the EPG to send any request that arrives on the hrmrestapi DAD and starts with rest (meaning: all requests like http://database_host:http_port/hrmapi/rest/andsomethingelseintheurl) to the handle_request procedure in the hrm_rest_api package.
The signature of this procedure is very straightforward:
procedure handle_request (p_path in varchar2);
The parameter p_path will contain whatever comes in the URL after /hrmapi/rest. Looking back to the table of RESTful URLs, the procedure may expect to have to deal with these values for p_path:
/departments
/departments/10
/departments/10/employees
/departments/10/employees/4512
/employees/4512
The article at http://bit.ly/1k3PjVx provides a complete example of the source code for dealing with these URLs and implementing the RESTful service.
Conclusion
Working with User Defined Types not only simplifies your code (instead of sending five parameters, you can now send a single parameter with all five values in it), it can also speed up the interaction between PL/SQL and SQL using the bulk operations. By adding behavior to the UDT you define logic as close to the data as you possibly can. Communicating with the outside world can also be done using UDTs, that way hiding the data model and achieving a high level of decoupling.
Ref:
Basic Components of Oracle Objects - http://docs.oracle.com/cd/B28359_01/appdev.111/b28371/adobjbas.htm
Collections in Oracle Part 1 - http://allthingsoracle.com/collections-in-oracle-pt-1/
Collections in Oracle Part 2 - http://allthingsoracle.com/collections-in-oracle-part-2/
Bulk Processing in Oracle Part 1 - http://allthingsoracle.com/bulk-processing-in-oracle-part-1/
Bulk Processing in Oracle Part 2 - http://allthingsoracle.com/bulk-processing-in-oracle-part-2/
Using Table Functions - http://technology.amis.nl/2014/03/31/using-table-functions-2/
PL/JSON - http://pljson.sourceforge.net/
Creating RESTful services on top of the Embedded PL/SQL Gateway - http://technology.amis.nl/2011/01/30/no-jdbc-based-data-retrieval-in-java-applications-reststyle-json-formatted-http-based-interaction-from-java-to-database/
Implementing the Enterprise Service Bus Pattern to Expose Database Backed Services - http://www.oracle.com/technetwork/articles/soa/jellema-esb-pattern-1385306.html
Patrick Barel
www.amis.nl
AMIS
Edisonbaan 15
3439 MN Nieuwegein
+31 (0) 30 601 6000
info@amis.nl
www.amis.nl
Twitter: @AMIS_Services
Facebook: https://www.facebook.com/AMIS.Services?ref=hl
OTECH PARTNER:
AMIS is internationally recognized for its deep technological insight in Oracle technology. This knowledge is reflected in the presentations we deliver at international conferences such as Oracle OpenWorld, Hotsos and many user conferences around the world. Another source of information is the famous AMIS Technology Blog, the most referred to Oracle technology knowledge base outside the oracle.com domain. However you arrived here, we appreciate your interest in AMIS.
AMIS delivers expertise worldwide. Our experts are often asked to:
- Advise on fundamental architectural decisions
- Advise on license-upgrade paths
- Share our knowledge with your Oracle team
- Give you a headstart when you start deploying Oracle
- Optimize Oracle infrastructures for performance
- Migrate mission-critical Oracle databases to cloud based infrastructures
- Bring crashed Oracle production systems back on-line
- Deliver a masterclass
Lucas Jellema
www.amis.nl
STEP BY STEP
INSTALL ORACLE
GRID 11.2.0.3
ON SOLARIS 11.1
Osama Mustafa
www.gurussolutions.com
twitter.com/OsamaOracle
www.facebook.com/
osamaoracle
jo.linkedin.com/in/
osamamustafa/
Introduction
Oracle Clusterware is portable cluster software that allows clustering of independent servers so that they cooperate as a single system. Oracle Clusterware was first released with Oracle Database 10g Release 1 as the required cluster technology for Oracle Real Application Clusters (RAC). Oracle Clusterware is an independent cluster infrastructure, which is fully integrated with Oracle RAC, capable of protecting any kind of application in a failover cluster.
Oracle Grid Infrastructure introduces a new server pool concept allowing the partitioning of the grid into groups of servers. Role-separated Management can be used by organizations in which cluster, storage, and database management functions are strictly separated. Cluster-aware commands and an Enterprise Manager based cluster and resource management simplify grid management regardless of size. Further enhancements in Oracle ASM, like the new ASM cluster file system or the new dynamic volume manager, complete Oracle's new Grid Infrastructure solution.
Now that you know what the setup will look like, with all information about locations and, finally, the operating system: during the installation I faced a lot of bugs, since Solaris 11.1 is new, but it was an amazing experience and I learned something new.
Let's start:
Step #1:
You need to know what the /etc/hosts file will look like after adding the IPs:
#########NODES#########
180.111.20.21 Test-db1
180.111.20.22 Test-db2
########################
#########NODE-One-IP###########
180.111.20.28 Test-db1-vip
10.0.0.1 Test-db1-priv
################################
#########NODE-Two-ip############
180.111.20.29 Test-db2-vip
10.0.0.2 Test-db2-priv
################################
######SCAN-IP##################
180.111.20.30 Test-db-scan
###############################
Step #2:
Check the OS version using the below command:
/usr/platform/`uname -i`/sbin/prtdiag
Step #3 (Optional):
Because I was working remotely, not directly from the data center, I configured vncserver to enable access to the server GUI and ran the installer from there.
a. Check that the required package is installed using
pkg info SUNWxvnc
b. Add the below line to /etc/services (or use step d as the command line alternative)
vnc-server 5900/tcp # Xvnc
c. Configure /etc/X11/gdm/custom.conf
[xdmcp]
Enable=true
[security]
DisallowTCP=false
AllowRoot=true
AllowRemoteRoot=true
d. Instead of step b you can do the below; I just want to mention both:
svccfg -s x11-server setprop options/tcp_listen=true
svccfg -s xvnc-inetd setprop inetd/wait=true
svccfg -s xvnc-inetd setprop inetd_start/exec=astring:"/usr/bin/Xvnc -geometry 1280x720 -inetd -query localhost -once securitytypes=none"
e. Finally, enable the services
svcadm restart gdm xvnc-inetd
svcadm disable gdm xvnc-inetd;
svcadm enable gdm xvnc-inetd
f. Just to make sure my work was right, I restarted the server and checked vncserver again.
Step #4 (Optional):
I was thinking: should I copy the Oracle software 4 times? It is almost 35 GB, so 35*4; you are talking about a huge waste of time. So what I did here: copy the files once on one server and configure NFS to share them between all nodes, which is easier and saves time.
a. First you have to enable NFS on all servers using the below commands
svcadm enable nfs/server
svcadm restart nfs/server
b. If you copied the files on server one then this command should be done on server one; if you copied the files on server 2 this command should be done on server 2. It depends where you copied the software.
share -F nfs -o rw /base
c. On all servers you can run the mount command to share all the files and start the setup.
mount -F nfs Base-Server-IP:software-location-on-remote-server mount-point-on-all-servers
Step #5: Prerequisites
As on any Linux/Unix system there are prerequisites for the operating system; follow the below.
1. Users and groups
Oracle Solaris 11 provides you with a command called zfs (an amazing command to manage file systems). I created the Oracle home using this command, and also created the mount point /u01.
This is for the Oracle user
zfs create -o mountpoint=/u01 rpool/u01
Create the groups
groupadd -g 1001 oinstall
groupadd -g 1002 dba
groupadd -g 1003 oper
Create the Oracle user
zfs create -o mountpoint=/export/home/oracle rpool/export/home/oracle
useradd -g oinstall -G dba oracle
passwd oracle
Create the necessary directories for our installation
chown -R oracle:oinstall /export/home/oracle
mkdir -p /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
chown -R oracle:oinstall /u01
This is for the Grid user
Create the groups for the Grid user
groupadd -g 1020 asmadmin
groupadd -g 1022 asmoper
groupadd -g 1021 asmdba
Create the Grid user
zfs create -o mountpoint=/export/home/grid rpool/export/home/grid
useradd -g oinstall -G dba grid
usermod -g oinstall -G dba,asmdba grid
passwd grid
chown -R grid:oinstall /export/home/grid
Create the necessary directories for our installation
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
chown grid:oinstall /u01/app/11.2.0/grid
chown grid:oinstall /u01/app/grid
#!important!# chown -R grid:oinstall /u01
Step #6:
Configure the .profile, which is located in /export/home/oracle and /export/home/grid
export ORACLE_BASE=/u01/app/oracle/
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export GRID_HOME=/u01/app/11.2.0/grid/
export ORACLE_SID=TEST1
export PATH=$PATH:/usr/sbin:/usr/X11/bin:/usr/dt/bin:/usr/openwin/bin:/usr/sfw/bin:/usr/sfw/sbin:/usr/ccs/bin:/usr/local/bin:/usr/local/sbin:$ORACLE_HOME/bin:$GRID_HOME/bin:.
Note: copy the .profile to all database servers using the scp command
cd /export/home/oracle
scp .profile oracle@Server-ip:/export/home/oracle/
Step #7:
Oracle Grid and networking notes:
1. The broadcast must work across any configured VLANs as used by the public or private interfaces.
2. Across the broadcast domain as defined for the private interconnect.
3. On the IP address subnet ranges 224.0.0.0/24 and 230.0.1.0/24.
According to Oracle you need to check the UDP settings using the below commands
ndd /dev/udp udp_xmit_hiwat
ndd /dev/udp udp_recv_hiwat
To avoid a reboot you can set the values both in memory and for reboot time using the below.
In memory
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536
On reboot
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536
Step #8:
In this step you need to make sure of the disks on both nodes; in my case I am using EMC storage.
List the disks using
/usr/sbin/format
fdisk the raw disks using the fdisk command.
Change the owner to grid using
chown grid:asmadmin /dev/rdsk/Disk-name
chmod 660 /dev/rdsk/disk-name
You can list your available disks using
ls -ltr /dev/rdsk/emc*
Change the owner and permissions for these disks
chmod 660 emc*
chown grid:asmadmin emc*
Note: in Unix each disk has slices from 0-6, like the below:
crw-rw---- 1 grid asmdba 302, 0 Apr 23 19:02 emcp@0:a,raw
crw-rw---- 1 grid asmdba 302, 8 Apr 23 20:38 emcp@1:a,raw
crw-rw---- 1 grid asmdba 302, 16 Apr 22 14:05 emcp@2:a,raw
crw-rw---- 1 grid asmdba 302, 24 Apr 23 21:00 emcp@3:a,raw
crw-rw---- 1 grid asmdba 302, 32 Apr 23 21:00 emcp@4:a,raw
crw-rw---- 1 grid asmdba 302, 40 Apr 23 21:00 emcp@5:a,raw
crw-rw---- 1 grid asmdba 302, 48 Apr 22 14:05 emcp@6:a,raw
Step #9:
Disable ntp using
svcadm disable ntp
Step #10:
By default Oracle Solaris SPARC prevents root access; for example, we created the oracle and grid users but we cannot get to root using the su command. To enable root access for oracle and grid, edit the file /etc/user_attr and add the below lines
oracle::::defaultpriv=basic,net_privaddr;roles=root
grid::::defaultpriv=basic,net_privaddr;roles=root
Step #11:
Now we need to configure the memory parameters for the oracle and grid users and make them permanent.
To check the current memory value for both users
prctl -n project.max-shm-memory -i process $$
Modify the memory values using
projmod -a -K "project.max-shm-memory=(privileged,32G,deny)" -U oracle default
projmod -a -K "project.max-shm-memory=(privileged,32G,deny)" -U grid default
projmod -s -K "project.max-shm-memory=(privileged,32G,deny)" default
To make sure the new memory values took effect for the oracle and grid users, open a new terminal and run the prctl command again.
Step #12:
During the installation Oracle will check the swap memory, so you need to increase the swap depending on your setup; I will use the zfs command for this as well.
Check the swap value
bash-3.00# swap -lh
swapfile dev swaplo blocks free
/dev/zvol/dsk/rpool/swap 256,1 16 4194288 4194288
Remove the swap using the root user
bash-3.00# swap -d /dev/zvol/dsk/rpool/swap
Configure the new swap
bash-3.00# zfs set volsize=20G rpool/swap
bash-3.00# swap -a /dev/zvol/dsk/rpool/swap
Check the new value
bash-3.00# swap -lh
Step #13:
One more step: configure SSH between the nodes. You can do this step during the installation or using the below command. Oracle provides you with a new way: you can now configure SSH using sshsetup, which already exists within the media.
For the Oracle user
# ./sshUserSetup.sh -hosts "node1 node2 node3 node4" -user oracle -advanced -noPromptPassphrase
For the Grid user
# ./sshUserSetup.sh -hosts "node1 node2 node3 node4" -user grid -advanced -noPromptPassphrase
Congratulations, you can start your setup now!
Step #14:
Start installing the Grid Infrastructure. In my case I chose to install the software only and then configure ASM; this way, if an error appears I will know where to start troubleshooting. Follow the screens (these steps should be done as the Grid user).
Step #15:
After the successful installation I now need to configure ASM (this step should be done as the Grid user), from one node only.
export ORACLE_HOME=/u01/app/11.2.0/grid/
cd $ORACLE_HOME/bin
Run ./asmca
The below screen should open.
Three ASM disk groups should be created:
1. DATA: datafiles and parameter files (should be big)
2. FRA: Flash Recovery Area (should be big)
3. CRS: this disk group is for OCR and voting files (4-5G will be enough)
After the installation is done you should see them MOUNTED on both nodes.
REDUNDANCY:
NORMAL REDUNDANCY - two-way mirroring, requiring two failure groups.
HIGH REDUNDANCY - three-way mirroring, requiring three failure groups.
EXTERNAL REDUNDANCY - no mirroring for disks that are already protected using hardware mirroring or RAID.
If you have hardware RAID it should be used in preference to ASM redundancy, so this will be the standard option for most installations.
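asmca is one way to create the disk groups; they can also be created with SQL from the ASM instance. A minimal sketch (the disk paths are examples from this setup; adjust them to your own devices):
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/rdsk/emcp@1:a,raw'
     , '/dev/rdsk/emcp@2:a,raw';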
Step #16:
Finally you are done with the Grid Infrastructure, and now you should configure the database on RAC (this step should be done as the Oracle user). Usually I install the software only and then call dbca to configure the instance.
Done! You can reboot the nodes to test your setup.
Reference:
1- Oracle Documentation Here.
2- Oracle White Paper Here.
THE RELEVANCE
OF THE USER
EXPERIENCE
BECAUSE A NAIVE APPROACH
TO THE USER EXPERIENCE IS
NO LONGER ACCEPTABLE
Lucas Jellema
www.amis.nl
twitter.com/lucasjellema
nl.linkedin.com/pub/
lucas-jellema/0/536/39b
Ever since the SQL*Plus prompt was no longer acceptable as the only way for users to access the contents of the database have Oracle developers been building user interfaces. That is where we got Oracle Forms from (pka SQL*Forms). And that explains why for so long the user interfaces have basically been windows on data, supporting CRUD operations and thereby enabling any task and process. Initially only character based, running on a terminal and gradually morphing into the GUI on the desktop and onwards into the browser. But essentially still a window on data.
In recent years we have come (or should I say: been made) to realize that this somewhat naive approach to the user experience is no longer acceptable. Providing our users with a modern experience is not a luxury; it is critical. That experience determines the productivity of the users, the quality of their work and in the end even their willingness and ability to use the application at all. This article provides an overview of what the current state of affairs is with regard to the user experience, focusing on the world of Oracle products and Oracle technology. It briefly discusses where we have come from and how and why the world has changed. It will provide some insights and inspiration for the evolution of the experience you provide to your users.
The article is strongly influenced by the Oracle Applications User Experience team and their Usable Apps initiative that sets the standard for the next generation user experience with Oracle Applications. Their vision, approach, examples, best practices and tools used inside Oracle to create the next releases of for example the Fusion Application products equally apply to custom applications, especially those developed using Oracle Fusion Middleware technology.
The definition of User Experience employed by the Oracle Applications UX team: "A complete contextual experience: an understanding of everything that makes up an experience for a user who works with an application: technologies, tools, business processes and flows, tasks, interactions with others, physical and cultural work environments." This clearly goes way beyond just the user interface!
Figure: The huge influence within Oracle from the User Experience is most clearly demonstrated in this Simplified UI in the recent HCM Cloud R8 release.
Where we come from
In the early nineties, life was relatively easy for IT staff, even if we may not have realized it at the time.
End users only used computer applications at work, so whatever their enterprise application looked like was the standard. At that time, it was character based on simple terminals with only keyboard interaction. The terminal could be monochrome gray, green or orange on a black background; the type of terminal that until fairly recently you could find at the check-in desks at airports.
The Windows revolution introduced the Graphical User Interface (GUI). The GUI ran locally on the Client that did all the user interaction things and leveraged the server for data processing. Both users, designers and developers were extremely happy with all the colors, buttons and mouse movements and essentially continued to create the same type of table and form style applications as before. However, the resolution increased, the number of pixels multiplied and the data presented on the pages of the applications magnified substantially. Using tabs and popups, the pages were virtually expanded to hold even more data.
This was very much the era of "One size fits all". One application, running on one device (the desktop) on one location (the office), to be used by all users regardless of their specific role or task. All the data was available on those virtually enlarged pages so all tasks were supported by the application.
This also was the age of the silo. Each application was a world unto itself. Using its own database, its own business logic and its own set of user interfaces, the application typically did not have interaction and certainly not integration with other IT systems. File based data exchange, perhaps using fancy solutions including database links, was about the extent of the integration. Each business application was represented by its own icon on the user's desktop and that was about as far as UI integration went. The Windows desktop was our idea of a mash up. Each application was typically supported by its own team of designers and developers that took care of all aspects from UI to data model. In the world of Oracle, they would initially use Oracle Forms 4.5 on top of the brand new Oracle7 Database. Many are still using Forms and quite a few still use that Oracle7 or perhaps the long ago de-supported Oracle 8i Database.
With the advent of the three tier architecture and the rise of the internet and browser as the preferred application platform, not a whole lot changed for enterprise applications. The distribution of the application and any patches and upgrades became much simpler. However, Java Applet technology, as employed by WebForms, made it possible to run the original client/server desktop applications inside the browser. While initially that kick-started the three tier architecture and the use of the browser as the platform, it did little to innovate the user interface and the whole user experience. Enter query/execute query remains pretty much the same in the browser. The focus in the UI on opening up the database, presenting a window on the data, was still ubiquitous. User interaction with the computer continued to be through mouse and keyboard, despite several promising but failed attempts at voice recognition.
In terms of the technology available to the developers, little was changed too. Pixel perfect positioning of a fairly limited set of UI components with a distinct Windows look and feel to them was still the name of the game. Only few ventured outside the applet or introduced webby newness into the applet.
However, from somewhere in the middle of the first decade of the century, many things were slowly but steadily evolving, and would soon change the picture. Dramatically.
State of the nation
Today, the world is quite different. When exactly the change came about is hard to say. It was of course an evolutionary process with some acceleration points. And the process of change was and is not the same for all regions in the world, for all industries and for all companies. Important change drivers in the position of applications and the requirements for user experience include internet, mobile devices, touch devices, globalization, 24/7, Cloud, Apple, Moore's law (or the continued increase in compute power), battery technology, movies (and their portrayal of future technologies and man-machine interaction), wearables, social media, and the pace of the business.
At some point in the previous decade, the consumer experience with computers overtook the enterprise experience. No longer is the workplace in the lead when it comes to modern, fancy, advanced computer interaction. And end users are starting to become seriously disgruntled with the enterprise applications they are forced to deal with at work when they know from their social media interaction, their online shopping experience and their gaming what should be possible in terms of using computers, also for doing one's work. When the enterprise user experience becomes so different from what users increasingly experience in their own environment, they (and their employers) do not benefit from what has become intuitive and natural while doing their work, they do not leverage the possibilities of the technology and the clear distinction between work and not-work continues to exist, both in time and space. Productivity, quality of work and motivation are among the main factors that will suffer from such a gap.
At the same time, the number of people interacting with the enterprise information systems increases. Use of computers is pervasive: every role in the modern organization encompasses interaction with IT systems to check, register, monitor, report, approve, order. From blue collar employees to the boardroom. Managers become users: rather than instructing their secretaries to send emails for them, managers have started to directly engage with computers themselves, even if it took a status gadget like the iPad to get them there. The need for speed is another factor; of course they cannot afford to wait until Monday when Janet is in the office to print out the mails they have to digest and respond to.
Additionally, in order to make staff departments leaner (and less expensive), organizations rely heavily on self-service style applications. Get information about your remaining vacation days, call in sick and report being well again, submit expense reports, learn about retirement facilities, order office supplies, report a broken piece of equipment or suspicious event? Human intervention with all of these activities is distinctly lopsided: the reporter interacts with a self-service application (which is sometimes more about self than about service).
Also quite importantly: external parties become users. Customers as well as suppliers, regulators, and others increasingly engage directly with an organization's enterprise IT systems.
With such diverse internal parties, a more intuitive user experience with as little learning curve as possible (and therefore low training requirements) is strongly desirable. With external parties it is virtually imperative.
Ten years ago, what was called the mobile workforce was a small vanguard of sales representatives, on site inspectors and servicemen. Today, to be mobile means to be able to interact with enterprise IT at any time, from any location, using any from a wide range of devices. Collaboration, communication, quick decisions, constant monitoring: a far larger percentage of the employees of organizations engage in these activities, not just during office hours and on premises, but more or less around the clock. While at the airport or in a plane, from the car or the queue at the supermarket, from a bench in the park or during commercial breaks from the couch. And yes, sometimes also from a desktop PC in a real office environment.
In this environment, everything is digital. On the edges of the system, information may come in on paper or may be sent out in the form of a letter, but anything in between is strictly digital. Scanning and interpreting paper based data, managing digital content, finding ways to express contracts and signatures in a strictly digital way are common challenges that are part of the changing user experience. Complementing this digital content with new media (sound, pictures and video) is still relatively new in enterprise applications, but is likely to become more commonplace. VoIP and Skype-style video calls, integrated into enterprise applications, are not futuristic, but rather round the corner. Collecting visual inspection results is already daily practice for many organizations.
It is amazing how much data processing power modern computers (and application developers) have at their disposal, and even more how little of it is used to really facilitate the end user. Most applications of the recent past simply show data. They do very little in terms of data processing to turn that data into meaningful information. Instead, the interpretation of the data is left as an exercise to the end user. Unless of course the application is labeled BI (business intelligence). In the section Visualization, we will revisit the desire of users to be facilitated in the job they have to do, the responsibility they have to fulfill. They do not care about data as such, they need facilities to do their tasks more efficiently, with higher quality and, if at all possible, a little more conveniently. Typically they prefer information, or better yet insight and calls to action, over just the raw data.
Figure: Oracle E-Business Suite R11 Timesheet, a semi-modern user experience.
The user experience is not a luxury to coddle a new generation of spoilt, bratty end users. The user experience is critical for efficiency and for constant, 24/7 engagement of employees. Just as Sheldon Cooper puts the fun in funeral, there is no reason why a decent user experience should not be used to improve the quality of the timesheet or expense report application. Accessible, attractive user interfaces not only speed up the actual process once an employee or customer has started to engage, they also lower the threshold to actually start the interaction, reduce the number of errors made during the transaction and decrease the risk of the user abandoning the process midway. This will save money on a serious scale.
For SaaS providers, there is even more at stake: they have to differentiate against the competition and they really have one chance to make a first impression. The user experience is obviously the first thing that counts in that first impression. When Oracle is competing with its cloud applications, the big challenger is not so much the on premises giant of old, SAP, but rather classic cloud vendors like SalesForce or more recent vendors with modern applications designed for the cloud and today's user experience, such as Workday.com.
What Comes Next
We are not alone in facing the user experience challenges of today. All traditional vendors of enterprise applications have to deal with these same challenges, including Oracle. Clearly, Oracle has never had a great reputation for its user interfaces. Being firmly rooted in the database, on the server side of the enterprise IT systems, most Oracle applications do not have a reputation for a breezy, light weight, attractive, modern, beautiful look and feel. And until not too long ago, that reputation (or lack of it) was well deserved. Even the early 2000s initiative around the Oracle Browser Look And Feel (or BLAF) guidelines, while relevant and well-intended as well as consistent, was behind the game from the very beginning.
A change has come at Oracle. Oracle wants to lead in User Experience. Plain and simple. To that end, it has established the Applications User Experience team (back in 2007), a relatively independent team within Oracle that explores all kinds of UX options, conceptual and technology-wise, and translates them into guidelines, templates, building blocks
that make it possible to apply the UX vision to actual software. The team does this first and foremost for Oracle's own Applications development teams and makes most of their work available to outsiders to also make use of when developing their own custom applications.
In this section, we will look at a number of concepts and designs that have come out of this UX team and that have had, and continue to have, tremendous influence on the way the Fusion Applications look, such as HCM Cloud and Sales Cloud, that take the lead in the UX evolution. Slowly but steadily, the influence from the Applications User Experience team permeates in other Oracle products and through the technology components into custom built applications as well, perhaps yours included.
The team has developed a number of key concepts and messages and turned them into actual software as well. A number of their ground rules are discussed next, sometimes interpreted somewhat liberally or expressed in my very own words.
One Size does NOT Fit All
Applications should not try to be a one size fits all solution, where a single user interface attempts to satisfy all users, internal and external, in all their respective roles and through all their individual tasks. User interfaces instead should be created as small, highly focused and specific interaction vehicles for specific tasks. If the UI can be built on top of rich reusable business services that expose data and operations, the development of the user interface itself can be relatively simple and cheap, especially since the UI supports but a single task for a specific user group. Such user interfaces can be created in large numbers, each being small and cheap and not meant to last. When the task definition changes or a new type of device is used for performing the task, either change or even replace the UI. Because the business process and the services are likely to not change so frequently, these can easily be reused for the next generation of user interfaces.
Figure: the upended pyramid represents an architecture that consists of consolidated enterprise resources, reusable services and numerous, small, tailored user interfaces.
90:90:10
If you do not have the option to rigorously replace existing enterprise applications with task-specific user interfaces, you have other options. The 90:90:10 rule states that 10% of the functionality of typical enterprise applications is used by 90% of the users during 90% of their time. In other words: a fairly small portion of the application takes most of the heat. Most users hardly ever see more of the application than that 10%. Focusing your efforts primarily on that 10% of the functionality when providing the next generation user experience therefore pays off: it is the most visible part, it impacts most people, and it helps realize the biggest gain in productivity.

Simplified UI
The 90:90:10 rule is an important driver for what the Oracle UX team calls the simplified UI. The 90% group consists largely of infrequent - or at least not full-time-heads-down power - users. They perform self-service tasks that do not require the full power of the enterprise application, but only about 10% of it, as the rule states. The simplified UI is a wrapper around the enterprise application platform. The tasks that are surfaced and highlighted in the simplified user interface represent the 10 percent of tasks that 90 percent of people are doing 90 percent of the time.

Note: In the section How to get going, a quick introduction is provided to how any organization can create this type of user interface using a number of core ADF components.

Figure: Simplified UI in Oracle HCM Cloud
Figure: Screenshot from the Simplified UI in Oracle HCM Cloud R8
Simplicity
The theme of simplicity takes us about as far as we can get from the "use every last pixel to put as much data on the page as we possibly can" design style that is so characteristic of many Forms applications and even later day browser based user interfaces. Simplicity in the view of the UX team means just the right amount - of data, of functionality, of anything on the pages of the user interface. The UI should contain what you most frequently need in that context - not the 90% stuff that is only occasionally required. That is clutter most of the time and should not be in the way all of the time. If the user can rely on the fact that there is an easy access path from the current context to that second level data and functionality, then it is no problem to leave it out of the primary UI.

As the UX team states: "We give users less to learn and more opportunity to do their work. Our design is very approachable, touchable. Consumer apps can only be successful if they are highly intuitive. In the fast moving world of app stores, users won't accept any learning curve at all. And the same principles can be applied to the enterprise app experience: very intuitive, no training required to get going. The power of the enterprise application is still there, and now we are presenting the user experience in a way that anyone can use that power effortlessly."

Rules of Engagement - Glance, Scan, Commit
One way to provide a simple, intuitive experience is by recognizing the fact that users frequently work in three stages. Jeremy Ashley, leader of the UX team and formally known as Vice President of Oracle Applications User Experience, compares this with shopping for clothes. The first step is to glance over the rack, to quickly locate anything that might be interesting. The next level is to get the hanger from the rack, hold up the piece of garment before one's body, maybe show it to a companion or look in a nearby mirror. The third step in the process is considerably more involved: take the blouse or the pair of trousers to a dressing room to actually try it on - and in a considerable number of cases actually continue to buy it.

Obviously, glance is superficial, quick and meant to distill examples of potential interest. Scan takes the areas that deserve attention and subjects them to a closer inspection. That may result in either confirmation that they are indeed of interest and further action is required - commit - or the initial impression does not warrant more engagement right now.

This same three stage engagement model works well in the user interface of an enterprise application. At the first level, information that helps the user quickly get an overview of the state of affairs in a certain context and find the areas that deserve more attention. Then an easy drill down step to the second level, where the selected area is presented with more detail and context and where some quick actions can be performed - such as a quick approval, sending a note, adding a tag or comment, or making a simple update. When the engagement stretches beyond this scan level, another in context navigation commences - the real commit. This means the user virtually rolls up his sleeves, sits down, takes a deep breath and gets ready for some real action.
In terms of the simplified UI, the first level - glance - needs to be at the user's fingertips. Hovering around at the glance level should be effortless, slick and quick. The drill down to the scan level should also be very smooth, easy to perform and rapidly executed, as should be the step back from scan to glance. These operations should be accessible on mobile devices, operable while standing in line at Starbucks or airport security. Once the user commits, the navigation still should be simple to activate. However, at this point, a serious mood change takes place on the part of the user. She embarks on an activity that will not be done in just a few seconds. That lightning fast application performance - so desirable at the first two levels - is not so overridingly important any longer. After glancing and scanning, the user is now buying into the way of doing things of the enterprise application: she has gone to where she needs to go to actually complete whatever task she is engaged in, which may be that one size fits all screen for the power user.

Figure: Screenshot from Oracle Sales Cloud R8 - Glance level to quickly identify problem areas; clicking on any of these cards allows drill down to the scan level where more details and additional context are available, as well as some quick actions

Task & Process oriented
The user interface the users are dealing with should focus on the specific task a user needs to perform, instead of offering a generic window-on-data - such as the CRUD-style Forms of the Client/Server era - that allows all tasks and therefore supports none. In order to be truly intuitive, requiring no training and making efficient execution of tasks possible, user interfaces should be tailored to the role, process and task at hand.

This may require some thinking outside the box. For example: if a business user's responsibility in a business process is making a decision based on fairly simple, straightforward information, then the best way to ensure quick and painless contributions from that user may be by sending that person an email that holds all information - including a deadline for making the decision - as well as a hyperlink to activate for each of the possible decision outcomes.
It is definitely not a good idea to have that user start up a client/server application, browse through a three level nested drop down menu, open a page, then enter search criteria and execute the query to get the relevant records in context, have the user locate five pieces of information on a page filled to the brim with fields, checkboxes, radio groups and tabs with even more data, make the decision by setting a different value in a dropdown list and finally press the Commit button to confirm the decision. That seems rather obvious - but last year I encountered exactly that situation at a Dutch financial institution.

A common way to support a specific task is through the use of a multi-step wizard that guides a user along a potentially complex path in clear steps with the right level of complexity.

Visualization
With all the data processing capabilities at our disposal, it is remarkable - in a disappointing way - for how long we have surfaced raw data to our end users, pretending that was the information they were looking for. Frequently, the information was hidden in the data, and we left it as an exercise to the user to extract the information and derive from it the decisions and actions to be taken.

Visualization can be regarded as the presentation of information in a way that enables the user to fulfill his or her responsibilities correctly and completely, in a timely, efficient, convenient manner. A key aspect is to step away from raw data and to present only information - and better yet, to present the information in a way that offers insight and suggests relevant actions and decisions.

We have technology that is good at processing data in many ways, including filter, structure & sort, abstract [away irrelevant details], aggregate, associate/interpret and ultimately even predict. The next figure shows a very simple example of how merely structuring data can lead to much easier to access information:

Figure: Right and Left is the same data. Counting the number of circles is much easier when a little structuring of the same data has been done

To be able to present information that is relevant to a user, we of course need to understand:
• What are the user's responsibilities?
• What actions/decisions may have to be taken?
• What information is required to perform an action?
• Which information determines if an action should be taken?
• How should the user be informed about an action that needs taking?
• What shape does the call-to-action take?
• How should the information required to start an action or make a decision be presented?
• What data is the information derived from [and how]?

A little understanding of human biology will help to take the next step. If we make sensible use of the various ways in which our body and mind collect and interpret information, we can come up with visualizations that allow for much quicker and better interpretations and reactions. If we can unleash the associative brain and the unconscious background processing of the human mind, we accelerate the information processing capabilities of our end users. By leveraging our human ability to collect, interpret and interact along multiple dimensions, we create an experience that is much more efficient, effective and pleasant. Today's technology allows easy exploitation of such additional dimensions beyond plain text based table presentations of reams of data. Some examples are: Color, Size, Shapes/Font, Story/Atmosphere, Icons, Sound, Animation, 3D presentation, Interaction (drill down, roll up, pivot).

A simple emoticon can convey so much meaning with so little effort. It is a simple and powerful example of how a visualization can represent certain data and information in a way that is very telling and easy to interpret quickly.
Visualizations can be used to highlight and categorize specific information, to provide context for certain information - for example with time and geographic location - and to allow humans to exploit their talent for visual comparison and extrapolation.

Many types of charts have been developed over time to present information in ways that make it accessible and interpretable: from simple line charts (good for trending, interpolation and extrapolation) to bar charts and pie charts (for simple comparison) and multi-dimensional displays such as bubble charts and funnels. Timelines and maps are good ways to provide either time or space context. Tag clouds can be used to very rapidly assess relative (occurrence based) importance. Tree maps and sun bursts allow for multi-level hierarchical comparison.

Summarizing: plenty of ways are available to represent information. And associated with these representations are interaction paths: drill down, roll up, navigation, reorientation and other interactions further enhance the interpretation of the information.

Figure: Tree Map that visualizes relative population sizes across regions (Asia and Pacific account for more than half of the world's population) and countries (China and India host about two thirds of the population of Asia and Pacific). The largest population sizes outside that region are found in the USA, Russia and Nigeria. Note: the area size of a rectangle is proportional to the population size. Clicking on any rectangle triggers a drill down that will present the next level of detail: countries within a region and the main cities population-wise per country.
Visualization can be used to make information easier to digest in very operational ways, such as finding information. The iPhone for example allows me to not only browse through my photographs in a long list with meaningless names along with file size and timestamp. It presents the Photo Roll - a list of thumbnails, not very useful with my 4000+ photographs - and a geographic presentation of where the photographs were taken - see the next figure. Using this map based overview, I can quickly drill down to a specific location and isolate only the pictures taken at that location.

Figure: Visualization used to quickly locate and drill down to 1 out of 4387 pictures based on the location (Malta)

For a long time, charts were used in BI applications, but not so much in normal OLTP applications. Technology restrictions existed that made creating and embedding those charts cumbersome and that caused problems with on the fly data aggregation. And coming from the "window on data" approach to user interface design, charts were not an obvious choice. Today's technology does not offer (m)any constraints; HTML 5 has all the facilities we need to produce quite spectacular visual displays of charts and other presentations and interactions. Processing data to feed the visualizations, even in real time, is typically not a serious challenge. Designing the relevant visualizations and analyzing which data to use for feeding them is probably a much harder challenge - and one well worth addressing.

Figure: Screenshot from Oracle HCM Cloud R8 - Simplified UI with in context Visualization for quick aggregated data interpretation and interaction
Gamification
People like to engage in games. They create little contests everywhere: small bets, highest number of whatever, first to reach. From my days at Oracle I remember the frenzy around who would be the one to record bug number 1,000,000. It is part of how humans are wired.

This penchant for gaming can be leveraged in enterprise applications. Gamification, therefore, is the application of game design principles to business software. Gamification motivates players to engage in desired behaviors by taking advantage of people's innate enjoyment of play. Simple elements like scoring points for completing tasks [in time] and introducing leader boards may already help stimulate users to improve their performance. Just like the numbers of tweets and followers have an effect on the average Twitter user, so will a well-chosen equivalent in the enterprise application influence the actions of the enterprise app users. Creating epic stories - or a journey-like equivalent with explicit challenges and milestones to represent business processes - has been shown to engage and motivate users.

Gamification will frequently work with visualizations, to create appealing and easy to interpret representations of results and current status. The next figure shows an example of how Oracle intends to introduce game elements into a future release of Oracle HCM Cloud to engage employees in personal health and fitness as well as in skill training.

Figure: Simplified UI, personal pages and gamification aspects announced for Oracle HCM Cloud
Mobility
Mobility is another cornerstone of the "simplicity, mobility and extensibility" tag line of the Oracle UX team. It refers to the fact that users interact with enterprise applications at almost any time and from almost any place, using a device that is convenient to them given those circumstances. From a user experience perspective, we need to work from that situation - and we can even leverage it.

The variety in devices that people use to connect to the internet and interact with a variety of apps, social media, web applications and also enterprise systems is rapidly growing. Smartphones, tablets, media players, desktops, displays in cars, wearables (shoes, glasses, garments-with-sensors, watches), kiosks and other contraptions are used to engage with automated systems. And through these mechanisms, various modes of interacting are used - including typing, mouse controlling, swiping and gesturing, voice control, arm and leg movements (for example the Kinect), head/eye coordination, simply walking by detectors, and the giant button that can be operated by any body part.

Figure: alternative interaction channels & devices
End users make use of multiple devices to interact with the same business process. One part of the process is handled on the desktop, the next on a smartphone or tablet and yet another through a voice controlled telephone system or a fingerprint or iris scan based authentication. This means that the user experience presented by our applications has to function on - and be tailored to - a potentially wide range of devices and interaction styles. The data used through the application on the various devices is available on all devices - and therefore on none. The cloud is the typical place for data such as in flight transactions, custom preferences and my personal notes and contacts.

Apps are not only used at any time, they are also active at any place. This may include places where on-line usage is not possible or connectivity is limited. Because of this - and because a local cache may be unavoidable to ensure decent performance - the enterprise apps will make use of local data storage and face synchronization challenges. On the brighter side: the fact that devices are used at any location, combined with the fact that the device itself is location aware, opens up opportunities for applications to further facilitate the end user - by providing location based information, such as showing the information about the object to inspect that I am currently close to, or by understanding that the meeting summary I am about to enter is in the context of the customer on whose premises I am located.

Other capabilities of the plethora of devices can - and should - be leveraged as well. Collecting audio, image and video associated with location, for example. Or starting phone calls or other forms of conversations in the context of a business application. Navigation instructions to the customer location that is selected in the enterprise application. Taking physical measurements of temperature, distance, speed, noise levels etcetera and feeding them into the enterprise application without human intervention.

Because of the omnipresence of devices - and therefore of the enterprise [application] - business processes can be conducted much more rapidly. Users are in almost constant touch; communication, conferral and decision making can take place much more rapidly. Humans are still a slowing factor in the overall process - we simply cannot compete with a well programmed computer - but we can increase our performance quite dramatically.

Mobility requires applications to be designed either to run on a wider range of devices - adaptive and/or responsive design - or to be designed specifically for each device. The latter obviously means being able to make more use of the specific device features, at the cost of creating less reusable apps. For specific tasks that are performed very frequently (volume) or whose performance makes all the difference (raw speed), it may be well worth it to select a specific device and create a dedicated app to run on it. In many other instances, creating a standards based application - HTML 5 - that has the ability to adapt to the device form factor,
screen size and interaction mechanisms - such as mouse vs gesture - results in the right balance between development and maintenance effort and a tailored, device specific user experience.

The Oracle UX team adopts a tablet-first design approach, especially for the 10% of the functionality from the 90:90:10 rule. The UIs designed for a tablet will also render perfectly fine on a larger desktop display. They are simple (as well as inviting and appealing) on the big screen as well as on the small one.

The power UIs used by the power users - those professionals that work with the enterprise applications 90% of their time - will typically run on powerful desktops with large screens, with most of the interaction mouse and keyboard driven. These interfaces have less of a mobility requirement, at this moment.

Customization
One size does not fit all - at all. Having said that, creating individual applications for each niche of users, roles, departments, devices, screen sizes, cultural backgrounds and geographic locations is simply not realistic. What we need therefore is a way to create applications that know how to adapt: depending on the context in which they are used, they take on an appropriate disguise.
Responsive and adaptive design are approaches specifically targeted at screen size and form factor. These are part of this chameleon like behavior that we want to instill in our applications. To make our applications align with role, geographic location, language and culture, division/department and other context factors, we have to embed customization into the application. This means that during design and development of the application, starting from the base functionality and look and feel, we define along each of the customization dimensions - such as region/country/language, industry, role - what the specific adaptations should be in the application's behavior and look & feel. These context specific modifications on top of a core application are called customizations.

Having identified the customizations, we also need an infrastructure in our application [platform] that allows us to apply the relevant customizations at run time, depending on the actual context of a specific user session. Of course every concurrent user session may have its own distinct set of customizations applied, depending on its own distinct context.

In the world of Fusion Applications, any change on top of the core product is seen as tailoring. Various types of tailoring are identified:
• Configuration - Setup steps made by customers to alter the applications in a way that has been pre-defined by the base product.
• Customization - All changes to existing artifacts, either at design time or run time.
• Extension - All creations of new artifacts.
• Personalization - Changes made by self-service users at run time that only affect that user. Can be made to new or changed artifacts.
• Localization - Changes to provide specific functionality for a given country or region, typically created at design time by product development or as third-party extensions.

It is important to realize that in a cloud environment - where a single application instance is used by multiple organizations - some of the tailoring steps are not available. Configuration, for example, can only be done to a fairly small extent per organization. Design time customizations cannot be created per organization. However, all run time tailoring steps are available in a SaaS environment as well as in the on premises situation.
Extensibility
In the tag line "Simplicity, Mobility and Extensibility", the latter is the catch all term for any modification a business user may want to make to the core application. These changes can be for an entire organization unit, for a small team or for an individual. These changes are made to improve the implementation for the users involved. They allow the user to work with a more simplified experience - by leaving out elements that the user does not have a need for - or a more tailored experience - through a presentation that is more intuitive to the user. Common examples of such changes are altering the terminology (text in prompts, titles, hints, ...), hiding items, reorganizing elements and fine tuning rules for highlighting and alerting. True extension takes place when users create new elements - from additional derived fields to user defined flex fields, custom data filters and reports, and even entirely new and integrated business objects with associated pages.

The simplest form of customization available to any organization using a standard application would be the ability to define the visual style of the application for the organization - using the logos, colors, fonts and other organization specific display style elements.

More elaborate customizations and personalization require a more sophisticated mechanism that has to be embedded into the application. Oracle Fusion Applications have customization as an intrinsic part of both the applications and the application platform. Most of this platform capability comes from core ADF features along with the MDS (Meta Data Service), complemented with business composers. The customization framework in ADF supports both design time customizations (created by developers) as well as run time customizations and extensions (potentially created by business representatives).

At run time, the business composers in Fusion Applications are available to create customizations. These composers are also available outside of Fusion Applications, in various Fusion Middleware products such as WebCenter Portal (Page Composer and Data Composer), BI EE (BI Composer) and BPM Suite (Process Composer). These can be used in custom applications.

Figure: Appearance customization in Release 8 of HCM Cloud and Sales Cloud
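To give a feel for what this customization framework looks like to a developer: in ADF, each customization layer is identified by a "customization class" that MDS consults at run time to decide which layer value applies to the current session. The base class and its three methods below are part of the MDS API; the "site" layer name and the system-property lookup are my own simplifications for illustration.

import oracle.mds.core.MetadataObject;
import oracle.mds.core.RestrictedSession;
import oracle.mds.cust.CacheHint;
import oracle.mds.cust.CustomizationClass;

// Illustrative MDS customization layer: all users of one "site"
// share the customizations recorded in this layer.
public class SiteCC extends CustomizationClass {

    public CacheHint getCacheHint() {
        // The layer value is the same for every user of this deployment
        return CacheHint.ALL_USERS;
    }

    public String getName() {
        return "site"; // the layer name referenced in adf-config.xml
    }

    public String[] getValue(RestrictedSession session, MetadataObject mo) {
        // Simplified lookup: which site is this instance running for?
        String site = System.getProperty("app.site", "default");
        return new String[] { site };
    }
}

At run time, MDS merges the base definition of a page or task flow with the customization documents of every matching layer - which is how one application instance can appear differently per country, role or site.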
In the HCM and Sales Cloud products, on the Structure page, a business system analyst can reorganize how the pages will appear in the user interface by simply dragging and dropping them around the page. Renaming functional areas and pages is as easy as typing over existing names.

Figure: Customizing the appearance of pages in the simplified UI of HCM Cloud R8

Not only does the Applications User Experience team thus provide the ability to easily tailor applications with the simplified user interface - using composers, for the business analyst - but the team also provides guidance for more complex extensions through its UX Direct program.

Personal cloud
The personal cloud is the back end tied to a specific user that allows the user to move across devices while participating in a business process or transaction - for example, to allow a shopping basket to be manipulated on different devices. The personal cloud is somewhat similar to session state in web applications. However, because it stretches across devices - and therefore clearly across physical sessions as well - it has to be handled in a special way. Part of the personal cloud is also the collection of user preferences and user specific extensions that govern the personalized look and feel a user experiences when accessing applications. When a user specifies on one device that she wants to hide or reposition a field in a page that supports a certain task, then that same configuration should be applied to that page on other devices - and even in a different app supporting the same task.

Of course any customization made to the enterprise application today should continue to exist across new releases of the core enterprise application. My customizations in general should be carried forward until such time as they do not make sense any longer - for example when my customization attempts to hide a field that is removed from the base
product altogether. Whatever the mechanism used to record and apply the customizations, it should be able to work across upgrades of the application.

The exact implementation of the personal cloud can vary. It could be held in a public cloud environment - provided it is secure - or be located in the private cloud of the enterprise data center, where it can be stored in various ways. Data in the personal cloud has to be rapidly available because it is at the very forefront of the user experience. The personal cloud is implemented through the MDS (Meta Data Services) in Fusion Middleware.

Rapid evolution
The way our users perceive applications is going through a major change - leap frogging from the mid-nineties way of enterprise IT thinking to the consumer style approach of simple, mobile and extensible. What is more: the change is not finite. With the ongoing evolution of technology and expectations, we will not reach a new status quo with our applications and the user experience they offer. Users have come to expect continuous change - a new version every few months at least, perhaps far more often.

Last week, I got introduced to a Dutch bank that uses continuous improvement and delivery to rebuild its customer web site every 15 minutes (in the development environment) and that releases to production every two weeks. This of course takes a large degree of automated testing and a very well organized DTAP environment and process.

In general, we are going to think differently about applications. Instead of the large enterprise applications of today, we will see a proliferation of small enterprise apps. These apps support a relatively small task or process with a tailored user experience, running against a highly reusable back end with services and processes containing the business logic and consolidated data. The apps are welded together in the run time environment to collectively form the user experience for an individual user. Each user may work with a different collection of apps.

Enterprise apps should be small, focused chunks of functionality that are used in an enterprise environment yet offer a consumer experience, similar to well-known apps from iTunes, Google Play or other app stores. These apps should require no training for the end users and be as intuitive as an iPad app. The apps should efficiently use information to help the user derive insight and from there proceed to decision and action. Data is not relevant; it is merely the raw material that users typically are not interested in and should not be bothered by. The app guides the user to what [information] is relevant - for example because it requires an action (pending deadline) or a review (threshold crossed).

Just like consumer apps, these enterprise apps will typically have a rapid evolution. They should also be considered almost throw away software: if an app does not fully satisfy the users' requirements, an organization should have no qualms about replacing it with a new incarnation of the same or even an entirely different app.
An important part of the user experience of consumer apps - and therefore of the next generation of enterprise apps - is the notion of real time. Push notifications, informing the user of events almost instantaneously, are at the heart of many consumer experiences. From email and WhatsApp to Wordfeud and Twitter, these notifications drive much of the user's actions. A similar experience is sought for the enterprise apps. New tasks, questions and status changes should be handed to the enterprise user in near real time. The enterprise users may well require collaboration in the enterprise environment in a similar style as they are used to in their social media dealings (Facebook, Twitter, LinkedIn, and so on). Or they may even want the worlds to fuse together - merging notifications from their personal sphere with those that are work related.

Distribution of enterprise apps - especially those that are used natively on devices - is a special challenge, one that increases with the high app turnover rate that is envisioned. When talking about continuous build, delivery and improvement, we have to ensure that distribution of the app when it is released is very much part of that effort.

How to get going
User experience as described in this article is applied by Oracle itself - to its Cloud Applications to start with, and increasingly to all its products. We can do much the same in the custom applications we create. The same principles apply - such as 90:90:10, simplified UI, one size does not fit all, simplicity, mobility and extensibility, visualization, glance|scan|commit.

If you are building new applications, you have a great opportunity to imbue the applications with an optimized user experience from day one. However, even if you have an existing application that you cannot completely overhaul, there is still much you can do - as Oracle has demonstrated with the Simplified UI that is basically a wrapper around or on top of an existing enterprise application that - while not necessarily ugly - is not designed according to the latest insights either. Such a Simplified UI, based on 90:90:10 and offering intuitive paths into the existing application, is relatively easy to achieve. To a large extent, such a UI could be created using a different technology from the base application that it wraps, because the two are linked but not interwoven in the UI itself.

The Simplified UI in Oracle Cloud Applications is created using ADF (Application Development Framework). Relatively new components - SpringBoard, Cards, PanelDrawer and Vertical Tabs - along with the Data Visualization Tags (DVTs) are used heavily for creating this UI. These components are available to anyone as part of ADF and even ADF Essentials (the free edition of ADF). Oracle is working on a special developer's kit for Simplified UI - a soon to be released toolkit that helps developers quickly create their own simplified UI.

The Oracle Applications User Experience team shares many resources on its UX Direct website (http://www.oracle.com/us/uxdirect). On this site, the user centered design process is detailed, Design Patterns and Guidelines are introduced, and many tools - such as templates and checklists - are
provided, to help ensure that no essential steps in the design process are missed.

On a practical level, for example, the extensive set of Oracle's ADF Rich Client User Interface Guidelines (http://www.oracle.com/webfolder/ux/middleware/richclient/index.html) will be valuable for many ADF UI developers.

Another interesting resource is the Oracle Usable Apps For Developers section (http://bit.ly/1eppv2K) that introduces and guides developers into the use of the UX concepts and the best practices Oracle provides.

Figure: The Oracle UX Direct Design Process poster that can be downloaded from http://www.oracle.com/us/uxdirect
In her blog article "Six Things You Can Do Today to Jump-Start Your User Experience for Enterprise Applications" (https://blogs.oracle.com/VOX/entry/six_things_you_can_do), Misha Vaughan from the Oracle Applications User Experience team explains how good usability practices are completely possible even on the smallest budget, and with no UX staff. She introduces six steps that are available to any organization at the cost of only a little time:
1. Identify - who are the users of the application? Per role: what do they do - how/when/why/with what? Use the cheat sheet from UX Direct in this step.
2. Work smarter - jump start with the design patterns already developed and proven by the UX team and available from UX Direct.
3. Sketch - create wireframes before starting to actually code.
4. Visual Design - think about general visual principles including color and order of content (check https://www.youtube.com/watch?v=kNcM8rwz5gQ&feature=youtu.be for an introduction).
5. Get feedback on the wireframes and the visual design from real end users - but do not let the designer who created those interview the end users herself, to prevent biased results.
6. Iterate. Re-design and re-test, as resources permit. Do not wait until the entire design (and even development) phase is complete before reconnecting with the end users.

Fusion Middleware and other Oracle technologies for UI development
Oracle offers various tools and technologies for developing and running user interfaces.

Since 2004 - first as HTML DB - there has been APEX: browser based development and run time, ideally suited for rapid development and delivery. APEX leverages HTML 5, increasingly more so in the APEX 5 release, due later this calendar year. Note that APEX can also be used to implement the REST services on top of the database that other UI apps may want to leverage. APEX plays an important role in the Oracle Database Cloud - both for the administration of that cloud environment and for the development of cloud based UI applications and REST services. The latter can be consumed by components running on the Oracle Cloud, some other public cloud, or on premises - or by UI apps running on a desktop or mobile device.
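To illustrate that last point, here is a small sketch of a Java client consuming such a REST service. The endpoint URL is entirely hypothetical - substitute whatever RESTful service your APEX (or other) back end exposes.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestClientSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint exposed by an APEX RESTful service module
        URL url = new URL("https://example.com/apex/hr/employees/");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("GET");
        con.setRequestProperty("Accept", "application/json");

        // Print the JSON payload; a real client would parse and render it
        BufferedReader in = new BufferedReader(
            new InputStreamReader(con.getInputStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
        con.disconnect();
    }
}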
Oracle's premier application development framework is ADF, used for the development of the vast majority of Oracle's own user interface applications, such as Fusion Applications. ADF provides several options for developing user interfaces:
• ADF DI (Desktop Integration) for creating Excel applications against an ADF back end
• ADF Swing (deprecated as of release 11gR2)
• ADF Faces for implementing rich Java EE web applications
• ADF Mobile for developing semi-native, cross device mobile apps
ADF Faces is currently by far the most widely used of these options. ADF Faces 11g has been available since 2008. It is based on the Java EE standard of JavaServer Faces. That very name reveals a lot about the architecture of ADF Faces: even though the client has become richer (with increasingly more dynamic HTML manipulation going on in the client and more client/server interaction handled through background, AJAX-style interactions), the role of the server is still very large. ADF Faces applications are stateful, with session state being held on the server. The per-session footprint is quite substantial with ADF Faces user interfaces. This architecture is very useful for large transactions with complex business logic and data intensive operations by power users. It can be much less useful for light weight, read only, self-service style applications.

ADF Faces is further evolving, for example with explicit support for tablets (including touch based interactions and adaptive flow layout), use of HTML 5 for rendering of data visualizations, and some streamlining for better use of ADF Faces for public sites (for example a smaller initial JavaScript footprint).

The extensibility of UI apps is supported in ADF Faces to a large degree. Both personalization at run time and customization - adapting the application at design time or run time for specific roles, user groups, locations or other conditions - are catered for. Oracle provides ADF Faces developers with many facilities to build dynamic customization into their applications, for example to enable application managers or end users to hide fields, reposition page elements and change prompts and other boilerplate text at run time.

Creating a simple, mobile and extensible UX is very possible with ADF Faces, as demonstrated for example with the FUSE style in Fusion Applications HCM R7. ADF Faces components springboard, paneldrawer and vertical tabs, for example, are used to create the icon rich, intuitive user interface that very naturally guides a user to a specific action.

Oracle launched ADF Mobile in 2012. Through ADF Mobile, developers can create a cross device mobile app that renders HTML 5 and also has access to on device services such as email, contacts, camera and GPS. ADF Mobile apps are developed in JDeveloper in a way that is very similar to the ADF Faces development experience. ADF Mobile apps run in an on device Java Virtual Machine. They access backend services - frequently the same RESTful services accessed by rich HTML 5 apps.

There seems to be a move within Oracle - not yet formally announced - to rebrand this mobile solution to Oracle Mobile Development Framework, to position it more broadly as the strategic solution from Oracle for developing mobile apps and not focus too much on the existing ADF developers community. The Oracle Mobile Cloud platform is closely associated with this initiative.
The Oracle Mobile Suite has been announced, and is available for download as well as on the pricelist. This suite contains ADF Mobile, the Oracle Service Bus as well as all Applications Adapters. At this point, it seems nothing more than a bundling of existing components that enable development of mobile solutions, albeit at a much higher price than the sum of the individual components. For now it seems primarily a marketing statement about the prominence of mobile development - front end (UI) and [especially] back end - in Oracle's product strategy.

Note that NetBeans, one of the IDEs offered by Oracle and part of the Sun Microsystems inheritance, has strong support for HTML 5 development, including JavaScript and CSS 3. One of its features is live web preview: two-way integration between the Chrome browser and the NetBeans IDE, meaning that every change is exposed instantaneously in the browser and that any DOM element in the browser can be traced back to a code line in the IDE. NetBeans also offers previews for many different page sizes, ratios and resolutions to inspect a UI design for many different devices. A RESTful JavaScript Client wizard is also part of NetBeans, allowing generation of JavaScript code snippets for interacting with a RESTful web service. See https://netbeans.org/features/html5/ for details.

Summary
Whether you have read this article on paper, on your e-reader, desktop browser or tablet, on your smart phone while riding the subway, or having it read out aloud to you in the car - the fact is undeniable that there is an increasing number of channels through which users interact with IT systems. Any user may use a range of different devices, even for performing the same task. Each device requires a device specific style of interaction - from mouse to voice driven, from hand gestures to head shakes.
Oracle NoSQL
PART 2 James Anthony
www.e-dba.com
twitter.com/jamescanthony
www.linkedin.com/pub/james-anthony/1/3a4/101
In the first part of this series we discussed the basic types of NoSQL database. What I don't think I made clear at the time is that even where databases fall into the same category (take for example Cassandra and HBase in the column family database category) they aren't architected the same and may have different capabilities. This shouldn't come as a major surprise, as anyone in the RDBMS world knows Oracle and SQL Server are two vastly different beasts that have the paradigm of being Relational Databases in common.

In this article I'll first discuss CAP and ACID and how these were perceived as weaknesses of the RDBMS that led to the development of NoSQL databases; then we'll dive in further to explore the Key-Value type and in particular Oracle's implementation, the Oracle NoSQL Database.

I'm pretty sure that most people reading this article have at least some familiarity with both CAP and ACID, but let's just do a quick recap.
CAP theorem is based on Consistency, Availability and Partition tolerance, and deals specifically with distributed databases. Up until this point I'd not really talked too much about distribution of databases, so at this juncture it's worth bringing it up. NoSQL databases are typically (although not always) used in a distributed manner, with database servers that are physically separated coupled together to form a single logical entity. Giving an example from the relational world, you could think of Oracle master-master replication as just one example of a distributed database system, with multiple geographically separated databases acting as a single entity.

The need for distribution arose because many of the NoSQL databases in use today come from large internet scale organisations, and therefore the ability to distribute databases across multiple data centres, in multiple countries, was a primary goal - to ensure a) high availability, b) global data distribution for both locality of service and data protection, and c) suitable load balancing of work so that no single location/DC/server represents a pinch point.
In CAP Theorem we discuss a system having these three guarantees:
• Consistency: when a system is clustered/distributed, the ability of all nodes within the distributed system to see the same data at the same time
• Availability: the ability of the system to service requests for data
• Partition Tolerance: the ability of the distributed system to deal with loss of some part of the overall solution, such as a message or node

What you'll notice about these is that they are functions of the distributed database in general. A full debate on the proof of CAP theorem is beyond this article; in fact this is a debate that still rages for many people and looks set to continue to do so. Briefly put, the view was/is that only 2 of these 3 tenets can be maintained at any time. Let me give you an example in the traditional Oracle world to better illustrate.
Take an example of a 2-server configuration with plain old Oracle replication configured. In this environment we have our first decision to make, namely that over consistency. If we want both nodes to always see the same data then we need to configure synchronous replication (2 Phase Commit - 2PC) such that any transaction that gets committed on either node is immediately and synchronously replicated to the other node. If we don't, and choose asynchronous replication instead, we have relaxed one of our guarantees immediately and we risk an inconsistent view of the data depending on which node is queried.
Figure 1:
1) Transaction A comes in from a user to the left hand side database. Because no 2PC is in place, the transaction is queued for later delivery to the second database
2) Shortly after the transaction commits on the left hand side, a user connects to the right hand side database and queries the information. They are given the before image
3) Asynchronous replication now occurs and the right hand side is updated - BUT our user has seen an inconsistent view of the data
So now we have 2PC configured, and we adhere to our consistency guarantee, but we've broken one of our other guarantees - that of Partition tolerance. Why? Because in order to preserve consistency across the nodes, 2PC requires that the data is committed on both sides. Therefore loss of one side has an impact on the ability of the other side to process, stopping it from doing so! If we take the alternative model and choose asynchronous replication, we can continue to operate if one side is down (and therefore achieve partition tolerance) but we lose our consistency guarantee in order to uphold this. Therefore the general principle is that in CAP theorem we can provide a 2 of 3 model, but must relax one of the guarantees.
Not being completely relevant at this point, but worth introducing, is a phrase you'll hear a lot working in and around NoSQL solutions: "Eventual consistency". Many NoSQL databases relax the consistency guarantee, instead offering to make the system consistent over a period of time - such as handling a failure of a node (or separation of multiple nodes due to network outage) by synchronising the data upon restoration, and therefore making the data eventually consistent. For those interested I thoroughly recommend reading the Amazon Dynamo paper, and hope that, like me, you'll be suitably impressed by the elegance of semantic and syntactic reconciliation.
http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf
Whilst CAP refers to the system in general, ACID concerns itself with transactions within the system.
Atomicity: Each transaction is all or nothing. If you are updating 50 rows, then all 50 rows update or the entire transaction is rolled back. Each transaction is atomic, that is to say indivisible.

Consistency: Unlike the C in CAP, consistency here refers to moving from one valid state to another, such that the transaction committed does not violate integrity constraints or other defined rules.

Isolation: This ensures that multiple concurrent executions of transactions will result in the same end state that would occur should those transactions be processed in a serial manner.

Durability: Once committed, a transaction will be permanent, across failures such as power or system crash. The Oracle redo log write ahead logging model is an excellent implementation of this.

Now that we've done a quick recap of the CAP and ACID properties, we will discuss how these were perceived (and notice my use of that word as particularly relevant, in my opinion) as weaknesses in the relational database that led to the development of the NoSQL movement that is so strong today.

Going back to the example I gave regarding CAP and 2PC, it is easy to see how a 2PC model is hard to envisage as a production solution for a global database with high volumes of traffic. Not only would loss of one database (or isolation due to network or other factors) stop the entire system processing, but the impact of network latency on each and every transaction would inevitably become a pinch point. Much is made in many of the NoSQL papers about the issues with 2PC and the need to move to a different paradigm, but in fact I see this as less of an issue. Oracle has always had asynchronous replication with conflict resolution rules, and more recently Streams and GoldenGate provide even more elegant solutions, all of which can be considered to provide eventual consistency with the ability to provide partition tolerance. In many cases the perceived weaknesses were in comparison to the release of MySQL that was available at the time. None the less - and leaving aside the pseudo-religious arguments, where I would no doubt start by carrying on along the rebuttal line - let's keep discussing these drawbacks and the mechanisms deployed by NoSQL to resolve them.

Figure 2:
1) A put operation changes data (insert, update or delete)
2) Replication occurs from the node receiving the put to the upper of the replicas. For some reason the lower replica is unavailable (outage or network partition) - in this case due to a network partition, meaning the lower replica is still running
3) Read (get) requests occur at both locations; notice how the lower replica will serve an older version of the data
4) At some later time the network partition is resolved and the NoSQL solution will resolve the inconsistency by replicating the change to the remaining replica. The system has become eventually consistent.
A second technical issue that NoSQL databases focus on is the need for a less rigid structure for data models than is enforced by relational models. This has seen the rise of document databases such as MongoDB and CouchBase. The standard table structure, with a relatively rigid column format, was seen as restrictive to rapid development models. Many of these databases adhere to a document data model, which stores all information in the form of documents (where all the information about an entity, such as a person, is held within a single document - an entirely de-normalised approach), but they have to sacrifice ACID properties and do away with transactions across multiple documents entirely! Others needed to cope with a variable number of columns in each row - the classic example being the number of links within a web page: each column represents a different page, but it is unclear how many links a given page might contain, therefore a flexible number of columns is required. Interestingly, many people may now be aware that Oracle is extending the database in 12.1.0.2 to support JSON document models, but with all the benefits of the Oracle RDBMS in terms of ACID support, backup and recovery, management etc.

Other NoSQL databases (perhaps most notably the BigTable based databases) needed both highly distributed and fault tolerant solutions, as well as dealing with massive data volumes and a data model that needed to incorporate a multi-dimensional element (time).

Whatever your personal view on these restrictions - and whether technology, both software and hardware, has improved to get around some of them - it is unquestionable that the NoSQL database is here to stay. In the next part of this article we will discuss the Oracle NoSQL implementation.
ORACLE NOSQL DATABASE
Oracle NoSQL Database is a Key-Value (KV) store, much in the same vein as Amazon Dynamo and Voldemort, but with some significant advantages we will discuss later on. In the previous article in this series we discussed what KV storage looks like, so hopefully you can remember that far back! The salient points are: the value can be anything we like - a simple alphanumeric value (a name perhaps), a serialisation of a session state (a really good use case), or an object such as a JSON document (which I will use in many of my examples). Data is queried using a "get" command (passing in the key) and written to the database using a "put" statement (passing in the key and value, obviously); there is no query language in the vein of SQL.

The Oracle NoSQL database is designed from the ground up to be distributed. In fact, whilst there is a "lite" version that can run on a single node for testing, you really aren't going to deploy the Oracle NoSQL Database in a guise with less than 3 nodes (I will shortly discuss the architecture of the solution). This means that the concepts of replication and consistency immediately come into play.
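Before diving into the architecture, a minimal sketch of that get/put interaction using the Oracle NoSQL Database Java driver may help. The store name and helper host:port below are the KVLite defaults; a real deployment would list several storage nodes.

import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;

public class HelloKVStore {
    public static void main(String[] args) {
        // Obtain a store handle: store name plus one or more helper hosts
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "localhost:5000"));

        Key key = Key.createKey("hello");                 // a single major component
        Value value = Value.createValue("world".getBytes());

        store.put(key, value);                            // write by key - no SQL involved

        ValueVersion vv = store.get(key);                 // read back by key
        System.out.println(new String(vv.getValue().getValue()));

        store.close();
    }
}

Everything discussed next - storage nodes, partitions, shards and replicas - is about what happens behind that store handle.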
Architecture
Firstly, a big thanks up front to the Oracle NoSQL product management team: I'm going to be using their illustrations throughout this article to save me the need to recreate them. One of the things you'll notice when you look at the docs is a lot more diagrams than the RDBMS has in its documentation these days, and I think that's a great way to illustrate what is a new topic to most people.

Within the Oracle NoSQL database, the database is referred to as the KVStore (Key-Value Store), with the KVStore consisting of multiple components; at a high level the store is broken down into multiple storage nodes. A storage node is a server (or a VM) with local disks - so unlike RAC there is no need for a clustered file system, SAN or NAS; you just provision local disks, allowing deployment on truly commodity based hardware.

Within each storage node are replication nodes. We're going to discuss replication shortly, but all you need to know at this point is that the number of replication nodes within a storage node is determined by its capacity. This gives Oracle NoSQL the ability to run across a bunch of nodes that have different capacities (based on CPU and IO specs), meaning you can start small and grow the cluster out with newer hardware without worrying about the new kit being constrained by the metrics of the older servers.

Going down one more level, each replication node hosts one or more partitions (almost always more than one partition).
So, let's get back to some NoSQL concepts and explain partitions and sharding in this context.

Partitioning
We've already discussed how NoSQL solutions are typically distributed; the question therefore becomes how the system decides how to distribute data. This is called partitioning.

When a value is stored, a hashing algorithm is applied to the key and a partition ID is derived based on this. A few relevant points are:
• A single partition will contain multiple keys
• A single replication node can contain multiple partitions

This last point is especially relevant, and you will want multiple partitions per node. Why? Well, let's say we start with a 4-node cluster and we define 4 partitions. As key/value pairs are inserted, the hashing algorithm will equally balance keys between the different nodes - all good so far. Then we decide we want to scale out and add more nodes, so we bring in two more servers; but how can we spread our 4 partitions across these now 6 nodes? The answer is we can't, as the number of partitions is fixed. Compare this to a situation where we defined 24 partitions: in the initial 4-node cluster each node would have 6 partitions (4 nodes * 6 = 24). Then as we expand the cluster with two additional nodes, the partitions just move around and we end up with 4 partitions on each node (6 nodes * 4 = 24). These different storage nodes are referred to as shards (as we have separated the data physically, with each node having access only to its shard of data).
Like some other NoSQL implementations the Oracle solution has a concept of a single write master for data. For each shard a master node is elected to which all writes are performed; the master node then copies the data to the replica nodes (later on we will discuss how this can be controlled). Whilst write traffic is performed against this single node (and remember this is a single node per shard, so having multiple shards means we balance write activity for different keys across multiple nodes), reads can be performed against any replica in the shard, which allows us to horizontally scale read workloads. Given this is Oracle you probably expect it anyway, but just to state it explicitly: a failed master node will automatically be detected and one of the replica nodes will then become the new master, all transparently happening in the background.
By balancing multiple shards and multiple partitions we can ensure we have sufficient capacity for write activity and future expansion. A key feature of the Oracle NoSQL Database is the ability to horizontally scale read workloads using this method, and scaling is indeed linear in this fashion.
Replication
The observant amongst you are probably already thinking: if the data isn't shared, how is it protected? This is where replication in NoSQL solutions comes in (here we will discuss the Oracle NoSQL implementation, but it's similar for most of the NoSQL solutions out there). Within the Oracle NoSQL Database you configure a replication factor for each KVStore, which controls the number of other storage nodes onto which the key/value pair will be copied (this is why we have replication nodes inside storage nodes).
Take a look at Figure 1; from this you can see how data is copied from the master to two other nodes based on a replication factor of 3.
Major/Minor Keys
One of the key features of the Oracle NoSQL database is support for multi-part keys, with the ability to specify both major and minor components to the key. Let's build an example to illustrate, based on one of our real world deployments.
Imagine we are processing an incoming feed of information relating to sporting events. We will use soccer (football to us Brits!) as the game in question. The feed sends us real time information on the events happening within the game, such as a goal, free kick, throw in etc., and we want to process some type of action within our application based on this.
Firstly the feed provides us with an ID for the tournament or competition in question (World Cup, Premier League etc.), allowing us to identify which tournament any incoming entry is for. Each incoming entry also has a unique identifier for the given match/fixture (we can have multiple matches happening at once and clearly will have a large number of matches over time). We can see that we've got two parts to our key already:
Part 1: The competition ID
Part 2: The match ID
Within the Oracle NoSQL database this is easy to process, and I'm going to use some pseudo-Java code to build the example:
// We create an array of String (VARCHAR2-style values) for the key
ArrayList<String> majorComponents = new ArrayList<String>();
// Define the major path components for the key
majorComponents.add(competitionId);
majorComponents.add(matchId);
// Create the key
Key myKey = Key.createKey(majorComponents);
// Do some work here to define the value we store
...
// Now store the key value with a simple put request
NoSQLDataAccess.getDB().put(myKey, <VALUE>);
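A note on NoSQLDataAccess.getDB() in these examples: it is not part of the oracle.kv API itself, but presumably the author's own wrapper class. A minimal sketch of what such a wrapper might hand back, with made-up store and host names, would be:
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
// Connect once and reuse the handle; "mystore" and the host:port
// pairs are placeholders for your own deployment
KVStore store = KVStoreFactory.getStore(
    new KVStoreConfig("mystore", "node1:5000", "node2:5000"));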
However, within each match I now have multiple events coming in, each again with its own unique event ID. This is where I can use the minor key component, adding this as the minor key element:
ArrayList<String> majorComponents = new ArrayList<String>();
// This time we also have a minor components element
ArrayList<String> minorComponents = new ArrayList<String>();
// Define the major and minor path components for the key
majorComponents.add(competitionId);
majorComponents.add(matchId);
minorComponents.add(eventId);
// Create the key
Key myKey = Key.createKey(majorComponents, minorComponents);
// Do some work here to define the value we store
...
// Now store the key value with a simple put request
NoSQLDataAccess.getDB().put(myKey, <VALUE>);
So what's the advantage of using the minor key like this? Well, let's take the situation where, later down the line, I want to get ALL the events for a given match. What I can do now is execute an operation to get the events using just the major key:
majorComponents.add(competitionId);
majorComponents.add(matchId);
// Create the retrieval key
Key myKey = Key.createKey(majorComponents);
// Now retrieve the records.
SortedMap myRecords = NoSQLDataAccess.getDB().multiGet(myKey);
Or perhaps I want to get all of the match IDs for a given tournament (for example as part of the process to show a historical fixture list). In this example I can do a get operation using only the first part of the major key:
majorComponents.add(competitionId);
// We no longer need the following line for the 2nd part of the key
// majorComponents.add(matchId);
// Create the retrieval key
Key myKey = Key.createKey(majorComponents);
// Now retrieve the records.
SortedMap myRecords = NoSQLDataAccess.getDB().multiGet(myKey);
I can also use the full major and minor keys, again using an example; in this case to check if we've already received this event for the given match in the given competition (in order to perform duplicate checking):
ArrayList<String> majorComponents = new ArrayList<String>();
ArrayList<String> minorComponents = new ArrayList<String>();
// Define the major and minor path components for the key
majorComponents.add("Game");
majorComponents.add(competitionId);
majorComponents.add(matchId);
// Add the event ID as the minor key
minorComponents.add(eventId);
// Create the key
Key myKey = Key.createKey(majorComponents, minorComponents);
// Now retrieve the record. We use a single get as opposed to a
// multi-get here as we only expect one value
ValueVersion vv = NoSQLDataAccess.getDB().get(myKey);
A quick but very important note: the V3 release of the Oracle NoSQL Database provides a table mapping feature and much of this key design goes away! We'll discuss just how powerful this is in a future article.
CONSISTENCY AND DURABILITY GUARANTEES
Going back to the discussion on replication, one of the features of many NoSQL databases is that, unlike a traditional relational database, it is possible to choose the level of durability (write persistence) and consistency (read consistency) of the data at system level, and then override this at per-operation level.
Let's explore how that works. Firstly, remember back to when we discussed replication and replication factors: in our example we had a replication factor of 3. This means that once the data has been written to the master node, it will then be written to two additional replica nodes. Clearly doing these extra write operations has an overhead, especially if the nodes are separated by any distance, due to network latency. In certain cases we may not want our transaction to wait for these additional write operations to complete, so we can tune our durability policy and change this to a different value. If we choose a durability value of 1, then once the data is written to the master node, the operation will return control to the calling program, with the replication happening in the background.
In the Oracle NoSQL database we have 3 acknowledgement-based durability models:
Master Node only
All Nodes (in the replica set), basically enforcing synchronous replication
A majority of nodes (in the replica set)
Additionally, the durability policy in Oracle NoSQL also allows you to control the level of write persistence for the write operations to the master and replica nodes. You can do this by choosing whether the data is written to a) the local in-memory buffer, b) the OS buffer cache, or c) whether we wait for it to be written all the way to disk. We will show some of these examples in a later article, but suffice for now to say they offer a great deal of flexibility in trading off performance and durability. The ability to control these at the transaction level allows for certain operations to be performed in a fail-safe mode, whilst others can sacrifice durability in favour of performance.
VERSIONS AND CONSISTENCY
Another concept of NoSQL databases that is slightly different to that of the traditional relational database is that of versions. When we insert data into the KV store it is implicitly given a version in the system. Let's illustrate with an example, where we insert a KV pair with a key of A and a value of B, and it is then given a version of 1.
Now, a process comes along and updates the KV pair (which really means an update to the value); we aren't concerned with what the data has changed to, just that its version has changed.
So far so good, but how does this relate to real-world usage? Let's take an example where we have relaxed our durability guarantees and we're using asynchronous replication. It's conceivable that we have different versions of the value for the key on different nodes.
Now when we read back the value for key A, depending on which node we get the data from, we get different values (remember at the start of the article we discussed eventual consistency; well, here is the downside!). So how do we manage this?
Well, this is where versions come in! When we retrieve the data we can choose to check the version number, since get operations return both the data and the record version. We can then compare this version number to what we held for the insert operation and only proceed if we are working from the current version. Again, without repeating myself, I seriously recommend you go and read that Amazon Dynamo paper, and see how semantic and syntactic reconciliation work.
Oracle NoSQL also allows for other consistency guarantees, based on:
Absolute: Read from the master node. Unlike other NoSQL solutions with no pre-defined master, this is an option. We know writes for a key will go through a master for the shard, so servicing requests from this will always return the latest, most consistent value.
None: Read from any node, without concern as to the most recent state of the data.
Time Based: This is based on the lag (as defined by time) of any replica from the master (for example, if the replica is no more than 1 second out).
ORACLE NOSQL DIFFERENTIATORS
Oracle NoSQL database has quite a few differentiators in my eyes; these include:
Enterprise Grade Support: Always a tricky subject, whether to pay for something! However, one of the problems for many organisations is supportability; after all, Google is a search engine, not a support tool. Being able to fall back on Oracle support means organisations looking to deploy in the brave new NoSQL world might have to rely on Oracle, but they can rely on Oracle.
Proven storage engine: One of the attractions to us when first deploying NoSQL was that we were already POCing based on the Berkeley DB, as we know the track record of that. Oracle NoSQL uses Berkeley DB as its persistence engine.
ACID support: For me this is the big one. Whilst I get why people wanted
to work around ACID, often stating they don't need transactions (remember most NoSQL databases sacrifice transaction support), in my experience you might not need them now, but you will at some point.
Integration: Not unique amongst NoSQL databases is Hadoop integration, but additionally the Oracle NoSQL database can integrate directly with the Oracle RDBMS (using external tables) and Coherence. Being able to cross the chasm from the NoSQL to the SQL world is just a great feature, allowing me to query across my data sources and stopping me from just getting another silo.
Transparent Load Balancing: Again perhaps not unique, but certainly not prevalent in the NoSQL world, is the fact that the NoSQL driver provided by Oracle performs all my load balancing for me (something we'll discuss in a future article).
Free: Yep! You read that right. Oracle offers the NoSQL database in two flavours: the paid-for version (which is actually not that bad by Oracle terms), but also the community edition (CE), which is totally free!
WHAT'S NEW IN VERSION 3?
Oracle recently announced the availability of Oracle NoSQL Database V3. Whilst we've had some time to look at this, we've not deployed this into
any of our existing implementations, but we plan to soon! The reason? Some of the new features provide significant benefit, including:
Increased Security: OS-independent, cluster-wide password-based user authentication and Oracle Wallet integration enable greater protection from unauthorized access to sensitive data. Additionally, session-level Secure Sockets Layer (SSL) encryption and network port restrictions deliver greater protection from network intrusion.
Usability and Ease of Development: Support for tabular data models simplifies application design and enables seamless integration with familiar SQL-based applications. Secondary indexing delivers dramatically improved performance for queries.
Data Center Performance Enhancements: Automatic failover to metro-area secondary data centers enables greater business continuity for applications. Secondary server zones can also be used to offload read-only workloads, like analytics, report generation, and data exchange, for improved workload management.
ASM_METRICS.PL UTILITY USE CASES
Bertrand Drouvot
twitter.com/BertrandDrouvot
fr.linkedin.com/in/bdrouvot
In the winter 2014 edition of OTech Magazine I introduced the asm_metrics.pl utility and explained how it works (see http://www.otechmag.com/magazine/2014/winter/OTech%20Magazine%20-%20Winter%202014.pdf#page=114 for more details).
I created this utility because, when I need to deal with ASM I/O statistics, the tools provided by Oracle (asmcmd iostat and asmiostat.sh from MOS [ID 437996.1]) do not suit my needs: the metrics provided are not enough, the way we can extract and display them is not customizable enough, and we don't see the I/O repartition across all the ASM or database instances in a RAC environment.
To summarize, the script connects to an ASM instance and takes a snapshot each second (the default interval) from the cumulative gv$asm_disk_iostat (or gv$asm_disk_stat) view and computes the delta with the previous snapshot. In this way we get the following real-time metrics based on the cumulative metrics:
Reads/s: Number of reads per second.
KbyRead/s: Kbytes read per second.
Avg ms/Read: Average ms per read.
AvgBy/Read: Average bytes per read.
Writes/s: Number of writes per second.
KbyWrite/s: Kbytes written per second.
Avg ms/Write: Average ms per write.
In this spring 2014 issue I will cover some use cases of the utility. Before looking at the use cases, let's have a look at the help page of the ASM metrics utility. In the help instructions you can see that there are a few parameters you can use with it:
1. You can choose the number of snapshots to display and the time to wait between the snapshots. The purpose is to see a limited number of snapshots with a specified amount of wait time between them.
2. You can choose on which ASM instance to collect the metrics thanks to the -INST= parameter. Useful in a RAC configuration to see the repartition of the ASM metrics per ASM instance.
3. You can choose for which DB instance to collect the metrics thanks to the -DBINST= parameter (wildcard % allowed), in case you need to focus on a particular database or a subset of them.
4. You can choose on which diskgroup to collect the metrics thanks to the -DG= parameter (wildcard % allowed), in case you need to focus on a particular diskgroup or a subset of them.
5. You can choose on which failgroup to collect the metrics thanks to the -FG= parameter (wildcard % allowed), in case you need to focus on a particular failgroup or a subset of them.
6. You can choose on which Exadata cells to collect the metrics thanks to the -IP= parameter (wildcard % allowed), in case you need to focus on a particular cell or a subset of them.
7. You can aggregate the results at the ASM instance, DB instance, diskgroup, failgroup (or Exadata cell IP) level thanks to the -SHOW= parameter. Useful to get an overview of what is going on per ASM instance, per diskgroup or whatever you want, as this is fully customizable.
8. You can display the metrics per snapshot, the average metrics values since the collection began (that is to say, since the script has been launched), or both, thanks to the -DISPLAY= parameter.
9. You can sort based on the number of reads, number of writes or number of IOPS (reads+writes) thanks to the -SORT_FIELD= parameter, so that you can find the ASM instance, database instance, diskgroup, failgroup or whatever you want that is generating most of the I/O reads, most of the I/O writes or most of the IOPS (reads+writes). A combined example follows this list.
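To illustrate how these switches combine (a made-up invocation of mine, not one of the numbered use cases that follow), the following would show, every 5 seconds, the metrics aggregated per ASM instance and diskgroup for diskgroups starting with DATA, sorted by IOPS and averaged since the collection began:
./asm_metrics.pl -interval=5 -show=inst,dg -dg=DATA% -sort_field=iops -display=avg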
Now we are ready to see some use cases. Keep in mind that the utility is not limited to those examples, as you can aggregate the results following your needs in a customizable way: aggregate per ASM instance, database instance, diskgroup, failgroup or a combination of all of them.
USE CASE 1:
Find out the top physical I/O consumers through ASM in real time. This is useful as you don't need to connect to any database instance to get this information, as it is centralized in the ASM instances.
Let's sort first based on the number of reads per second, that way:
./asm_metrics.pl -show=dbinst -sort_field=reads
As you can see, the USB3CMMO_2 instance is the one that recorded the most reads during the last second. You can also sort based on the number of writes or IOPS (meaning reads+writes).
USE CASE 2:
I want to see the ASM preferred read in action for a particular diskgroup (BDT_PREF for example) and see the I/O metrics for the associated failgroups. I want to see that no reads are done outside the preferred failgroup.
Let's configure the ASM preferred read parameters:
SQL> alter system set asm_preferred_read_failure_groups='BDT_PREF.WIN' sid='+ASM1';
System altered.
SQL> alter system set asm_preferred_read_failure_groups='BDT_PREF.JMO' sid='+ASM2';
System altered.
And check its behaviour thanks to the utility:
./asm_metrics.pl -show=dg,inst,fg -dg=BDT_PREF
As you can see, data has been read from the preferred read failure groups. We can also see their respective performance metrics.
USE CASE 3:
I want to see the I/O distribution on Exadata across the cells (storage nodes); for example, I want to check that the I/O load is well balanced across all the cells. This is feasible thanks to the -show=ip option:
./asm_metrics.pl -show=dbinst,dg,ip -dg=BDT
As you can see, the I/O load is well balanced across all the cells.
USE CASE 4:
I want to see the I/O distribution recorded in the ASM instances:
./asm_metrics.pl -show=inst
As you can see, most of the IOPS are recorded in the ASM2 instance (which means its clients are doing more IOPS than the ASM1 clients). It also means that the host on which the ASM2 instance is located is the one that generated most of the IOPS, so this can be useful for finding out which host generates most of the IOPS in a RAC configuration. (This is not necessarily true with the 12c Flex ASM feature, as the database instance could be remote to the ASM instance.) Now drill down a step further with the following use case.
USE CASE 5:
I want to see the I/O distribution recorded in the ASM instances for each database instance (which are the clients we talked about in use case 4):
./asm_metrics.pl -show=inst,dbinst
You can see, for example, how the 343 Reads/s that are recorded in the ASM2 instance are distributed across the database instances.
USE CASE 6:
I want to see the I/O distribution recorded in the ASM instances for the database instances linked to the BDT database:
./asm_metrics.pl -show=inst,dbinst -dbinst=%BDT%
This time the distribution is shown for %BDT% instances only.
USE CASE 7:
I want to see the I/O distribution over the failgroups:
./asm_metrics.pl -show=fg
Only the failgroup metrics are reported. Now drill down a step further with the following use case.
USE CASE 8:
I want to see the I/O distribution and the associated metrics across the ASM instances and the failgroups:
./asm_metrics.pl -show=fg,inst
That way you can see the I/O distribution between the ASM instances and the failgroups. Based on the metrics you can also decide whether it is necessary (for performance reasons) and feasible (enough bandwidth) to put the ASM preferred read feature in place.
Warning: Regarding the preferred read, in the case of Flex ASM watch out for unpreferred reads (see http://bdrouvot.wordpress.com/2013/07/02/flex-asm-12c-12-1-and-extended-rac-be-careful-to-unpreferred-read/).
Now drill down a step further with the following use case.
USE CASE 9:
I want to see the I/O distribution across the ASM instances, diskgroups and failgroups:
./asm_metrics.pl -show=fg,inst,dg
That way I can see that all the reads are done from the DATA diskgroup.
USE CASE 10:
I want to see the metrics for the disks that belong to the FRA diskgroup:
./asm_metrics.pl -show=dsk -dg=FRA
Remarks:
1. In the previous use cases you may have seen rows with blank values in some fields: it means that the values have been aggregated for those particular field(s). The aggregation depends on what you want to see (the -show option).
2. The use cases focused only on snapshots taken during the last second, but you could also:
Take snapshots over a longer period of time thanks to the -interval parameter:
./asm_metrics.pl -interval=10 (for snaps of 10 seconds)
View the average since the collection began (not only the snapshot deltas) thanks to the -display parameter, that way:
./asm_metrics.pl -show=dbinst -sort_field=iops -display=avg
The output reports the collection begin time.
Conclusion:
Thanks to these use cases, I hope you can see how customizable the utility is and how you can benefit from it in day-to-day work with ASM. The main entry point for the tool is this blog page: http://bdrouvot.wordpress.com/asm_metrics_script/, from which you'll be able to download the script or copy the source code.
Feel free to download it and to provide any feedback.
BUILD A RAC DATABASE FOR FREE WITH VIRTUALBOX
A STEP BY STEP GUIDE
Christopher Ostrowski
www.avout.com
twitter.com/chrisostrowski
www.facebook.com/chris.ostrowski.140
www.linkedin.com/in/ostrowskichris
INTRODUCTION
Oracle Corporation has made it incredibly easy to download and use virtually all of their software offerings via the Oracle Technology Network website. The availability of both software and documentation makes it easy for individuals and organizations to test-drive Oracle software before implementing it. For DBAs and developers anxious to learn and use new languages, development environments and software features, the fully functional software is a godsend for those who want to keep their skills up to date.
Perhaps the only real limitation to this bounty provided by Oracle is hardware. Many of the pieces of software are complex and require significant hardware investments even for just a sandbox environment (i.e. an environment that doesn't require sizing to accommodate many users logging in simultaneously). As an example, a sandbox environment with Oracle SOA Suite running on top of Oracle WebLogic Server driven by an Oracle database requires a significant amount of RAM just to run. While the sizing of said components can be scaled down, it still requires a machine with pretty significant resources.
While RAM and disk space costs have dropped significantly in the last couple of years, there is still one area where it is very difficult for DBAs to create their own sandbox environment: Oracle Real Application Clusters (RAC). Traditionally, the basic requirements for a RAC system involve two servers with a disk storage array connecting the two. While Network Attached Storage (NAS) systems have dropped in price in the last couple of years, the cost and installation are still beyond most DBAs who wish to set up a sandbox environment (as is the cost of investing in hardware with a singular use).
Two years ago, I set a goal for myself to learn about RAC and I went looking for a solution that, in the best scenario, wouldn't cost me anything. There were various resources on the internet with different pieces of information on how to do this; this paper is an attempt to show how I was able to do it for $0 and the things I have learned since then that make the process of building your own RAC system much easier.
The Pieces You'll Need
Please remember that the software you download from Oracle is for evaluation purposes only; do not use anything you build using these instructions in a production environment!
First, let's talk hardware. At a minimum, you'll need 8GB of RAM on the server you're planning to build this on. Why 8GB? You'll need 2 virtual machines, and the minimum you'll want to create those machines with is 2GB of RAM each. A virtual machine grabs its 2GB of RAM whether you're actively using it or not (for a DBA analogy, think of the SGA when an Oracle instance starts up: the instance grabs the physical memory outlined in your init.ora file and keeps it allocated as long as the instance
is running). OK, you're thinking: 2GB+2GB is 4GB, so why do I need 8GB of RAM? It's never a good idea to use more than 50% of your physical RAM for virtual machines. You certainly CAN do it; it's very possible, however, that weird things will start to happen if your VMs use more than 50% (especially if you're using Windows as your host operating system).
Next, disk space. At a minimum, I would allocate 20GB for each virtual machine (40GB total), and at least 30GB for your shared disks, so you'll need at least 70GB of disk space. As we will see, the virtualization software we'll use is very efficient at using disk space; the actual disk space used at the host operating system level doesn't get allocated to the virtual machine until it is needed, but making sure you have at least 70GB of usable disk space will be the minimum to get started.
Next, the software:
1. Oracle Database 11gR2 (available for download at http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html). As of March 2014, the latest 11.x version available is 11.2.0.1.0. Download the two files that make up the Linux x86-64 link.
2. Oracle Grid Software: the Oracle Grid software is what communicates between your servers and what allows the servers to act as a single entity. The Grid software can be downloaded from http://www.oracle.com/technetwork/products/clusterware/downloads/index.html. As of March 2014, the latest version of the Grid software is 11.2.0.1.0. Download the Linux x86-64 version. Make sure to also grab the cluvfy utility; this will be used to verify the cluster right before installing.
3. CentOS release 5.x 64-bit: CentOS is a free operating system that is equivalent (with some very minor exceptions) to Red Hat Enterprise Linux. You can find a public mirror to download CentOS from http://wiki.centos.org/Download. From there, click on the x86_64 link next to CentOS-5 (as of March 2014, the latest 5.x release is 5.10). Pick a location close to you, then click the file named CentOS-5.10-x86_64-bin-DVD-1of2.iso; don't worry if you don't have a DVD burner, we're not going to actually burn the DVD.
4. Oracle VirtualBox (available from http://www.oracle.com/technetwork/server-storage/virtualbox/downloads/index.html): Oracle VirtualBox is a free virtualization program from Oracle. It differs from Oracle's other virtualization product (Oracle VM) in the important distinction that it requires an underlying operating system to run on top of. As such, it is not suitable for most virtualized production environments, as all system calls (disk reads and writes, memory reads and writes, etc.) have to be translated to the native host operating system. This usually causes enough of a performance hit that using VirtualBox in production is not acceptable. For our purposes, however, VirtualBox will do the job.
Believe it or not, that's all the pieces you'll need to build your own sandbox RAC environment.
The Steps
Oracle VirtualBox
First, install Oracle VirtualBox on the machine you wish to use. As mentioned before, make sure you have at least 8GB of RAM and 70GB of disk space on this server. The installation is very straightforward and will not be covered in detail here.
CentOS
The process we're going to use to create our virtual machines is as follows: we'll create the first virtual machine, create shared disks, and then clone the first virtual machine. After VirtualBox is installed, run it and create a new virtual machine by clicking on the New icon in the top-left of the screen. Give your new virtual machine a meaningful name (I called mine RAC1), select Linux as the type and Red Hat (64-bit) as the version. For memory size, select 2048MB. Note that this is the minimum; if you have more memory you can use on this server, bump up the memory allocation accordingly.
Next, select Create a virtual hard drive now, then VDI (VirtualBox Disk Image), then Dynamically Allocated. Specify a location and make sure the disk is at least 30GB (again, you can allocate more if you have the space). I mentioned earlier that the virtualization software we're going to use is very efficient when it comes to disk space: after creating the virtual machine, we can look at the corresponding file on our base operating system and we'll see that it's much less than 30GB in size. VirtualBox will dynamically allocate space as it's needed, up to 30GB (or more if we specify more in the wizard).
After that last page in the wizard, you'll see the main VirtualBox page listing the virtual machines that have been created. Before we can start up our VM, we need to make a few tweaks to the network options for the VM. Click on the Network link on the right side of the page, then click on the Adapter 1 tab. Make sure Enable Network Adapter is checked and Attached to: is set to Bridged Adapter, then click Adapter 2. Make sure Enable Network Adapter is checked and Attached to: is set to Internal Network.
Why do we do this? Oracle RAC needs two network cards attached to each server: one to handle communications with the outside world and one to handle communications between the two servers. This second connection is referred to as interprocess communication and needs to be a direct connection between the two servers; this is why the second network adapter for the virtual machine has a connection type of Internal Network.
Click on OK to close the wizard, then click Start in the top-left of the VirtualBox Manager window. Since this is the first time we're starting up the virtual machine, VirtualBox is smart enough to ask where the operating system disk is. Click the folder icon to the left and find where you saved the CentOS ISO file (CentOS-5.10-x86_64-bin-DVD-1of2.iso). Continue through the CentOS 5 installation as you would for a basic server. It should be a server installation with:
A minimum of 4GB of swap space
Firewall disabled
SELinux set to disabled
Package groups:
o Desktop Environments > GNOME Desktop Environment
o Applications > Editors and Graphical Internet
o Development > Development Libraries and Development Tools
o Servers > Server Configuration Tools
On the networking screen, do NOT choose DHCP; the IP addresses need to remain consistent for your server, so pick an IP address for both eth0 (the public interface) and eth1 (the private interface, the interconnect). Make sure the two addresses are on different subnets. As an example, I used the following on my system:
IP Address eth0: 192.168.0.101 (public address)
Default Gateway eth0: 192.168.0.1 (public address)
IP Address eth1: 192.168.1.101 (private address)
Default Gateway eth1: none
Upon completion, shut down your server.
Create Shared Disks
Here's where we get to use the really cool features of VirtualBox. In VirtualBox, we can create shared disks just by issuing two commands:
VBoxManage createhd --filename c:\VMs\shared\asm1.vdi --size 10240 --format VDI --variant Fixed
VBoxManage storageattach RAC1 --storagectl SATA --port 1 --device 0 --type hdd --medium c:\VMs\shared\asm1.vdi --mtype shareable
The first command creates a 10GB disk and makes it available to VirtualBox. The second command attaches the disk to a specific virtual machine. Since we specified --mtype shareable at the end, the disk can be attached to more than one virtual machine. After we clone RAC1, we'll attach the disks to the second virtual machine.
Issue the following commands to create four more attached disks:
VBoxManage createhd --filename c:\VMs\shared\asm2.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename c:\VMs\shared\asm3.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename c:\VMs\shared\asm4.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename c:\VMs\shared\asm5.vdi --size 10240 --format VDI --variant Fixed
And then attach them to the RAC1 virtual machine:
VBoxManage storageattach RAC1 --storagectl SATA --port 2 --device 0 --type hdd --medium c:\VMs\shared\asm2.vdi --mtype shareable
VBoxManage storageattach RAC1 --storagectl SATA --port 3 --device 0 --type hdd --medium c:\VMs\shared\asm3.vdi --mtype shareable
VBoxManage storageattach RAC1 --storagectl SATA --port 4 --device 0 --type hdd --medium c:\VMs\shared\asm4.vdi --mtype shareable
VBoxManage storageattach RAC1 --storagectl SATA --port 5 --device 0 --type hdd --medium c:\VMs\shared\asm5.vdi --mtype shareable
Even though we've defined the disks as shareable, we still need to issue the following commands:
VBoxManage modifyhd c:\VMs\shared\asm1.vdi --type shareable
VBoxManage modifyhd c:\VMs\shared\asm2.vdi --type shareable
VBoxManage modifyhd c:\VMs\shared\asm3.vdi --type shareable
VBoxManage modifyhd c:\VMs\shared\asm4.vdi --type shareable
VBoxManage modifyhd c:\VMs\shared\asm5.vdi --type shareable
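If you want to double-check that all five disks are registered with VirtualBox before moving on, you can list the known virtual disks (a generic VirtualBox command, not a required step from the original write-up):
VBoxManage list hdds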
At the virtual machine operating system level, the new disks will be named:
/dev/sdb
/dev/sdc
/dev/sdd
/dev/sde and
/dev/sdf
Start the RAC1 virtual machine and partition the new disks:
# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305
Command (m for help): p
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1305    10482381   83  Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Repeat the process for disks /dev/sdc through /dev/sdf.
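If you would rather not walk through the interactive dialogue four more times, an untested shortcut (my own, assuming the same single whole-disk partition is wanted on each remaining disk) is to feed the answers to fdisk from stdin:
# Create one primary partition spanning each remaining disk
for d in sdc sdd sde sdf; do
  printf "n\np\n1\n\n\nw\n" | fdisk /dev/$d
done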
Configure the First Virtual Machine
Step 1: Create groups
As root:
/usr/sbin/groupadd -g 500 dba
/usr/sbin/groupadd -g 600 oinstall
/usr/sbin/groupadd -g 700 oper
/usr/sbin/groupadd -g 800 asm
cat /etc/group
Step 2: Check that user nobody exists
As root:
grep nobody /etc/passwd
Step 3: Add the oracle user
As root:
/usr/sbin/useradd -b /home/local/oracle -d /home/local/oracle -g 500 -m -p oracle -u 500 -s /bin/bash oracle
grep oracle /etc/passwd
/usr/sbin/usermod -g oinstall oracle
/usr/sbin/usermod -a -G dba oracle
/usr/sbin/usermod -a -G oper oracle
/usr/sbin/usermod -a -G asm oracle
id oracle
uid=500(oracle) gid=600(oinstall) groups=600(oinstall),500(dba),700(oper),800(asm)
Step 4: Set up directories
As root, create directories for the Oracle Grid software (they must be outside of Oracle's home directory), then change ownership and permission levels:
cd /
mkdir oracledb
mkdir oraclegrid
mkdir oraclegridbase
mkdir oraInventory
chown oracle:oinstall oracledb
chown oracle:oinstall oraclegrid
chown oracle:oinstall oraclegridbase
chown oracle:oinstall oraInventory
chmod 777 oracledb
chmod 777 oraclegrid
chmod 777 oraclegridbase
chmod 777 oraInventory
Step 5: Unzip the Oracle software
As oracle:
[oracle@RAC1 software]$ pwd
/home/local/oracle/software
unzip linux.x64_11gR2_grid.zip
unzip linux.x64_11gR2_database_1of2.zip
unzip linux.x64_11gR2_database_2of2.zip
mkdir cvu
mv cvupack_Linux_x86_64.zip cvu
cd cvu
unzip cvupack_Linux_x86_64.zip
Step 6: Verify that the following packages exist
64-bit only:
yum install binutils.x86_64 -y
yum install elfutils-libelf.x86_64 -y
yum install elfutils-libelf-devel.x86_64 -y
yum install gcc.x86_64 -y
yum install gcc-c++.x86_64 -y
yum install glibc-common.x86_64 -y
yum install libstdc++-devel.x86_64 -y
yum install make.x86_64 -y
yum install sysstat.x86_64 -y
Both 32- and 64-bit:
yum install compat-libstdc++-33.i386 -y
yum install compat-libstdc++-33.x86_64 -y
yum install glibc.i686 -y
yum install glibc.x86_64 -y
yum install glibc-devel.i386 -y
yum install glibc-devel.x86_64 -y
yum install libaio.i386 -y
yum install libaio.x86_64 -y
yum install libgcc.i386 -y
yum install libgcc.x86_64 -y
yum install libstdc++.i386 -y
yum install libstdc++.x86_64 -y
yum install libaio-devel.x86_64 -y
yum install libaio-devel.i386 -y
yum install unixODBC.x86_64 -y
yum install unixODBC.i386 -y
yum install unixODBC-devel.i386 -y
yum install unixODBC-devel.x86_64 -y
yum install pdksh.i386 -y
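To confirm the key packages actually landed, a quick spot check (illustrative, not part of the original steps) is:
rpm -q binutils gcc gcc-c++ glibc glibc-devel libaio libaio-devel sysstat unixODBC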
Step 7: Change security level
Disable SELinux. As root, check the current state:
selinuxenabled && echo enabled || echo disabled
To disable:
echo 0 > /selinux/enforce
Step 8: Check NTP
vi /etc/sysconfig/ntpd
Add -x to the end of the OPTIONS line (inside the quote marks).
/sbin/service ntpd stop
/sbin/service ntpd start
/usr/sbin/ntpq
ntpq> peers
Make sure at least one entry shows up. If not:
1) copy /etc/ntp.conf from RAC1.
2) /sbin/service ntpd stop
3) /sbin/service ntpd start
4) /usr/sbin/ntpq
5) ntpq> peers
For an ntpd reference, see: http://www.eecis.udel.edu/~mills/ntp/html/ntpd.html
Step 9: Set kernel parameters
vi /etc/sysctl.conf
kernel.sem=250 32000 100 142
fs.file-max=327679
net.ipv4.ip_local_port_range=1024 65000
net.core.rmem_default=4194304
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=262144
net.ipv4.tcp_rmem=4194304 4194304 4194304
net.ipv4.tcp_wmem=262144 262144 262144
vi /etc/security/limits.conf
oracle soft nofile 131072
oracle hard nofile 131072
oracle soft nproc 131072
oracle hard nproc 131072
vi /etc/pam.d/login
session required pam_limits.so
Have the system changes take effect:
sysctl -p
Step 10: Configure the hangcheck timer
/sbin/insmod /lib/modules/2.6.18-308.11.1.el5/kernel/drivers/char/hangcheck-timer.ko hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1
Check that at least 1 row is returned:
[root@RAC1 bin]# lsmod | grep -i hang
hangcheck_timer 2526 0
Add the command to /etc/rc.d/rc.local:
vi /etc/rc.d/rc.local
/sbin/insmod /lib/modules/2.6.18-308.11.1.el5/kernel/drivers/char/hangcheck-timer.ko hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1
Step 11: Configure the network
Right before we configured our disks in the Create Shared Disks section above, we created the server with the following IP addresses:
Node 1 Public: 192.168.0.101 (bond0)
Node 1 Private: 192.168.1.101 (bond1)
[root@RAC1 ~]# cat /etc/hosts
127.0.0.1 RAC1 RAC1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# Add these lines:
# Public
192.168.0.101 rac1.localdomain rac1
192.168.0.102 rac2.localdomain rac2
# Private
192.168.1.101 rac1-priv.localdomain rac1-priv
192.168.1.102 rac2-priv.localdomain rac2-priv
# Virtual
192.168.0.171 rac1-vip.localdomain rac1-vip
192.168.0.181 rac2-vip.localdomain rac2-vip
# SCAN
192.168.0.190 rac-scan.localdomain rac-scan
192.168.0.191 rac-scan.localdomain rac-scan
192.168.0.192 rac-scan.localdomain rac-scan
Step 12: Configure ASM support
Step 12.1: Download 3 files based on the kernel version
http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html
oracleasm-2.6.18-308.11.1.el5-2.0.5-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm
oracleasm-support-2.1.7-1.el5.x86_64.rpm
Step 12.2: Install the ASM RPMs as root
rpm -ivf oracleasm-support-2.1.7-1.el5.x86_64.rpm
rpm -ivf oracleasm-2.6.18-308.11.1.el5-2.0.5-1.el5.x86_64.rpm
rpm -ivf oracleasmlib-2.0.4-1.el5.x86_64.rpm
Step 12.3: Check that all were installed successfully
[root@RAC1 software]# rpm -qav | grep oracleasm
oracleasm-2.6.18-308.11.1.el5-2.0.5-1.el5
oracleasm-support-2.1.7-1.el5
oracleasmlib-2.0.4-1.el5
Step 12.4: Configure ASM
[root@RAC1 software]# /etc/init.d/oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library driver. The following questions will determine whether the driver is loaded on boot and what permissions it will have. The current values will be shown in brackets ([]). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: asm
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Step 12.5: Initialize ASM
[root@RAC1 /]# /etc/init.d/oracleasm stop
Dropping Oracle ASMLib disks: [ OK ]
Shutting down the Oracle ASMLib driver: [ OK ]
[root@RAC1 /]# /etc/init.d/oracleasm start
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@RAC1 /]# /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
Step 13: Verify the Cluster
Step 13.1: Run cluvfy
[oracle@RAC1 bin]$ pwd
/home/local/oracle/software/cvu/bin
[oracle@RAC1 bin]$ ./cluvfy comp sys -n RAC1 -p crs -r 11gR2 -osdba dba
Verifying system requirement
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for RAC1:/tmp
Check for multiple users with UID value 500 passed
User existence check passed for oracle
Group existence check passed for oinstall
Group existence check passed for dba
Membership check for user oracle in group oinstall [as Primary] passed
Membership check for user oracle in group dba passed
Run level check passed
Hard limits check passed for maximum open file descriptors
Soft limits check passed for maximum open file descriptors
Hard limits check passed for maximum user processes
Soft limits check passed for maximum user processes
System architecture check passed
Kernel version check passed
Kernel parameter check passed for semmsl
Kernel parameter check passed for semmns
Kernel parameter check passed for semopm
Kernel parameter check passed for semmni
Kernel parameter check passed for shmmax
Kernel parameter check failed for shmmni
Check failed on nodes:
        RAC1
Kernel parameter check passed for shmall
Kernel parameter check failed for file-max
Check failed on nodes:
        RAC1
Kernel parameter check passed for ip_local_port_range
Kernel parameter check passed for rmem_default
Kernel parameter check passed for rmem_max
Kernel parameter check passed for wmem_default
Kernel parameter check failed for wmem_max
Check failed on nodes:
        RAC1
Kernel parameter check failed for aio-max-nr
Check failed on nodes:
        RAC1
Package existence check passed for make
Package existence check passed for binutils
Package existence check passed for gcc(x86_64)
Package existence check passed for libaio(x86_64)
Package existence check passed for glibc(x86_64)
Package existence check passed for compat-libstdc++-33(x86_64)
Package existence check passed for elfutils-libelf(x86_64)
Package existence check passed for elfutils-libelf-devel
Package existence check passed for glibc-common
Package existence check passed for glibc-devel(x86_64)
Package existence check passed for glibc-headers
Package existence check passed for gcc-c++(x86_64)
Package existence check passed for libaio-devel(x86_64)
Package existence check passed for libgcc(x86_64)
Package existence check passed for libstdc++(x86_64)
Package existence check passed for libstdc++-devel(x86_64)
Package existence check passed for sysstat
Package existence check passed for ksh
Check for multiple users with UID value 0 passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Time zone consistency check passed
Verification of system requirement was unsuccessful on all the specified nodes.
Step 13.2: Run cluvfy with the fixup switch
./cluvfy comp sys -n RAC1 -p crs -r 11gR2 -osdba dba -fixup -fixupdir /home/local/oracle/software/cvu/bin/fixit
Log in as root:
cd /tmp/CVU_11.2.0.3.0_oracle
./runfixup.sh
Log back in as oracle:
su - oracle
Step 13.3: Verify the cluster again
[oracle@RAC1 bin]$ ./cluvfy comp sys -n RAC1 -p crs -r 11gR2 -osdba dba
Verifying system requirement
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for RAC1:/tmp
Check for multiple users with UID value 500 passed
User existence check passed for oracle
Group existence check passed for oinstall
Group existence check passed for dba
Membership check for user oracle in group oinstall [as Primary] passed
Membership check for user oracle in group dba passed
Run level check passed
Hard limits check passed for maximum open file descriptors
Soft limits check passed for maximum open file descriptors
Hard limits check passed for maximum user processes
Soft limits check passed for maximum user processes
System architecture check passed
Kernel version check passed
Kernel parameter check passed for semmsl
Kernel parameter check passed for semmns
Kernel parameter check passed for semopm
Kernel parameter check passed for semmni
Kernel parameter check passed for shmmax
Kernel parameter check passed for shmmni
Kernel parameter check passed for shmall
Kernel parameter check passed for file-max
Kernel parameter check passed for ip_local_port_range
Kernel parameter check passed for rmem_default
Kernel parameter check passed for rmem_max
Kernel parameter check passed for wmem_default
Kernel parameter check passed for wmem_max
Kernel parameter check passed for aio-max-nr
Package existence check passed for make
Package existence check passed for binutils
Package existence check passed for gcc(x86_64)
Package existence check passed for libaio(x86_64)
Package existence check passed for glibc(x86_64)
Package existence check passed for compat-libstdc++-33(x86_64)
Package existence check passed for elfutils-libelf(x86_64)
Package existence check passed for elfutils-libelf-devel
Package existence check passed for glibc-common
Package existence check passed for glibc-devel(x86_64)
Package existence check passed for glibc-headers
Package existence check passed for gcc-c++(x86_64)
Package existence check passed for libaio-devel(x86_64)
Package existence check passed for libgcc(x86_64)
Package existence check passed for libstdc++(x86_64)
Package existence check passed for libstdc++-devel(x86_64)
Package existence check passed for sysstat
Package existence check passed for ksh
Check for multiple users with UID value 0 passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Time zone consistency check passed
Verification of system requirement was successful.
Step 14: Create ASM disks
As root, reset the headers on the disks:
dd if=/dev/zero of=/dev/sdb bs=1024 count=1000
dd if=/dev/zero of=/dev/sdc bs=1024 count=1000
dd if=/dev/zero of=/dev/sdd bs=1024 count=1000
dd if=/dev/zero of=/dev/sde bs=1024 count=1000
dd if=/dev/zero of=/dev/sdf bs=1024 count=1000
Make sure ownership and permissions are correct:
[root@RAC1 etc]# ls -ltr /dev/sd*
brw-rw---- 1 oracle oinstall 253, 3 Aug 9 08:03 /dev/sdb
brw-rw---- 1 oracle oinstall 253, 4 Aug 9 08:03 /dev/sdc
brw-rw---- 1 oracle oinstall 253, 5 Aug 9 08:03 /dev/sdd
brw-rw---- 1 oracle oinstall 253, 6 Aug 9 08:03 /dev/sde
brw-rw---- 1 oracle oinstall 253, 6 Aug 9 08:03 /dev/sdf
brw-rw---- 1 oracle oinstall 253, 3 Aug 9 08:03 /dev/sdb1
brw-rw---- 1 oracle oinstall 253, 4 Aug 9 08:03 /dev/sdc1
brw-rw---- 1 oracle oinstall 253, 5 Aug 9 08:03 /dev/sdd1
brw-rw---- 1 oracle oinstall 253, 6 Aug 9 08:03 /dev/sde1
brw-rw---- 1 oracle oinstall 253, 6 Aug 9 08:03 /dev/sdf1
As root:
[root@RAC1 ~]# /etc/init.d/oracleasm createdisk data1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@RAC1 ~]# /etc/init.d/oracleasm createdisk data2 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@RAC1 ~]# /etc/init.d/oracleasm createdisk data3 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@RAC1 ~]# /etc/init.d/oracleasm createdisk data4 /dev/sde1
Writing disk header: done
Instantiating disk: done
[root@RAC1 ~]# /etc/init.d/oracleasm createdisk data5 /dev/sdf1
Writing disk header: done
Instantiating disk: done
[root@RAC1 ~]# /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
DATA4
DATA5
Step 15: Clone the VM
Shut down RAC1:
# shutdown -h now
Clone the RAC.vdi disk:
VBoxManage clonehd c:\VMs\RAC1\RAC.vdi c:\VMs\RAC2\RAC.vdi
Create the RAC2 virtual machine in VirtualBox in the same way as you did for RAC1, with the exception of using the c:\VMs\RAC2\RAC.vdi virtual hard drive. Add the second network adapter as you did on RAC1. After the VM is created, attach the shared disks to RAC2:
VBoxManage storageattach RAC2 --storagectl SATA --port 1 --device 0 --type hdd --medium c:\VMs\shared\asm1.vdi --mtype shareable
VBoxManage storageattach RAC2 --storagectl SATA --port 2 --device 0 --type hdd --medium c:\VMs\shared\asm2.vdi --mtype shareable
VBoxManage storageattach RAC2 --storagectl SATA --port 3 --device 0 --type hdd --medium c:\VMs\shared\asm3.vdi --mtype shareable
VBoxManage storageattach RAC2 --storagectl SATA --port 4 --device 0 --type hdd --medium c:\VMs\shared\asm4.vdi --mtype shareable
VBoxManage storageattach RAC2 --storagectl SATA --port 5 --device 0 --type hdd --medium c:\VMs\shared\asm5.vdi --mtype shareable
Start RAC2 by clicking the Start button on the toolbar. Ignore any network errors during the startup.
Log in to RAC2 as root and reconfigure the network settings:
hostname: RAC2
IP Address eth0: 192.168.0.102 (public address)
Default Gateway eth0: 192.168.0.1 (public address)
IP Address eth1: 192.168.1.102 (private address)
Default Gateway eth1: none
Amend the hostname in the /etc/sysconfig/network file:
NETWORKING=yes
HOSTNAME=RAC2
Remove the current ifcfg-eth0 and ifcfg-eth1 scripts and rename the original scripts from the backup names:
# cd /etc/sysconfig/network-scripts/
# rm ifcfg-eth0 ifcfg-eth1
# mv ifcfg-eth0.bak ifcfg-eth0
# mv ifcfg-eth1.bak ifcfg-eth1
Edit the /home/oracle/.bash_profile file and correct the ORACLE_SID and ORACLE_HOSTNAME values:
ORACLE_SID=RAC2; export ORACLE_SID
ORACLE_HOSTNAME=RAC2; export ORACLE_HOSTNAME
Restart RAC2 and start RAC1. When both nodes have started, check that they can both ping all the public and private IP addresses using the following commands:
ping -c 3 RAC1
ping -c 3 RAC1-priv
ping -c 3 RAC2
ping -c 3 RAC2-priv
On node 2 as root:
[root@RAC2 CVU_11.2.0.3.0_oracle]# /etc/init.d/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk DATA1
Instantiating disk DATA2
Instantiating disk DATA3
Instantiating disk DATA4
Instantiating disk DATA5
[root@RAC2 CVU_11.2.0.3.0_oracle]# /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
DATA4
DATA5
Install the Oracle Grid Software
As the oracle user on node 1 (RAC1):
cd /home/local/oracle/software/grid
./runInstaller
After the installation completes, a configuration script called root.sh must be run on all nodes.
If root.sh fails on any node other than the first one, perform the following steps:
1. On all nodes, modify /etc/sysconfig/oracleasm with:
ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
2. Restart asmlib (on all nodes except the 1st node):
# /etc/init.d/oracleasm restart
3. De-configure the root.sh settings on all nodes (except the 1st node):
$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
4. Run root.sh again on all nodes except the first.
Output of root.sh on node 1:
[root@RAC1 grid]# ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /oracleasm/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file dbhome already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file oraenv already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file coraenv already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2012-07-06 00:14:20: Parsing the host name
2012-07-06 00:14:20: Checking for super user privileges
2012-07-06 00:14:20: User has super user privileges
Using configuration parameter file: /oracleasm/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user root, privgrp root..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start ora.gipcd on RAC1
CRS-2672: Attempting to start ora.mdnsd on RAC1
CRS-2676: Start of ora.gipcd on RAC1 succeeded
CRS-2676: Start of ora.mdnsd on RAC1 succeeded
CRS-2672: Attempting to start ora.gpnpd on RAC1
CRS-2676: Start of ora.gpnpd on RAC1 succeeded
CRS-2672: Attempting to start ora.cssdmonitor on RAC1
CRS-2676: Start of ora.cssdmonitor on RAC1 succeeded
CRS-2672: Attempting to start ora.cssd on RAC1
CRS-2672: Attempting to start ora.diskmon on RAC1
CRS-2676: Start of ora.diskmon on RAC1 succeeded
CRS-2676: Start of ora.cssd on RAC1 succeeded
CRS-2672: Attempting to start ora.ctssd on RAC1
CRS-2676: Start of ora.ctssd on RAC1 succeeded
ASM created and started successfully.
DiskGroup DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user root, privgrp root..
Operation successful.
CRS-2672: Attempting to start ora.crsd on RAC1
CRS-2676: Start of ora.crsd on RAC1 succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 4baed8b3ca254f86bf91e6a19ef6aeeb.
Successful addition of voting disk 0e8a2bac79f84fdcbf1a5dcd73fa208e.
Successful addition of voting disk 401dae362bbb4f76bf3bddb8d047a429.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 4baed8b3ca254f86bf91e6a19ef6aeeb (ORCL:DATA1) [DATA]
2. ONLINE 0e8a2bac79f84fdcbf1a5dcd73fa208e (ORCL:DATA2) [DATA]
3. ONLINE 401dae362bbb4f76bf3bddb8d047a429 (ORCL:DATA3) [DATA]
Located 3 voting disk(s).
CRS-2673: Attempting to stop ora.crsd on RAC1
CRS-2677: Stop of ora.crsd on RAC1 succeeded
CRS-2673: Attempting to stop ora.asm on RAC1
CRS-2677: Stop of ora.asm on RAC1 succeeded
CRS-2673: Attempting to stop ora.ctssd on RAC1
CRS-2677: Stop of ora.ctssd on RAC1 succeeded
CRS-2673: Attempting to stop ora.cssdmonitor on RAC1
CRS-2677: Stop of ora.cssdmonitor on RAC1 succeeded
CRS-2673: Attempting to stop ora.cssd on RAC1
CRS-2677: Stop of ora.cssd on RAC1 succeeded
CRS-2673: Attempting to stop ora.gpnpd on RAC1
CRS-2677: Stop of ora.gpnpd on RAC1 succeeded
CRS-2673: Attempting to stop ora.gipcd on RAC1
CRS-2677: Stop of ora.gipcd on RAC1 succeeded
CRS-2673: Attempting to stop ora.mdnsd on RAC1
CRS-2677: Stop of ora.mdnsd on RAC1 succeeded
CRS-2672: Attempting to start ora.mdnsd on RAC1
CRS-2676: Start of ora.mdnsd on RAC1 succeeded
CRS-2672: Attempting to start ora.gipcd on RAC1
CRS-2676: Start of ora.gipcd on RAC1 succeeded
CRS-2672: Attempting to start ora.gpnpd on RAC1
CRS-2676: Start of ora.gpnpd on RAC1 succeeded
CRS-2672: Attempting to start ora.cssdmonitor on RAC1
CRS-2676: Start of ora.cssdmonitor on RAC1 succeeded
CRS-2672: Attempting to start ora.cssd on RAC1
CRS-2672: Attempting to start ora.diskmon on RAC1
CRS-2676: Start of ora.diskmon on RAC1 succeeded
CRS-2676: Start of ora.cssd on RAC1 succeeded
CRS-2672: Attempting to start ora.ctssd on RAC1
CRS-2676: Start of ora.ctssd on RAC1 succeeded
CRS-2672: Attempting to start ora.asm on RAC1
CRS-2676: Start of ora.asm on RAC1 succeeded
CRS-2672: Attempting to start ora.crsd on RAC1
CRS-2676: Start of ora.crsd on RAC1 succeeded
CRS-2672: Attempting to start ora.evmd on RAC1
CRS-2676: Start of ora.evmd on RAC1 succeeded
CRS-2672: Attempting to start ora.asm on RAC1
CRS-2676: Start of ora.asm on RAC1 succeeded
CRS-2672: Attempting to start ora.DATA.dg on RAC1
CRS-2676: Start of ora.DATA.dg on RAC1 succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 131071 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oraInventory
UpdateNodeList was successful.
If, after running the root.sh script on all nodes, you encounter an error in
the Grid installation program, and the error message tells you to search the
/oraInventory/logs/installActions<date>.log file, and you find an error
similar to:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name CLUSTER2
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for CLUSTER2 (IP address:
10.230.100.82) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name CLUSTER2
see:
http://www.oracle-base.com/articles/11g/oracle-db-11gr2-rac-installation-on-ol5-using-vmware-server-2.php
Install the Oracle Database
The installation of the Oracle Database software is the same as for a
non-RAC instance, with a few exceptions. The screens that are unique to
RAC are listed below.
On this screen, you are prompted to enter the nodes in the cluster that
the database software will be made aware of:
In the Database Configuration Assistant, if you select a RAC installation,
the DBCA will automatically select both nodes. On this screen of the
wizard, ASM is chosen automatically, since RAC was specified. Note that
the ASM disk group must exist before running the Database Configuration
Assistant:
This screen prompts for the Flash Recovery Area. By default the ASM disk
group is selected:
Conclusion
For no money (except for memory and disk requirements) you can build
a fully functional RAC system using Oracle's state-of-the-art software.
The sandbox environment is fully functional and can be used to test
Oracle software and learn the ins and outs of Oracle's premier database
product.
DINOSAURS
IN SPACE -
MOBILIZING
ORACLE FORMS
APPLICATIONS
Mia Urman
www.auraplayer.com
twitter.com/miaurman
www.facebook.com/auraplayer
il.linkedin.com/in/miaurman
With more than 93% of the current population in possession of a mobile
device, we must ask: why can't we access our Oracle Forms based systems
from our mobile devices? Companies operating Oracle Forms based systems
face a seemingly insurmountable challenge. As their legacy technologies
mature, they seek ways to export the business logic trapped within their
Forms systems and leverage it in modern technologies such as Webservices,
mobile or cloud-based environments.
Oracle Forms systems are mainly applications in maintenance mode; most
were developed over a decade ago and, as such, many lack documentation
and the original developers are usually unavailable. Reverse engineering
the system, even if it could be accomplished, would take years and millions
of dollars. Additionally, most Forms based systems are mission-critical
enterprise systems that allow for little or no downtime, making QA of
these processes even more daunting.
Translating Desktop to Mobile - The Mobility Challenge
The challenges of going mobile are not exclusive to Oracle Forms systems.
The translation of a system from a desktop environment to a mobile device
is a difficult challenge to overcome. The nature of tasks performed on a
mobile device is different from tasks done on a desktop, so it's always
important to remember to focus on the process you want to run on the
mobile device and not simply copy the existing desktop form. Since the
user interface of a mobile device is restricted in size and the keyboard
overlaps half the screen, managing real estate is a real challenge. As
such, you need to look with an editing eye to remove Form fields that do
not directly relate to the task at hand. We also must take into
consideration that the use of a mouse and keyboard is all but eliminated
on a mobile device, as almost all interactions are done with the touch of
a finger. This forces us to reexamine how the user does navigation, and
we must figure out how to most effectively move the user through the
process in the mobile-based system. As typing on a mobile device can be
cumbersome, we also need to find ways to reduce the number of plain text
input fields and replace them with clickable lists. These and other
differences between mobile and desktop applications must all be considered
in order to successfully migrate an existing system to mobile.
The Business Need
Matrix, an Oracle Forms development house, decided to face their Oracle
Forms to mobile challenge head-on. They sought to mobilize the surgical
scheduling module of its Tafnit ERP application for medical centers as
the first module in their overall mobility strategy. Tafnit is used to
schedule over 500 different types of procedures for over 2,000,000
patients each year. Due to the existing constraints of Oracle Forms,
accessing this system was impossible from outside the walls of the
hospital. Surgeons were especially affected by this lack of mobility,
since to retrieve schedules or make adjustments to surgeries, doctors
would have to call into a telemarketing system or receive a faxed copy of
their schedule at home. Matrix recognized this inefficiency, but they
needed a solution that didn't require them to reinvent the wheel or break
the bank. They found a solution in a 2-phase modernization project:
First, the Oracle Forms system
was upgraded to the web-enabled Oracle Fusion Middleware 11g. Then, the
system was modernized to run on all mobile devices using a combination of
the Oracle ADF Mobile development framework and AuraPlayer, a unique
Oracle Forms Webservice generation technology.
Designing For Mobility
The first step of the mobility redesign process was defining the specific
business processes needed to run on the mobile device. Identifying the
user actions, along with the input data needed to run the business process
and the results that should be returned, were all integral parts of this
first design phase. They had to consider not only what would be performed
in the system, but also what the possible expected results were.
Understanding the system's success and failure messages, and being able to
react accordingly in the new system, was critical to enable the mobile
system to behave consistently. Once the business process had been
selected, they used AuraPlayer to wrap the Oracle Forms business process
as a Webservice.
The Webservice Creation Process
Figure 1: The AuraPlayer Recording Toolbar
To begin, the designated business scenario was recorded in the Oracle
Forms system using the AuraPlayer Recording Toolbar (a similar process to
recording a macro). After the recording was completed, it was
automatically deployed as a Webservice using the AuraPlayer Service
Manager. The process resulted in generated REST/JSON and SOAP Webservices,
along with the details of which input and output parameters are part of
the service. Once the Webservice generation was complete, Matrix was ready
to begin developing the user interface. Although they could have chosen
any web technology that can visualize a Webservice to develop the user
interface, they chose Oracle ADF Mobile due to the flexibility of coding
in Java and the ability to deploy to both Android and iOS with one code
base.
Figure 2: Automatically Generating Webservices in the AuraPlayer Service Manager
Defining Your Data Structures and Offline Capabilities
Using wizard-based development in JDeveloper, the ADF Mobile DataControls
were created based on the existing AuraPlayer Webservices. This gave
Matrix the basis for binding the ADF Mobile page items to the Oracle Forms
values returned from the Webservice. In addition, the JDeveloper wizard
generated a localized database that would reside on the mobile device.
This would allow the application to work offline on cached data if needed
and allow synchronization of changes back to the original Forms system
once the connection had been restored.
Figure 3: JDeveloper Webservice DataControl development Wizard
Designing Your Mobile User Experience
The UI generation wizard was then used to create the ADF Mobile AMX pages
and the navigation flows between the pages. The wizard built default pages
and the navigation task flow, allowing Matrix to concentrate only on
extending this application to include native device features like
Calendar, location-based services and phone capabilities.
Figure 4: JDeveloper UI generation Wizard
The beauty of ADF Mobile is that Matrix was able to develop an app using a
simple declarative wizard, simply by copying the URL of the Webservices
received from AuraPlayer into JDeveloper. Using the combination of
AuraPlayer and ADF Mobile allowed Matrix to extend their existing system
to new environments. In the modernized system, Matrix maintains only one
source of business logic in the Oracle Forms system for the two UIs: a
Java applet-based UI for the back-end users and a lightweight Oracle ADF
Mobile app based on a Forms Webservice for on-the-go users.
By coupling AuraPlayer with ADF Mobile, Matrix was able to implement its
mobile app in a matter of days without any costly redevelopment or
migration of the underlying system. Now that the Oracle Forms system
modernization is complete, Matrix has a single management system with
different user interfaces, both web-based and mobile, that access the same
core system. Within less than a week of development a new mobile
offering was available, enabling doctors to access surgery scheduling data
and patient information from anywhere at any time.
Figure 5: New Mobile Surgery scheduling app
Figure 6: The Application Task Flow diagram
PROVISIONING
FUSION MIDDLE-
WARE USING
CHEF AND
PUPPET | PART I
Ronald van Luttikhuizen
Simon Haslam
www.vennster.nl
www.veriton.com
twitter.com/rluttikhuizen
nl.linkedin.com/in/soaarchitecture
twitter.com/simon_haslam
uk.linkedin.com/in/simonhaslam
There are several ways to speed up the installation and configuration of
Oracle Fusion Middleware products, so that you can spend more of your time
on creating valuable solutions for the business using these products.
Common approaches involve a public or private cloud: you can move your
infrastructure and middleware to a cloud provider, or use off-the-shelf
and preconfigured appliances such as Oracle Exalogic, Oracle Exadata or
the O-box SOA Appliance to create your on-site private cloud. A third
approach, in case you insist on installing and configuring middleware
yourself, is to automate the provisioning process instead of performing it
manually. The infrastructure on which you provision can still be on
premise as well as in the cloud. Currently, Chef and Puppet are the most
popular general-purpose configuration management tools out there. These
tools are very well suited for automated provisioning of middleware.
This two-part article explains the position of middleware provisioning in
the entire process of software delivery, explains the advantages of
automated provisioning compared to other approaches, introduces Chef and
Puppet, and indicates how you can get started with Oracle Middleware
provisioning using these tools.
Note that Oracle also provides its own, specialized tooling for server and
middleware provisioning, such as Oracle Virtual Assembly Builder (OVAB).
We won't be covering these tools in this article. You can find more
information on OVAB on the Oracle Technology Network (OTN) website.
Software Delivery and Middleware Provisioning
The entire process of delivering functional software on infrastructure
consists of a number of steps that are shown in the following picture:
As a first step you will need machines on which you can install
middleware, packaged apps, and/or custom software: either physical
machines or virtual machines, either on premise or in the cloud. Cloud
providers and server virtualization products such as Oracle VM, VMware,
and Oracle VirtualBox provide features to automate the provisioning of
OS-ready images. Since you don't want to administer and frequently update
dozens of different server images with new patches, versions, and so on,
the number of different images is usually small, and the images themselves
are very basic. In other words, they only contain a small set of necessary
packages.
As a next step we want to roll out non-functional software such as
packages and middleware on these basic images and keep these configurations
up to date with the latest security patches, new versions, and so on.
Configuration management tools such as Chef and Puppet are designed to
apply such changes to machine configurations frequently, in a controlled
fashion.
These first two steps are usually in the domain of IT Operations and
together result in fully configured servers on which functional software
can then be deployed.
The next step is to build, test, and package your (custom) software, such
as applications, services, and business processes, as part of the software
development process. When this process is automated and performed at a
high frequency we use the term Continuous Integration. To facilitate this
we use yet another set of tools: build servers such as Hudson or Jenkins,
build and packaging tools such as Maven or Ant, version control systems
such as Git or Subversion, test tools such as JUnit and FitNesse, and
repositories to store our packaged software such as Nexus or Artifactory.
Finally we need to deploy our applications, services, and business
processes from our artifact repository to our middleware-provisioned
servers, which is called continuous deployment when performed in an
automated and repetitive fashion. While you can use the same tools for
deployment as described for middleware provisioning and continuous
integration, this can become complex if there are several dependencies and
ordering constraints between the deployable artifacts (database scripts,
Java components, SOA composites, BPMN processes, etc.). You can use
specialized deployment tools to manage dependencies and define rollback
scenarios.
In general, as software delivery becomes more and more automated, the
tools that come along with it are better integrated too. An example is the
integration between Vagrant and Chef or Puppet, where you can initiate the
creation of a virtual machine based on a configuration file that will
automatically trigger server provisioning in one go (a sketch follows
below).
Automating the software delivery process improves the time-to-market and
quality of new software and increases predictability. It is relatively
easy to create a business case to demonstrate the added value of this,
compared to a software delivery process with lots of manual steps in it.
Depending on the state and level of automation of the various parts of the
software delivery process, you can choose which aspect to deal with first
when you want to improve the delivery process. You don't need to eat the
entire software delivery elephant in one go.
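As a minimal sketch of that Vagrant integration (the box name and paths
are illustrative, not taken from the article):
# Vagrantfile: create a VM and have Puppet provision it in one go
Vagrant.configure("2") do |config|
  config.vm.box = "centos-6.5"            # illustrative base box
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "manifests"   # directory holding site.pp
    puppet.manifest_file  = "site.pp"
  end
end
A single vagrant up then creates the machine and runs the Puppet
provisioner against it.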
History of Middleware Provisioning
Let's take a trip down memory lane. In the early days of middleware
provisioning we would install and configure all middleware ourselves,
manually. This way of working is called "artisan server crafting" and is
manageable as long as the number of servers is low. It is comparable to
when you get your new laptop and spend a couple of days to get it the way
you want. This is doable for one laptop, but becomes pretty boring when
you need to manage dozens of laptops.
No matter how good we are in our profession, from time to time we will
make some errors. Especially when the middleware installation and
configuration is complicated and involves a great number of steps. Also,
you will notice that after some period of time the state of the servers
isn't the same across all servers: manual changes were applied to some of
the servers but were forgotten on others. In the end, managing a lot of
(identical) servers manually is boring, stressful, error-prone and
time-consuming. And thereby non-scalable and very expensive.
To cope with the growing number of servers and the complexity of the
installations we created our own proprietary scripts and frameworks to
automate certain aspects of the installation for certain products on
certain operating systems. While this improved the quality and timeliness
of the installation, the scripts were often proprietary. New employees
needed time to learn these scripts, the scripts couldn't easily be used by
other organizations and individuals, and the scripts were only applicable
to a certain type of middleware product and version for a specific
operating system.
To this end, standard configuration management tools emerged that support
most operating systems, have a uniform way to describe a wide variety of
server configurations, are able to provision servers, and continuously
keep them in check. Automated configuration management tools often use a
Domain-Specific Language (DSL) to describe a server configuration
independent of the underlying machine, provider and operating system. You
describe the desired state once and can apply it as often as you want.
Today, Chef and Puppet are among the most popular of these general-purpose
configuration management tools and have a big community base. When you
hire new people for your DevOps team, chances are good they know Chef and
Puppet.
Advantages of using configuration management tools
So you know about the evolution from artisan server crafting to the use of
standard configuration management tools. More precisely, these tools offer
the following concrete benefits:
- Automate the repetitive installation and configuration tasks, so you can
focus on improvements and real problems;
- Predictable results through automation;
- (Near) real-time provisioning: automated provisioning is fast;
- Keep servers in sync: a configuration management tool checks the current
state of a server periodically and applies changes needed to bring it
(back) to the desired state;
- The configuration is human readable, thereby serves as documentation,
and is always up-to-date;
- Configuration is version controlled like any other software artefact;
- Configurations can be changed, tested and reapplied in DTAP environments
just like other software.
The term "infrastructure as code" is often used to describe this new way
of working and the associated benefits of automated server provisioning.
In this respect the worlds of development and IT operations, that used to
be clearly separated from each other, become more and more integrated to
work together.
Introduction to Puppet
Puppet is an open-source configuration management tool from Puppet Labs,
written in Ruby. It is shipped as a freely available open-source variant
or as a commercially supported Enterprise Edition that comes with
additional features such as a management console, support, Role-Based
Access Control, and so on. Puppet Labs was founded in 2005.
Puppet describes the desired state of a server in so-called manifests.
Manifest files declare resources such as files, packages, and services
using predefined attributes in a Domain Specific Language. Manifests are
applied by Puppet to nodes, the term Puppet uses to denote machines that
need to be provisioned. Puppet manifests are stored in files with a .pp
extension.
The following snippet shows part of a manifest that will install and
configure Apache HTTP Server when applied to a node:
package { 'httpd':
  name   => 'httpd.x86_64',
  ensure => present,
}
file { 'http.conf':
  path    => '/etc/httpd/conf/httpd.conf',
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  source  => 'puppet:///modules/apache/httpd.conf',
  require => Package['httpd'],
}
service { 'httpd':
  ensure    => running,
  enable    => true,
  subscribe => File['http.conf'],
}
The manifest contains three resource declarations: package, file, and
service. Every resource declaration has a name to uniquely identify it and
a set of name/value pairs to instruct Puppet how the resource should be
configured. In the above example:
- the package httpd.x86_64 needs to be installed;
- the configuration file http.conf must be present, should have certain
rights and should be identical to an httpd.conf file that is centrally
stored by Puppet;
- the service httpd should be running.
Notice that you can use attributes such as require and subscribe to
indicate a certain order in which the resources are applied, or triggered,
by Puppet. If not specified, Puppet will determine the order in which
resources are applied itself. Also, resources with the same name are
applied once per node and not as often as you declare them. Resources are
only applied by Puppet when the target state of a node does not match the
resource declarations for that node.
You can learn more about built-in resource types in Puppet such as file
and user in the Puppet Type Reference at http://docs.puppetlabs.com/
references/latest/type.html. Besides these out-of-the-box resource types
you can also define your own resource types to (re)use in your manifests.
Parts of manifests can be dependent on the specific state of a node, such
as the operating system, amount of memory, and so on. An example is the
package name that can differ between various Linux distributions; for
example 'ntp' versus 'ntpd'. Puppet uses so-called facts that you can use
to retrieve information on the node you are operating on. You can use both
out-of-the-box facts and define your own custom facts:
if $operatingsystem == 'CentOS' { ... }
Several resource declarations can be bundled into coarser-grained units
called classes. Manifests, together with accompanying files, classes,
templates, etc. are packaged into modules. Modules are autonomous building
blocks that provide a certain functionality. An example would be an Oracle
XE module which has all the necessary code and configuration (except the
actual binaries) to install Oracle XE on one or more nodes. You can create
your own modules or reuse existing ones that are published by Puppet Labs
or the community on the Puppet Forge: https://forge.puppetlabs.com. For
example, have a look at the Oracle modules on Puppet Forge by Edwin
Biemond at https://forge.puppetlabs.com/biemond.
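To make the fact usage above concrete, here is a minimal sketch that uses
the $operatingsystem fact in a Puppet selector; the package-name mapping
is illustrative, not taken from the article:
$ntp_pkg = $operatingsystem ? {
  'CentOS' => 'ntp',    # illustrative mapping per distribution
  default  => 'ntpd',
}

package { 'ntp':
  name   => $ntp_pkg,
  ensure => present,
}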
Not all your nodes will have the same configuration. You can use node
declarations in Puppet to indicate which classes and resources should be
applied on which nodes.
node 'www1.example.com' {
  include common
  include apache
}
node 'db1.example.com' {
  include common
  include oraclexe
}
There are mainly two ways of letting Puppet do its magic at runtime. The
first approach (left-hand side of the following figure) is by triggering
Puppet runs on nodes manually or periodically using some scheduler such as
Cron. As part of a Puppet run, Puppet will compile the manifests that it
needs to apply on that node into a node-specific catalog. It will then
inspect the state of the node and determine what the deviation is from the
desired state as described in the manifests. If there is a deviation,
Puppet will apply the catalog so the result is a node that is in the
desired state.
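For example, a manually triggered run could look like this (the manifest
path is illustrative):
# apply a manifest directly on the node itself
puppet apply /etc/puppet/manifests/site.pp

# or, when using a Puppet Master, trigger an agent run by hand
puppet agent --test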
The right-hand side depicts the Puppet client/server model, in which the
central Puppet Master plays an important role in the coordination of the
provisioning process. Every node that needs to be provisioned has a Puppet
Agent installed. The Agent will periodically ask the Master for the
catalog to be applied. The Master authenticates the Agent and puts
together a compiled catalog for that Agent. After the catalog has been
applied by the Agent, the Agent will report back to the Master with a
status update of the Puppet run. Status updates, run information, node
declarations and so on can be inspected in the Puppet Enterprise Console.
The Puppet Enterprise Edition is free up to ten managed nodes. Above that
number of nodes you pay an amount per node, in which the node price is
lowered stepwise as the total number of nodes increases. For information
on pricing, reference customers, and specific differences between the
open-source and Enterprise Edition of Puppet see http://puppetlabs.com/
puppet/puppet-enterprise.
This concludes the first part of this article. In the second part we will
introduce another popular configuration management tool called Chef,
discuss how tools like Puppet and Chef can help you in the provisioning of
Oracle Fusion Middleware, and provide conclusions.
MOBILITY FOR
WEBCENTER
CONTENT
Troy Allen
www.tekstream.com
twitter.com/troy_allen_TS
www.linkedin.com/pub/
troy-allen/3/b17/46b
With the release of Oracle's WebCenter Content (WCC) 11.1.1.8 (SP7),
Oracle also released a mobile app for WCC for Android and iOS. As with
most initial software releases, the mobile application was a framework of
some functionality and held great promise for what will be coming in
future releases. After a few updates to the mobile app, Oracle has filled
out some of the key functions that make it a viable tool for content on
the run, but there is more work to be done. Overall, it is a sound tool
and I use it as a quick way to keep in touch with my documents, but I want
more out of it. This article will take a look at what is working well,
what needs some help, and what needs to be added to turn the WCC mobile
app into a great application I can't do without.
The mobile application (the latest released version at the time of this
article is 11.1.1.8.1.1) is geared towards standard document management
functions including: Search, Libraries, Viewing, Downloads, Check-in,
Workflow, and Sharing.
Search
For those who are familiar with the extensive search capabilities of the
WCC server, the mobile version may seem like a letdown. That was my first
reaction, but as I started using the tool, I realized that I didn't need
more bells and whistles to find my documents and get work done.
Searching with the WCC Mobile app is very much like using the Quick Search
from the primary web-based user interface. The search field performs a
query against the Title, Comments, Full-Text, and a few other metadata
fields. Once a result is found, the user is presented with a result set.
The mobile version allows users to filter their search results by Content
Type or Security Group. Power users of the WCC primary web-based
interface, used to profiling searches and the search Query Builder form,
may be a bit frustrated at first with the mobile application, but this
tool is intended for general users who need to find their content quickly,
view it, and make decisions. For that very reason, Oracle maintains the
newly introduced Libraries and FrameWork Folders to help people navigate
to what is important (Libraries and FrameWork Folders were introduced in
Oracle WCC 11.1.1.8).
Libraries
WCC's Libraries are designed to provide a logical categorization of
content within the repository. Within the Libraries, folder structures
continue the categorization and are often used to automatically assign
metadata and security information to the folder and its content. This
functionality allows for dragging and dropping content into the system
without having to worry about how it will be tagged, streamlining the
check-in processes.
The mobile app also takes advantage of Libraries and Folders by providing
users with an intuitive navigation path to finding content that they need.
Users can tap on the Library, tap on the Info icon to see metadata about
the Library or folder, and can sort the Libraries to their desired view
(in ascending or descending order). Once a user has navigated into a
Library, they also have the ability to add new folders by tapping the plus
sign in the lower portion of the screen. In testing this functionality, I
expected to be able to also create a new library, but this functionality
is restricted to the ADF (Application Development Framework) version of
the WCC user interface. Despite the inability to create libraries,
enabling users to create their own folders makes the sharing of
information across multiple platforms more intuitive and relatively easy.
Viewing
Viewing anything from Word documents to images is easy with the WCC mobile
application. From either a search result or from a folder, simply tap the
title of the document and the application will render the content for
viewing. It is important to note that the file that is getting rendered is
the Native Version.
In WCC, original files that are uploaded are stored in their original file
format as the Native Format. In most cases, WCC is also automatically
creating a Web Viewable version, typically in PDF format. Many
organizations using WCC are adding additional functionality to their PDF
versions, such as watermarking them, or applying dynamic information that
appears on the PDF such as who downloaded it, who the author was, the date
it was downloaded, some kind of watermark, or other pertinent metadata
that should be viewed as part of the PDF. In the mobile app, this
functionality is not available. While being able to see the Native Version
rendered on my mobile device is nice, there may be important information
in the Web Viewable version that I need as well. I'm hoping that Oracle
will embrace its native ability to perform document conversions and make
them part of the mobile application for viewing in future releases.
Downloads
As with viewing, the ability to download a document through the mobile
application is limited to the Native Version of the file. I use a number
of applications on my mobile devices to create slide shows, edit images or
videos, take notes, create documents, or to share content through
cloud-based services. Oracle's mobile
app uses a proprietary storage location for the files downloaded through
the application, which makes it difficult to find them and use them with
other tools. I must, in all fairness, also note that this is not an
unusual practice for applications working with content on both iOS and
Android devices. However, the user does have the option of opening the
document or file in other tools by tapping the up-arrow box at the bottom
of the screen. The Native Version will be transferred to the application
you choose and you can work with it as a one-off document (any changes
made to the document in this fashion are not uploaded to the WCC
repository).
Even with the ability to open the file in other applications, I still have
a general frustration with inter-application communication, as not all
applications lend themselves to Open In registration. This, again, is not
unique to Oracle's mobile solution, but a common issue with many
productivity-based mobile applications.
Check-In
The WCC mobile application allows users to check in content to folders.
However, this is limited to taking a picture or video at the time of
check-in, looking for a picture or video from your device's camera
directory, or uploading a file that you had previously downloaded through
the mobile application. Using tools like Notability and selecting to share
or open in does not provide the Oracle application as an option. While a
lot of apps do not register themselves with the mobile operating system as
a valid candidate for sharing or for being a target to open documents to,
the WCC mobile solution would be greatly improved with this feature, which
would enable true intra-application sharing of content. Like many of you,
I have different apps that I use based on the type of content I'm working
on; having the ability to save files from any application into WCC from my
mobile device would be a huge win.
Workflow
While the mobile app does provide users with the ability to see content
that is in their workflow queue, I was surprised that the solution did not
allow for the approval or acceptance of the workflowed item. I am
constantly on the road traveling for work and I receive a fair number of
items that need to be approved within the WCC repository. Being able to
view a document or image and approve it from my phone or tablet is a huge
timesaver for me. Like me, many business travelers are constantly moving
from location to location and meeting to meeting, and opening up a laptop
or navigating to a website takes time just to approve a document that
could quickly be done through a mobile app. This is definitely one feature
that would make the Oracle mobile application a huge success, and I'm
looking forward to future versions where this might be supported.
Sharing
The sharing features of the application are what I'd expect in most mobile
solutions. Users can choose to attach the file or image from WCC to an
email or they can send a link. Unfortunately, the file that gets attached
to the email, or the URL for the link, is the Native File. Oracle WCC
server is designed to provide multiple versions of managed documents and
comes with a full set of Digital Asset Management capabilities that just
aren't being utilized in the mobile application. Another win for the
product would be to allow users to choose what version and/or rendition of
the file or image they want to share with people (much like the Desktop
Integration Suite tool for Oracle WCC does in Microsoft Outlook).
Final Thoughts
If I had to rate the Oracle WCC mobile application on a scale from 1 to 10
(with 10 being the gold standard for mobile applications), it would
receive a solid 7. The latest release has provided some much needed
functionality over the product's debut release, but there is still work to
be done. I like the interface and its intuitive design and am happy with
the functions that have been included in the current release of the app.
However, not granting access to renditions and the Web Viewable versions
of content, the lack of any actual workflow actions, and the lack of
inter-application support have cost the solution a few points. I think
this is a tool that could easily be rated a solid 9 if just a few items
were addressed. I realize that Oracle will continue to make improvements
to the application, and I can't wait to see what they are.
ANALYTIC
WAREHOUSE
PICKING
Kim Berg Hansen
www.thansen.dk
twitter.com/kibeha
www.linkedin.com/in/kibeha
Blog: http://dspsd.blogspot.com
"Analytic functions rock, analytic functions roll", as Tom Kyte usually says.
I couldn't agree more; I use them all the time and cannot imagine living
without analytic functions. They make it possible to do work in SQL that
otherwise would have required slow procedural handling, because they
allow the developer to access data across rows, not just within rows.
A great example is picking goods in a warehouse.
Let us suppose we trade beer. Whenever we buy a batch of beer, the pallet
is placed in a location in an aisle in one of our two warehouses:
When we receive an order for beer, we need to create a picking list telling
the operator at which locations he must go and pick beer, and how much at
each place.
That could be done procedurally by looping over the order lines, for each
line finding locations for that item, deciding which locations to pick from
if there are multiple places, and outputting the results in a suitable
order for the operator to work from.
But as we shall see, it can also be done in a single SQL statement.
The data
The inventory table contains how many of each item is at each location in
the warehouse and what date that batch was purchased.
1 create table inventory (
2 item varchar2(10) -- identification of the item
3 , loc varchar2(10) -- identification of the location
4 , qty number -- quantity present at that location
5 , purch date -- date that quantity was purchased
6 );
7
8 insert into inventory values('Ale' , '1-A-20', 18, DATE '2014-02-01');
9 insert into inventory values('Ale' , '1-A-31', 12, DATE '2014-02-05');
10 insert into inventory values('Ale' , '1-C-05', 18, DATE '2014-02-03');
11 insert into inventory values('Ale' , '2-A-02', 24, DATE '2014-02-02');
12 insert into inventory values('Ale' , '2-D-07', 9, DATE '2014-02-04');
13 insert into inventory values('Bock', '1-A-02', 18, DATE '2014-02-06');
14 insert into inventory values('Bock', '1-B-11', 4, DATE '2014-02-05');
15 insert into inventory values('Bock', '1-C-04', 12, DATE '2014-02-03');
16 insert into inventory values('Bock', '1-B-15', 2, DATE '2014-02-02');
17 insert into inventory values('Bock', '2-D-23', 1, DATE '2014-02-04');
18 commit;
The orderline table contains how many of each item are to be picked for
each order.
1 create table orderline (
2 ordno number -- id-number of the order
3 , item varchar2(10) -- identification of the item
4 , qty number -- quantity ordered
5 );
6
7 insert into orderline values (42, 'Ale' , 24);
8 insert into orderline values (42, 'Bock', 18);
9 commit;
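As a quick sanity check of the test data (not part of the original
walkthrough), totalling the inventory per item gives the grand totals that
will reappear below as the final running sums:
select item, sum(qty) total_qty
from inventory
group by item
order by item;

ITEM  TOTAL_QTY
----- ---------
Ale          81
Bock         37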
FIFO picking
We wish our operator to pick the beers for order number 42. To avoid at
some point having a lot of old beer in the warehouse, the beers should be
picked in order of purchase date: the First-In-First-Out or FIFO principle.
We join orderlines with inventory and order each item by purchase date:
1 select o.item
2 , o.qty ord_qty
3 , i.loc
4 , i.purch
5 , i.qty loc_qty
6 from orderline o
7 join inventory i
8 on i.item = o.item
9 where o.ordno = 42
10 order by o.item, i.purch, i.loc;
ITEM ORD_QTY LOC PURCH LOC_QTY
----- ------- ------- ---------- -------
Ale 24 1-A-20 2014-02-01 18
Ale 24 2-A-02 2014-02-02 24
Ale 24 1-C-05 2014-02-03 18
Ale 24 2-D-07 2014-02-04 9
Ale 24 1-A-31 2014-02-05 12
Bock 18 1-B-15 2014-02-02 2
Bock 18 1-C-04 2014-02-03 12
Bock 18 2-D-23 2014-02-04 1
Bock 18 1-B-11 2014-02-05 4
Bock 18 1-A-02 2014-02-06 18
Visually we see what we should pick: first 18 of the oldest Ale, then 6
from the next oldest, and similarly for the Bock. Now how to do this in
SQL?
We add an analytic rolling sum to the select list:
...
6 , sum(i.qty) over (
7 partition by i.item
8 order by i.purch, i.loc
9 rows between unbounded preceding and current row
10 ) sum_qty
...
ITEM ORD_QTY LOC PURCH LOC_QTY SUM_QTY
----- ------- ------- ---------- ------- -------
Ale 24 1-A-20 2014-02-01 18 18
Ale 24 2-A-02 2014-02-02 24 42
Ale 24 1-C-05 2014-02-03 18 60
Ale 24 2-D-07 2014-02-04 9 69
Ale 24 1-A-31 2014-02-05 12 81
Bock 18 1-B-15 2014-02-02 2 2
Bock 18 1-C-04 2014-02-03 12 14
Bock 18 2-D-23 2014-02-04 1 15
Bock 18 1-B-11 2014-02-05 4 19
Bock 18 1-A-02 2014-02-06 18 37
This ROWS BETWEEN clause makes SUM_QTY a cumulative sum. We see that the
first 18 is less than 24 so it is not enough, but 42 is sufficient so we
need no more. The problem is to make a where clause that includes both 18
and 42.
We can solve it by a small change to the ROWS BETWEEN clause:
...
6 , sum(i.qty) over (
7 partition by i.item
8 order by i.purch, i.loc
9 rows between unbounded preceding and 1 preceding
10 ) sum_prv_qty
...
ITEM ORD_QTY LOC PURCH LOC_QTY SUM_PRV_QTY
----- ------- ------- ---------- ------- -----------
Ale 24 1-A-20 2014-02-01 18
Ale 24 2-A-02 2014-02-02 24 18
Ale 24 1-C-05 2014-02-03 18 42
Ale 24 2-D-07 2014-02-04 9 60
Ale 24 1-A-31 2014-02-05 12 69
Bock 18 1-B-15 2014-02-02 2
Bock 18 1-C-04 2014-02-03 12 2
Bock 18 2-D-23 2014-02-04 1 14
Bock 18 1-B-11 2014-02-05 4 15
Bock 18 1-A-02 2014-02-06 18 19
Now SUM_PRV_QTY is the cumulative sum of all previous rows. When all
previous rows have picked at least the ordered quantity, we can stop.
So we put NVL(...,0) around our analytic sum to avoid NULL problems, and
then we can filter on all rows where the previous rows have not picked
everything yet:
1 select s.*
2 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
3 from (
...
19 ) s
20 where s.sum_prv_qty < s.ord_qty
21 order by s.item, s.purch, s.loc;
ITEM ORD_QTY LOC PURCH LOC_QTY SUM_PRV_QTY PICK_QTY
----- ------- ------- ---------- ------- ----------- --------
Ale 24 1-A-20 2014-02-01 18 0 18
Ale 24 2-A-02 2014-02-02 24 18 6
Bock 18 1-B-15 2014-02-02 2 0 2
Bock 18 1-C-04 2014-02-03 12 2 12
Bock 18 2-D-23 2014-02-04 1 14 1
Bock 18 1-B-11 2014-02-05 4 15 3
The least of the location quantity and what's left to pick is what we need
to pick at that location.
So we can now simplify and get a picking list in location order for our
picking operator:
1 select s.loc
2 , s.item
3 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
4 from (
...
19 ) s
20 where s.sum_prv_qty < s.ord_qty
21 order by s.loc;
LOC ITEM PICK_QTY
------- ----- --------
1-A-20 Ale 18
1-B-11 Bock 3
1-B-15 Bock 2
1-C-04 Bock 12
2-A-02 Ale 6
2-D-23 Bock 1
The picking list shows the locations the operator needs to visit and how
much to pick at each location:
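For reference, here is the whole statement assembled from the fragments
above; nothing new is added, the elided inner query is simply written out
in full:
1 select s.loc
2 , s.item
3 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
4 from (
5 select o.item
6 , o.qty ord_qty
7 , i.loc
8 , i.purch
9 , i.qty loc_qty
10 , nvl(sum(i.qty) over (
11 partition by i.item
12 order by i.purch, i.loc
13 rows between unbounded preceding and 1 preceding
14 ),0) sum_prv_qty
15 from orderline o
16 join inventory i
17 on i.item = o.item
18 where o.ordno = 42
19 ) s
20 where s.sum_prv_qty < s.ord_qty
21 order by s.loc;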
Switch picking strategies
The SQL now has two ORDER BY clauses: one within the analytic function
defines the picking strategy (FIFO), the other at the end of the SQL
defines the picking route.
We can then change picking strategy simply by changing the analytic order
by. For example, if we were trading items with no "Best before" date, we
could exchange the First-In-First-Out principle with a principle of least
number of picks for fast picking:
...
12 order by i.qty desc, i.loc
...
LOC ITEM PICK_QTY
------- ----- --------
1-A-02 Bock 18
2-A-02 Ale 24
Or a principle of cleaning out small quantities first for efficient space
management:
...
12 order by i.qty, i.loc
...
LOC ITEM PICK_QTY
------- ----- --------
1-A-20 Ale 3
1-A-31 Ale 12
1-B-11 Bock 4
1-B-15 Bock 2
1-C-04 Bock 11
2-D-07 Ale 9
2-D-23 Bock 1
The last output (the principle of cleaning out small quantities first) is
a nice long picking list; let us examine this in more detail:
Picking route
We would like the picking list to be ordered so that the operator goes
"up" in the first aisle he visits and "down" in the next aisle, and then
"up" again, and so on.
As the inner analytic ORDER BY handles the picking strategy, we can
improve the picking route by changing the outer ORDER BY. First we need to
split the location into warehouse, aisle and position.
(In real life these might be columns by themselves in a location table, or
we might need a more complex regexp_substr, but here we can use simple
substr.)
1 select to_number(substr(s.loc,1,1)) warehouse
2 , substr(s.loc,3,1) aisle
3 , to_number(substr(s.loc,5,2)) position
4 , s.loc
5 , s.item
6 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
7 from (
8 select o.item
9 , o.qty ord_qty
10 , i.loc
11 , i.purch
12 , i.qty loc_qty
13 , nvl(sum(i.qty) over (
14 partition by i.item
15 order by i.qty, i.loc -- small qty first principle
16 rows between unbounded preceding and 1 preceding
17 ),0) sum_prv_qty
18 from orderline o
19 join inventory i
20 on i.item = o.item
21 where o.ordno = 42
22 ) s
23 where s.sum_prv_qty < s.ord_qty
24 order by s.loc;
WAREHOUSE AISLE POSITION LOC ITEM PICK_QTY
--------- ----- -------- ------- ----- --------
1 A 20 1-A-20 Ale 3
1 A 31 1-A-31 Ale 12
1 B 11 1-B-11 Bock 4
1 B 15 1-B-15 Bock 2
1 C 4 1-C-04 Bock 11
2 D 7 2-D-07 Ale 9
2 D 23 2-D-23 Bock 1
We can see our picking operator needs to double back on himself a few
times if he picks in the order we output the data: not very efficient.
Let's improve that.
Using DENSE_RANK we can number each visited aisle consecutively:
1 select to_number(substr(s.loc,1,1)) warehouse
2 , substr(s.loc,3,1) aisle
3 , dense_rank() over (
4 order by to_number(substr(s.loc,1,1)) -- warehouse
5 , substr(s.loc,3,1) -- aisle
6 ) aisle_no
7 , to_number(substr(s.loc,5,2)) position
8 , s.loc
9 , s.item
10 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
11 from (
...
WAREHOUSE AISLE AISLE_NO POSITION LOC ITEM PICK_QTY
--------- ----- -------- -------- ------- ----- --------
1 A 1 20 1-A-20 Ale 3
1 A 1 31 1-A-31 Ale 12
1 B 2 11 1-B-11 Bock 4
1 B 2 15 1-B-15 Bock 2
1 C 3 4 1-C-04 Bock 11
2 D 4 7 2-D-07 Ale 9
2 D 4 23 2-D-23 Bock 1
We wrap the entire SQL in an inline view and order the result by warehouse
and aisle, and then odd aisles ascending and even aisles descending:
1 select s2.warehouse, s2.aisle, s2.aisle_no, s2.position
2 , s2.loc, s2.item, s2.pick_qty
3 from (
...
26 ) s2
27 order by s2.warehouse
28 , s2.aisle_no
29 , case
30 when mod(s2.aisle_no,2) = 1 then s2.position
31 else -s2.position
32 end;
WAREHOUSE AISLE AISLE_NO POSITION LOC ITEM PICK_QTY
--------- ----- -------- -------- ------- ----- --------
1 A 1 20 1-A-20 Ale 3
1 A 1 31 1-A-31 Ale 12
1 B 2 15 1-B-15 Bock 2
1 B 2 11 1-B-11 Bock 4
1 C 3 4 1-C-04 Bock 11
2 D 4 23 2-D-23 Bock 1
2 D 4 7 2-D-07 Ale 9
This is a much better picking route for our operator: alternately "up" and
"down":
But what if the two warehouses only had one connecting door? We can solve
that easily by changing the DENSE_RANK to PARTITION by warehouse:
6 , dense_rank() over (
7 partition by to_number(substr(s.loc,1,1)) -- warehouse
8 order by substr(s.loc,3,1) -- aisle
9 ) aisle_no
...
WAREHOUSE AISLE AISLE_NO POSITION LOC ITEM PICK_QTY
--------- ----- -------- -------- ------- ----- --------
1 A 1 20 1-A-20 Ale 3
1 A 1 31 1-A-31 Ale 12
1 B 2 15 1-B-15 Bock 2
1 B 2 11 1-B-11 Bock 4
1 C 3 4 1-C-04 Bock 11
2 D 1 7 2-D-07 Ale 9
2 D 1 23 2-D-23 Bock 1
In this output we restart the AISLE_NO sequence for each warehouse, so
aisle D in warehouse 2 becomes an odd aisle and is thus ordered ascending:
Batch pick multiple orders
So far we've picked just one order; now let's try multiple orders. We get
rid of our order 42 from before and replace it with three other orders:
1 delete orderline;
2 insert into orderline values (51, 'Ale' , 24);
3 insert into orderline values (51, 'Bock', 18);
4 insert into orderline values (62, 'Ale' , 8);
5 insert into orderline values (73, 'Ale' , 16);
6 insert into orderline values (73, 'Bock', 6);
7 commit;
We can do FIFO batch picking on the total quantities by simply grouping by
item. The easy way is to use a WITH subquery for the grouped data and then
simply replace the orderline table with the orderbatch subquery in the
FIFO query:
1 with orderbatch as (
2 select o.item
3 , sum(o.qty) qty
4 from orderline o
5 where o.ordno in (51, 62, 73)
6 group by o.item
7 )
8 select s.loc
9 , s.item
10 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
11 from (
12 select o.item
13 , o.qty ord_qty
14 , i.loc
15 , i.purch
16 , i.qty loc_qty
17 , nvl(sum(i.qty) over (
18 partition by i.item
19 order by i.purch, i.loc -- FIFO
20 rows between unbounded preceding and 1 preceding
21 ),0) sum_prv_qty
22 from orderbatch o
23 join inventory i
24 on i.item = o.item
25 ) s
26 where s.sum_prv_qty < s.ord_qty
27 order by s.loc;
LOC ITEM PICK_QTY
------- ----- --------
1-A-02 Bock 5
1-A-20 Ale 18
1-B-11 Bock 4
1-B-15 Bock 2
1-C-04 Bock 12
1-C-05 Ale 6
2-A-02 Ale 24
2-D-23 Bock 1
The result is OK, but we cannot tell how much of each pick goes to each
order. So let us add SUM_QTY besides SUM_PRV_QTY and calculate from/to
quantities:
1 with orderbatch as (
...
6 )
7 select s.loc, s.item
8 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
9 , sum_prv_qty + 1 from_qty
10 , least(sum_qty, ord_qty) to_qty
11 from (
12 select o.item, o.qty ord_qty
...
19 , nvl(sum(i.qty) over (
20 partition by i.item
21 order by i.purch, i.loc
22 rows between unbounded preceding and current row
23 ),0) sum_qty
24 from orderbatch o
25 join inventory i
26 on i.item = o.item
27 ) s
28 where s.sum_prv_qty < s.ord_qty
29 order by s.item, s.purch, s.loc;
LOC ITEM PICK_QTY FROM_QTY TO_QTY
------- ----- -------- -------- ------
1-A-20 Ale 18 1 18
2-A-02 Ale 24 19 42
1-C-05 Ale 6 43 48
1-B-15 Bock 2 1 2
1-C-04 Bock 12 3 14
2-D-23 Bock 1 15 15
1-B-11 Bock 4 16 19
1-A-02 Bock 5 20 24
The output shows quantity intervals: the 24 Ale we pick at 2-A-02 are
number 19-42 out of the total 48 Ale we are picking. Similarly, for
orderlines we create quantity intervals:
1 select o.ordno, o.item, o.qty
2 , nvl(sum(o.qty) over (
3 partition by o.item
4 order by o.ordno
5 rows between unbounded preceding and 1 preceding
6 ),0) + 1 from_qty
7 , nvl(sum(o.qty) over (
8 partition by o.item
9 order by o.ordno
10 rows between unbounded preceding and current row
11 ),0) to_qty
12 from orderline o
13 where ordno in (51, 62, 73)
14 order by o.item, o.ordno;
ORDNO ITEM QTY FROM_QTY TO_QTY
----- ----- ---- -------- ------
51 Ale 24 1 24
62 Ale 8 25 32
73 Ale 16 33 48
51 Bock 18 1 18
73 Bock 6 19 24
The 8 Ale from order 62 are number 25-32 out of the total 48 Ale.
Now we can join on overlapping quantity intervals:
The ORDERLINES subquery creates the quantity intervals for the orderlines.
ORDERBATCH then sums quantities by item to be batch picked in the FIFO
subquery. The FIFO subquery is joined to ORDERLINES on overlapping
intervals.
1 with orderlines as (
2 select o.ordno, o.item, o.qty
3 , nvl(sum(o.qty) over (
4 partition by o.item
5 order by o.ordno
6 rows between unbounded preceding and 1 preceding
7 ),0) + 1 from_qty
8 , nvl(sum(o.qty) over (
9 partition by o.item
10 order by o.ordno
11 rows between unbounded preceding and current row
12 ),0) to_qty
13 from orderline o
14 where ordno in (51, 62, 73)
15 ), orderbatch as (
16 select o.item, sum(o.qty) qty
17 from orderlines o
18 group by o.item
19 ), fifo as (
20 select s.loc, s.item, s.purch
21 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
22 , sum_prv_qty + 1 from_qty
23 , least(sum_qty, ord_qty) to_qty
24 from (
25 select o.item, o.qty ord_qty
26 , i.loc, i.purch, i.qty loc_qty
27 , nvl(sum(i.qty) over (
28 partition by i.item
29 order by i.purch, i.loc
30 rows between unbounded preceding and 1 preceding
31 ),0) sum_prv_qty
32 , nvl(sum(i.qty) over (
33 partition by i.item
34 order by i.purch, i.loc
35 rows between unbounded preceding and current row
36 ),0) sum_qty
37 from orderbatch o
38 join inventory i
39 on i.item = o.item
40 ) s
41 where s.sum_prv_qty < s.ord_qty
42 )
43 select f.loc, f.item, f.purch, f.pick_qty, f.from_qty, f.to_qty
44 , o.ordno, o.qty, o.from_qty, o.to_qty
45 from fifo f
46 join orderlines o
47 on o.item = f.item
48 and o.to_qty >= f.from_qty
49 and o.from_qty <= f.to_qty
50 order by f.item, f.purch, o.ordno;
LOC ITEM PURCH PICK_QTY FROM_QTY TO_QTY ORDNO QTY FROM_QTY TO_QTY
------- ----- ---------- -------- -------- ------ ----- ---- -------- ------
1-A-20 Ale 2014-02-01 18 1 18 51 24 1 24
2-A-02 Ale 2014-02-02 24 19 42 51 24 1 24
2-A-02 Ale 2014-02-02 24 19 42 62 8 25 32
2-A-02 Ale 2014-02-02 24 19 42 73 16 33 48
1-C-05 Ale 2014-02-03 6 43 48 73 16 33 48
1-B-15 Bock 2014-02-02 2 1 2 51 18 1 18
1-C-04 Bock 2014-02-03 12 3 14 51 18 1 18
2-D-23 Bock 2014-02-04 1 15 15 51 18 1 18
1-B-11 Bock 2014-02-05 4 16 19 51 18 1 18
1-B-11 Bock 2014-02-05 4 16 19 73 6 19 24
1-A-02 Bock 2014-02-06 5 20 24 73 6 19 24
Notice the pick of 24 Ale at 2-A-02 is joined to all three orders. Those
24 are number 19 to 42 of the total, which overlaps with all three
intervals for the orders.
Notice the pick of 24 Ale at 2-A-02 is joined to all three orders. Those 24 are numbers 19 to 42 of the total, which overlaps with all three intervals for the orders.
By using LEAST and GREATEST we calculate how much to pick from each location for each order. We need to pick the smallest of either the quantity on the location or how much the two intervals overlap.
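To make the arithmetic concrete, here is a small standalone query. It is not part of the article's script; it simply hard-codes the three Ale order intervals and the 2-A-02 pick (quantities 19 to 42, 24 units on the location) from the output above and applies the same LEAST/GREATEST expression:

select t.ordno
     , least(t.loc_qty
           , least(t.ord_to, t.loc_to)
             - greatest(t.ord_from, t.loc_from) + 1
       ) pick_ord_qty
  from (
        select 51 ordno, 24 loc_qty, 19 loc_from, 42 loc_to, 1 ord_from, 24 ord_to from dual
        union all
        select 62, 24, 19, 42, 25, 32 from dual
        union all
        select 73, 24, 19, 42, 33, 48 from dual
       ) t;

It returns 6, 8 and 10 respectively, which together account for all 24 Ale on the location.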
Adding that expression to the query as a PICK_ORD_QTY column:
1 with orderlines as (
...
15 ), orderbatch as (
...
19 ), fifo as (
20 select s.loc, s.item, s.purch, s.loc_qty
...
24 from (
25 select o.item, o.qty ord_qty
26 , i.loc, i.purch, i.qty loc_qty
...
40 ) s
41 where s.sum_prv_qty < s.ord_qty
42 )
43 select f.loc, f.item, f.purch, f.pick_qty, f.from_qty, f.to_qty
44 , o.ordno, o.qty, o.from_qty, o.to_qty
45 , least(
46 f.loc_qty
47 , least(o.to_qty, f.to_qty)
48 - greatest(o.from_qty, f.from_qty) + 1
49 ) pick_ord_qty
50 from fifo f
51 join orderlines o
52 on o.item = f.item
53 and o.to_qty >= f.from_qty
54 and o.from_qty <= f.to_qty
55 order by f.item, f.purch, o.ordno;
LOC     ITEM  PURCH      PICK_QTY FROM_QTY TO_QTY ORDNO  QTY FROM_QTY TO_QTY PICK_ORD_QTY
------- ----- ---------- -------- -------- ------ ----- ---- -------- ------ ------------
1-A-20  Ale   2014-02-01       18        1     18    51   24        1     24           18
2-A-02  Ale   2014-02-02       24       19     42    51   24        1     24            6
2-A-02  Ale   2014-02-02       24       19     42    62    8       25     32            8
2-A-02  Ale   2014-02-02       24       19     42    73   16       33     48           10
1-C-05  Ale   2014-02-03        6       43     48    73   16       33     48            6
1-B-15  Bock  2014-02-02        2        1      2    51   18        1     18            2
1-C-04  Bock  2014-02-03       12        3     14    51   18        1     18           12
2-D-23  Bock  2014-02-04        1       15     15    51   18        1     18            1
1-B-11  Bock  2014-02-05        4       16     19    51   18        1     18            3
1-B-11  Bock  2014-02-05        4       16     19    73    6       19     24            1
1-A-02  Bock  2014-02-06        5       20     24    73    6       19     24            5
The 24 Ale we noticed before are picked from location 2-A-02 and split with 6 to order 51, 8 to order 62 and 10 to order 73.
So we clean up the code, leave only the columns the picking operator needs, and order by location:
1 with orderlines as (
...
15 ), orderbatch as (
...
19 ), fifo as (
...
41 )
42 select f.loc, f.item, f.pick_qty pick_at_loc, o.ordno
43 , least(
44 f.loc_qty
45 , least(o.to_qty, f.to_qty)
46 - greatest(o.from_qty, f.from_qty) + 1
47 ) qty_for_ord
48 from fifo f
49 join orderlines o
50 on o.item = f.item
51 and o.to_qty >= f.from_qty
52 and o.from_qty <= f.to_qty
53 order by f.loc, o.ordno;
LOC     ITEM  PICK_AT_LOC ORDNO QTY_FOR_ORD
------- ----- ----------- ----- -----------
1-A-02  Bock            5    73           5
1-A-20  Ale            18    51          18
1-B-11  Bock            4    51           3
1-B-11  Bock            4    73           1
1-B-15  Bock            2    51           2
1-C-04  Bock           12    51          12
1-C-05  Ale             6    73           6
2-A-02  Ale            24    51           6
2-A-02  Ale            24    62           8
2-A-02  Ale            24    73          10
2-D-23  Bock            1    51           1
So we have a FIFO picking list for multiple orders; all we now need is to give the operator the better picking route.
Multiple orders with picking route
Finally we can combine this batch multi-order FIFO picking with the efficient route calculation going ascending/descending in the aisles:
1 with orderlines as (
...
15 ), orderbatch as (
...
19 ), fifo as (
...
41 ), pick as (
42 select to_number(substr(f.loc,1,1)) warehouse
43 , substr(f.loc,3,1) aisle
44 , dense_rank() over (
45 order by
46 to_number(substr(f.loc,1,1)), -- warehouse
47 substr(f.loc,3,1) -- aisle
48 ) aisle_no
49 , to_number(substr(f.loc,5,2)) position
50 , f.loc, f.item, f.pick_qty pick_at_loc, o.ordno
51 , least(
52 f.loc_qty
53 , least(o.to_qty, f.to_qty)
54 - greatest(o.from_qty, f.from_qty) + 1
55 ) qty_for_ord
56 from fifo f
57 join orderlines o
58 on o.item = f.item
59 and o.to_qty >= f.from_qty
60 and o.from_qty <= f.to_qty
61 )
62 select p.loc, p.item, p.pick_at_loc, p.ordno, p.qty_for_ord
63 from pick p
64 order by p.warehouse
65 , p.aisle_no
66 , case
67 when mod(p.aisle_no,2) = 1 then p.position
68 else -p.position
69 end;
LOC     ITEM  PICK_AT_LOC ORDNO QTY_FOR_ORD
------- ----- ----------- ----- -----------
1-A-02  Bock            5    73           5
1-A-20  Ale            18    51          18
1-B-15  Bock            2    51           2
1-B-11  Bock            4    51           3
1-B-11  Bock            4    73           1
1-C-04  Bock           12    51          12
1-C-05  Ale             6    73           6
2-A-02  Ale            24    51           6
2-A-02  Ale            24    73          10
2-A-02  Ale            24    62           8
2-D-23  Bock            1    51           1
So using analytic functions we ended up with a single SQL statement that efficiently batch-picks multiple orders by the First-In-First-Out principle in an optimal picking route.
Just do it
I've walked through this step by step to demonstrate how I develop SQL step-wise with analytic functions. Once you start using this more and more often, you will get the hang of thinking about it whenever your task requires comparing or summing data across rows. You'll discover that many of your tasks can profitably use analytics to avoid procedural row-by-row code (either PL/SQL or client side) and become much more efficient.
Your boss will love you for utilizing the power in the Oracle database he has paid dearly for. He will save money when your code does not need bigger application servers. Your users will love you for being able to work faster without having to wait for the system. And you will love yourself every time you make an awesome piece of analytic SQL.
The complete script used for this article can be found here:
http://goo.gl/XvgEBd
WHAT DOES
ADAPTIVE IN
ORACLE ACM
MEAN?
Lonneke Dikmans
www.vennster.nl
twitter.com/lonnekedikmans
nl.linkedin.com/in/
serviceorientedarchitecture
In the previous article in OTech Magazine you learned how the case component fits in the BPM Suite and Oracle Fusion Middleware, and when to use BPMN 2.0 versus case management. In this article we look into the adaptive part of ACM. The definition and the different aspects of adaptation are discussed, and the Oracle BPM Suite and Oracle SOA Suite are evaluated against these aspects. The example we use is the one that is shipped with the pre-built virtual machine 11.1.1.7 [1]: the EURent example.
What do we mean by Adaptive?
There have been heated debates over the definition of Adaptive Case Management in the last couple of years. On top of that, people have suggested other terms as well: Production Case Management, Dynamic Case Management and Advanced Case Management (see references).
According to Merriam-Webster [2], adaptation is:
1: the act or process of adapting: the state of being adapted <his ingenious adaptation of the electric cautery knife to surgery (George Blumer)>
2: adjustment to environmental conditions: as
a: adjustment of a sense organ to the intensity or quality of stimulation
b: modification of an organism or its parts that makes it more fit for existence under the conditions of its environment (compare ADJUSTMENT 1b)
Basically, adaptation is about changing something so it fits your needs better, or to protect yourself from change in your environment.
When it comes to adaptive case management, a number of aspects are important:
1. Who makes the change [2]? The business analyst? The developer? An administrator? The knowledge/case worker? The system [4]?
2. How long does it take to make a change? Going through an analysis-design-develop-test-deploy cycle is less adaptive than changing something while executing a case [2], [3].
3. What type of change do I need? A new activity? A new rule? A new plan? A new milestone?
4. What is the goal of the change? To improve the quality of the process? To improve the efficiency of the process? To minimize the risk of failure? Or to minimize the impact of failure? To react to a change in the environment?
Elements we want to change
A case consists of a number of elements [5]:
- Case File. The case file represents case information. It consists of items that can be any type of data structure.
- Role. Caseworkers or a team of caseworkers that are authorized to execute tasks or raise events are represented by a role.
- Input and output parameters.
- Case Plan Model. The case plan model contains both the initial plan and all the elements that support further evolution of the plan through run-time planning by case workers. The plan consists of stages, milestones, tasks, and event listeners. There are rules and entry and exit criteria associated with these elements.
The possibilities the Oracle BPM Suite offers to change these elements differ per element. The end user can make some changes on a case-by-case basis at runtime; the administrator can make changes that apply to all cases; the developer sometimes needs to be involved to make other changes.
Case File
The case file consists of both structured data and unstructured data. In the Oracle Case component the structured data is stored in the BPM database, and unstructured case data is stored in a Document Management System (DMS) or Enterprise Content Management System (ECM) that you integrate with the case component. When designing the case in JDeveloper you point to the folder in your content management system.
Illustration 1. JDeveloper screen to add structured case data and point to ECM for unstructured data
Structured data
The structured data are used as input for activities, as facts in the rules, and as output of the case. Input for activities is either case data or user input that can be saved as case data. You can create your own screens for data entry, based on predefined structured data elements.
Illustration 2. Structured data is shown to the user from the EURent example [1]
Unstructured data
Because the case component points to a folder in your document management system (DMS) or enterprise content management (ECM) system and the data is unstructured, these can be easily changed to whatever your DMS or ECM supports. In the screenshot below you see an example of an upload screen to upload documents to the case. This data is stored in UCM.
Illustration 3. User interface that exposes the API to add documents or other file types to the case file [1]
Changing the Case File
The case file can be easily adapted as far as the unstructured data is concerned. This can be done on the fly, as long as the DMS or ECM supports the data structure (for example movies, audio, etc.). However, adding new structured
data elements to the case is something that needs to be done by a developer. The table below summarizes who can do what type of change and what the scope of the change entails.
What | Scope | Who | Timing | Goal | Tool
Add new (structured) data type | All cases | Developer | Design time | Management information, auditing purposes; business rules are based on the content of the data | JDeveloper
Edit structured data | Instance | Case worker | Runtime | Edit data that is relevant to the case | User interface
Add new unstructured data type | Case instance | Case worker | Runtime | Add new document types to a case | User interface (the ECM should support the document type)
Edit unstructured data | Case instance | Case worker | Runtime | Build the content of the case; this can be for auditing purposes, for communication purposes or other reasons | Case GUI (out of the box or custom made)
Because users often want to change the data they see on the screen as part of the execution of the task, it is a good idea to limit the structured case data to the data you need for your business rules. Other data that you need for the execution of an activity can be fetched in the user interface, based on a key that is part of the structured case data. This offers separation of concerns: when you want to change something in your case, you only need to change the definition in the case; when you want to change something in the GUI, you only need to change something in the GUI.
Illustration 4. Claim check applied: using keys to point to objects that are needed in the GUI but not in the case
Guideline: Limit the use of structured data in the case
Statement: Structured data is used for business rules about milestones, stages and starting or ending activities. All other data, for example data that is needed to execute a task, should be defined in the business applications.
Rationale: Structured data needs to be changed by the developer, and the data that the user wants to see in the screens changes the most; often this has nothing to do with the case progression. Data can be manipulated in the context of different case types; think for example about customer data that can be manipulated in a customer case and in a permit case. The data in the case becomes out of date if it is stored in different cases. Also, organizations often have COTS applications that keep track of the structured data (CRM, for example).
Implications: Use the claim check pattern in case activities. The data services need to take auditing requirements into account.
Changing Roles
According to the standard, a case has one or more case roles. These are called stakeholders in the case component of the Oracle BPM Suite. You can assign application roles, process roles, users or groups to a stakeholder in a case.
Who plays what role can change for two reasons:
1. When a process is changed, it changes the way people work. You might be in production and decide you want people with different skill sets to execute a task.
2. Different departments might assign different people to activities, based on the population, size, etc.
For these two reasons it is a good idea to use application or process roles, rather than users and groups, in the task definitions. This will make sure your process can adapt to changes you make in your organization structure, or to changes you want to make in the process in terms of who does what.
Guideline: Use application roles or process roles in your case
Statement: Assign application roles or process roles to stakeholders in your case, not users and groups.
Rationale: In the LDAP store, groups are often organized according to the hierarchical structure of the organization, and different departments might divide the work differently. If a reorganization takes place, the only thing that has to be changed is the groups or users that are assigned to the role. This can be done at runtime.
Implications: Always use process roles or application roles in the case. Assign the users and groups in the EM.
Design time
JDeveloper allows you to add and edit stakeholders. Once this is deployed, these stakeholders can be used in all subsequent new cases that are started.
Runtime
The case component of the BPM Suite offers an API to the case. One of the methods of the ICaseInstanceService is addStakeholder. This means that if you expose this to the user, they can add stakeholders at runtime as well, to a specific case. These stakeholders won't be part of the regular case execution, only of the running instance you added the stakeholder to.
Illustration 5. JDeveloper screen to add and edit stakeholders (case roles) in a case
Illustration 6. Example of how to expose adding stakeholders to the case worker
Apart from adding stakeholders to running case instances, administrators can assign groups and users to application roles or process roles using the tools from SOA Suite (Enterprise Manager) and the Administration panel in the BPM Workspace. Note that this only works if the developer assigned application roles or process roles to the stakeholders in the case!
This can be changed at runtime and does not require a redeploy of the application; the change is then applied to all cases (both running and new) to which it applies.
Aspect | Add new stakeholder to a case (all cases) | Add new stakeholder to a case (one instance) | Assign people to an activity (all cases) | Assign people to an activity (one instance)
Who | Developer | Case worker, process owner, etc. | Developer, Administrator, Process owner, team manager | Case worker
Timing | Design time | Runtime | Design- and runtime | Runtime
Goal | New participants because the scope of the case is expanded (e.g. include customers) | Add a new stakeholder to one particular instance | Reorganization, holiday, process improvement | Transfer a task to someone else
Tool | JDeveloper | User interface | JDeveloper, BAM, SOA Composer, EM, BPM Workspace | BPM Workspace
Prerequisite | None | Case API is exposed in GUI | Use application roles or process roles as members of the stakeholder | Expose reassign API in GUI
Illustration 7. Administrative interface from BPM Workspace to assign members to roles
Changing input and output parameters
The input parameters are mainly used for the case file and as facts to define rules, as we saw before. The output parameters are mainly used for management information.
The input and output parameters can be defined in JDeveloper at design time; input and output parameter structures can't be changed at runtime. Obviously, the content of the input and output parameters can be changed at runtime: the input parameters are set when the case is started, and the output parameters are either determined by the user or by the system, based on the business rules that you have defined.
Aspect | Input | Output
Scope | All cases | All cases
Who | Developer | Developer
Timing | Design time | Design time
Goal | Add new data to create rules | Measure more fine-grained outcomes
Tool | JDeveloper | JDeveloper
Illustration 8. JDeveloper screen to define the outcome
Changing the case plan model
A case consists of two distinct phases: a design time phase and a run time phase. Activities that are executed can be so-called plan items or discretionary items. The caseworker decides at run time whether and when the discretionary items will be executed [5].
Illustration 9. Design time and run time items. From: CMMN Beta 1
There are a number of things that can be changed in the case plan model:
- The rules that determine when an activity is executed can change
- The activities themselves can change (new activities can occur, for example)
- The type of the activity can change (an activity that was discretionary can become a plan item, or vice versa)
- A new event can be raised
- A new milestone can be defined
- A milestone can be attained
- A milestone can be revoked
- A new stage can be defined
- A stage can be attained or completed
- A stage can be reopened
Defining new elements
The Oracle case component does not support stages. These can be emulated using milestones. New milestones, events and activities for a case plan model can be defined at design time in JDeveloper.
Illustration 10. Adding or editing milestones in JDeveloper
Adding activities is a little less straightforward: first you define the activity and then you promote it to a case activity.
Changing conditions and rules
Rules can be changed using the rule editor in the Oracle Business Process Composer at runtime. The new rules will apply to all the cases. The case component also offers an API to add ad-hoc tasks and raise events (using the ICaseEventService interface).
Aspect | Add element (all cases) | Add element (one instance) | Change conditions of an element (all cases) | Change conditions of an element (one instance)
Who | Developer | Case worker | Developer, Administrator, Process owner, team manager | Case worker
Timing | Design time | Runtime | Design- and runtime | Runtime
Goal | Add new activities, user events or milestones to the case to make it more efficient or to get another result | Create an ad-hoc activity or raise a user event for a specific case because something exceptional happens | Process improvement; changing the rules for when a milestone is reached, when an activity is activated, etc. | Handle an exceptional situation by activating an activity or attaining a milestone
Tool | JDeveloper | User interface | JDeveloper, BAM, SOA Composer, EM, BPM Workspace | User interface
You can define ad hoc tasks (that have not been predefined) or events on a case-by-case basis. The figure below shows an example of this from the EURent sample.
Illustration 11. Example of an ad-hoc task that is added by a caseworker
Summary
You have seen how you can adapt a single case at runtime, change rules for all cases at runtime, and change the plan items in the case plan using JDeveloper.
The caseworker can edit the case file, reassign activities to other users, choose when to execute discretionary activities, and attain and revoke milestones in case instances. Administrative users can change rules at runtime, assign new groups, application roles and users to existing application roles and process roles, and edit the task definitions. These changes apply to all cases. Adding new structured data elements, activities, milestones or events to the initial case plan can only be done at design time in JDeveloper.
BPM Suite offers a BPM space that supports cooperation between case workers, and the BAM tooling that is part of the SOA Suite and BPM Suite helps team managers and process owners to monitor KPIs and allows them to make the case execution more efficient.
At the moment there is no intelligence in the case component. The case component is not a self-learning system, nor does it offer capabilities to business users (the case worker) to promote one-off actions to case activities that are part of the initial case plan, available to all case workers in the organization. This is still a development effort.
References
1. Pre-built Virtual Machine for SOA Suite and BPM Suite 11g. http://www.oracle.com/technetwork/middleware/soasuite/learnmore/vmsoa-172279.html
2. Adaptation. Merriam-Webster. http://www.merriam-webster.com/medical/adaptation?show=0&t=1397299440
3. Case Management: Contrasting Production vs. Adaptive. September 2012, Collaborative Planning and Social Business. http://social-biz.org/2012/09/12/case-management-contrasting-production-vs-adaptive/
4. What is Adaptive Case Management? Adaptive Case Management. http://acmisis.wordpress.com/what-is-adaptive-case-management-acm/
5. Advancing BPM by adding Smart or Intelligent. Welcome to the Real (IT) World. http://isismjpucher.wordpress.com/category/machine-learning/
6. Case Management Model and Notation (CMMN). January 2013. OMG. http://www.omg.org/spec/CMMN/1.0/Beta1/PDF/
ORACLE ACCESS
MANAGER:
CLUSTERS,
CONNECTION
RESILIENCE AND
COHERENCE
Robert Honeyman
www.honeymanit.co.uk
twitter.com/Honeyman_IT
www.facebook.com/
HoneymanItConsulting
uk.linkedin.com/in/
roberthoneyman/
In my previous article for OTech I provided a set of high-level considerations for a High Availability Oracle Access Manager and Oracle Internet Directory solution for Single Sign-On. I outlined the solution topology and some of the key design and implementation considerations. In this article I want to elaborate on some of those OAM and OID related topics which were not covered in depth in the previous article. I will discuss the configuration and options required to build an OAM cluster, provide detail on configuring High Availability for database connections, and outline the use of Coherence and its provision of OAM High Availability features.
OAM cluster configuration example
To start, I provide some detail on building a basic two-node OAM cluster configuration. I start from the assumption that basic Fusion Middleware host and operating system pre-requisites have been satisfied and the OAM repository has been created with RCU prior to embarking on the installation.
The following preparatory step must be performed independently of the deployment hosts:
- Prepare shared storage for your domain AdminServer directory (NFS / Clustered File System)
The main preparatory steps to be performed on both deployment hosts (oamhost1, oamhost2) are:
- Install a JDK and Weblogic 10.3.6
- Install the OAM 11g Release 2 software (Install only option)
- Mount the shared storage directory
As shown in the steps above, I prefer performing an Install only deployment initially. The deployment can then be used as a template for other configurations, or easily rolled back or restored without starting from scratch. If using a virtual machine or versioned storage then you can snapshot your installation; if not, then you always have tar or zip to take backups for subsequent restores.
Once you have completed the preparatory steps you can launch the OAM domain configuration wizard using config.sh. This must be the version from the OAM ORACLE_HOME/common/bin directory to ensure you have the OAM configuration options available to you. Select the Oracle Access Management and Oracle Enterprise Manager options; OPSS and JRF will also be automatically selected as dependencies.
You will then be asked for the domain and application location; here you can specify the shared storage you prepared earlier for your master domain directory. The Weblogic domain startup mode should be Production Mode, as there should be no need for classloader tracing or other developer features with an off-the-shelf Fusion Middleware application. We also want the safety provided by the Weblogic lock and edit change management available in Production Mode for the OAM domain.
When presented with the Optional Configuration, be sure to select the Administration Server and also the Managed Servers, Clusters and Machines option. This will allow configuration of the OAM Weblogic cluster topology.
To configure for high availability, the Admin Server listening address should be a host-independent floating IP address to enable the Admin Server to run on any host. This is standard Fusion Middleware practice, as only one Admin Server can exist within a Weblogic domain.
You must configure two OAM managed servers, each using the Fully Qualified Domain Name (FQDN) of one of the two OAM hosts. The example shown in the Configure Managed Servers screenshot uses managed server names wls_oam1 and wls_oam2, but you can select alternative names if you prefer, such as oamserver1 and oamserver2.
Figure 1 OAM cluster configure managed servers
The OAM configuration tool will then ask for cluster configuration. On the Configure Clusters dialogue you need to name the cluster; as shown, I used oamcluster in the example below. You do not need to specify the cluster address, which will be automatically derived. Unicast is the preferred protocol for WebLogic inter-cluster communication these days, so retain the default setting.
Figure 2 Configure OAM cluster
Having prepared your managed servers and named your cluster, you will need to associate your managed servers wls_oam1 and wls_oam2 with your cluster oamcluster. The Assign Servers to Clusters screenshot below shows this association between the managed servers and the cluster.
Figure 3 OAM managed server cluster assignment
The configuration assistant will also request database connection information, but I provide more detail on this in the next part of the article. Through the remaining steps of the configuration you will need to configure node managers for your OAM hosts, one per node, and ensure the OAM application deployments are targeted at the cluster as opposed to individual managed servers. The final screenshot below shows the OAM applications oam_server and oamsso_logout with deployment targets of oamcluster, and the administrative em and oam_admin (oamconsole) applications deployed to the Admin Server.
Figure 4 OAM cluster target deployments
If all goes well your cluster configuration will complete, with the configuration assistant taking care of the leg work behind the scenes. As discussed in my previous article, there are a few additional steps to ensure your OAM cluster is fully operational; these are:
- Configure load balancing for HTTP, e.g. the OAM credential collector (login page) and logout
- Set the OAM RequestCacheType setting to COOKIE
- Configure the front-end host for the OAM cluster
- Deploy Webgate and policies for your secured applications
Connection resilience for OAM database connections
OAM uses JDBC to connect to the OAM repository database, and the preferred approach for connection resilience for Oracle RAC databases is to use WebLogic GridLink Data Sources. GridLink Data Sources have advantages over Multi Data Sources including:
- Fast Connection Failover for enhanced connection failure detection and management
- Fast Application Notifications (FAN) for intelligent load balancing and session affinity
- The ability to use Oracle RAC SCAN addresses
Note that if you are not using an Oracle RAC database for your OAM repository, but an alternative DBMS, you will not be able to use GridLink Data Sources and will have to use Multi Data Sources.
As outlined above when I discussed building a cluster, you will be asked by the OAM configuration tool to specify database connection details, as the screenshot below illustrates.
Upon selecting the Oracle Driver (Thin) for GridLink Connections driver option you will be presented with the configuration components shown in the screenshot.
Figure 5 OAM configure GridLink RAC connections
I discuss the meaning of these options below.
1. The Database Service Name should be configured with the value of the service_names parameter for your OAM database.
2. The SCAN address or host alias and the SCAN listener port are specified in the Service Listener section. SCAN will manage the routing and load balancing to the available RAC VIPs.
3. Specify the ONS (Oracle Notification Service) client configuration to enable the communication transport for notifications from the RAC cluster. Again you can use the SCAN address, and the example shown uses the default ONS port 6200.
4. Enable FAN by selecting the corresponding check box. This will ensure the OAM application servers listen for health and status notifications from the RAC cluster. These notifications will be provided through the ONS client transport outlined in step 3.
Connection resilience for OID database connections
The latest Oracle Identity Management 11.1.2.2 Enterprise Deployment Guide specifies the use of Oracle Unified Directory as the directory service. However, as outlined in my previous article, OUD is still not certified with core legacy technologies such as Oracle Forms and Reports; only OID is. Clearly OUD is the way forward for Oracle Identity Management in the longer term, as OID is not receiving active development. In the meantime, OID is still the only option when configuring SSO for the complete Fusion Middleware stack.
I don't cover building an OID Identity Management cluster here, but while on the topic of database connection resilience I thought I would cover how to set up OID, to highlight the differences in the connection method when compared to OAM.
As outlined in my previous article, OID connects to its back-end database using OCI, so Transparent Application Failover (TAF) must be used for connection resilience.
Prior to OID configuration using Oracle Universal Installer (OUI) you must configure a TAF-enabled service in the OID RAC database cluster using srvctl on a cluster member database server. The form of the srvctl add command is shown below (note that the failover method takes the value BASIC and the failover type the value SELECT):
srvctl add service \
 -d oid_database_name \
 -s oid_service_name \
 -r oid_preferred_rac_inst_1,oid_preferred_rac_inst_2 ...\
 -q aq_ha_notifications_flag \ (TRUE|FALSE)
 -m failover_method \ (BASIC)
 -e failover_type \ (SELECT)
 -w failover_delay_in_secs \ (5 recommended)
 -z failover_retries (5 or less)
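As a concrete illustration, here is a filled-in version of that command. The database name, service name and instance names are hypothetical, and the TAF settings follow the parameter discussion that comes next:

srvctl add service -d OIDDB -s oidsvc -r oiddb1,oiddb2 \
 -q TRUE -m BASIC -e SELECT -w 5 -z 5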
For those of you not familiar with RAC configurations, I will briefly discuss a few of these parameters in relation to OID.
- aq_ha_notifications: when set to TRUE this enables logging of database HA status events to the OID logs, which is the recommended setting. If you really do not wish to record these events you can set it to FALSE.
- failover_method: the BASIC option establishes the failed-over connection at failover time. Note that PRECONNECT is not supported for server-side TAF, at least for the Oracle 11g Database.
- failover_type: the SELECT option fails over the OID database session and resumes any read operations from the point at which the database connection failed. This is the preferred approach for OID; without a failover type set, TAF is disabled.
After configuring the service for OID you must start the service using srvctl start <service-name>, and Oracle recommend validating the service and checking the parameter configuration using srvctl status and srvctl config.
Assuming you have the TAF-enabled database service configured and have proceeded to OID configuration, you will be asked for the OID database connection details by the configuration tool. When using a RAC cluster, the connection details are specified to the OID configuration utility in the form shown below:
dbhost1-vip:1521:idmdb1^dbhost2-vip:1521:idmdb2@oiddb
The RAC instance targets are separated by a caret. This information is used to create the initial connection to the OID database for configuration and to generate connection information. The information is not used in this form for connections once OID is operational; instead OID uses tnsnames.ora. Note that the RAC SCAN address is not specified here: while using SCAN may well work, its use is not specified in the OID documentation.
After configuration of OID is complete, a barebones TNS database service configuration will be present in a tnsnames.ora file in the ORACLE_INSTANCE/config directory. This TNS service configuration on the OID hosts will not contain TAF parameters, but will be used to connect to the RAC service created using srvctl prior to installation. As a result of the server-side TAF policy being configured on the RAC database cluster, when OID connects it will adopt the TAF policy assigned through srvctl and failover will be enabled.
Coherence and Oracle Access Manager
The final topic I will discuss in this article is how Coherence is used within an OAM cluster. The Coherence distributed cache system has two primary functions in an Oracle Access Manager configuration, both of which aid the availability of the Oracle Access Manager infrastructure:
- Distribute configuration from the oamconsole application amongst clustered servers
- Maintain server-side user session information in a resilient and distributed cache
When a change is made to the OAM configuration in the oamconsole application, the change is written to the OAM data stores but is also distributed by Coherence to the OAM managed servers. This use of Coherence allows real-time updates of configuration, policies and session
management changes to the OAM services without the need for application or server restarts. Clearly, restarting services would be highly impractical with such a critical infrastructure system, so propagating these changes to the managed servers is a key requirement.
Coherence clusters also store and communicate OAM user session information. The OAM user session information includes details such as session lifetime, idle timeout, state and persistence options. Coherence servers communicate with other cluster members via UDP using encrypted communications secured with mutual SSL. Only servers which have a recognized SSL trust relationship are able to participate in these communications. This provides wire security, but session information is not encrypted by Coherence in memory, nor in the OAM database if one is used for persisted sessions from the distributed cache. In the event that a database is not used, sessions evicted from the cache are stored unencrypted on the file system. Additional measures to encrypt database or file system storage, such as Oracle Advanced Security, can be employed.
A copy of a user's OAM session is maintained in the local cache of the OAM cluster member server where the session is active. A secondary copy is maintained on other Coherence cluster servers in the distributed cache. In the event that the server holding the local copy is lost, Coherence uses a secondary copy in the distributed cache to replicate to another node. A new primary copy of the user session is reinstated to the local cache of a member server when a request is made to the OAM cluster. This allows the user session to continue uninterrupted. The diagram below illustrates the state of a single user session cached in an OAM server cluster configuration before and after a failure.
Figure 6 OAM Coherence and user session cache - Before failure
With Oracle Access Manager 11g Release 2 it is now possible to use client-side session management as an alternative to the default server-side Coherence option. In the client-side configuration the user's session information is stored only in a client-side cookie and the OAM server becomes stateless. This offers some advantages for large installations, such as reduced memory use and client traffic, but also has more limited functionality for enterprise configurations. Client-side session management is an option you can explore if you require high throughput for large user bases.
Figure 7 OAM Coherence and user session cache - After failure
QUMU &
WEBCENTER -
BRINGING
VIDEO TO THE
ENTERPRISE
Jon Chartrand
www.teaminformatics.com
www.linkedin.com/in/
jonchartrand
They say a picture is worth a thousand words. If this is true, then 30 frames per second of HD quality video must be worth volumes. We all know the rise in consumer video has been meteoric. With more than a billion unique users a month spending more than 4 billion hours watching videos, Google's YouTube is the undisputed champion of online video, and consumers continue to clamor for more. With video rated as 6 times more effective than print, businesses have been keen on leaping into the advertising and marketing side of the video trend, and with no small amount of success. What's been a much slower sell, however, is use of this medium for non-marketing activities in the enterprise, and the reasons for that can be summed up in three barriers: ownership, features, and integration.
When you compare the functionality of online video providers catering to consumers (*cough* YouTube *cough*) against the needs of a world-class enterprise, the immediate result is a glaring gap. Combined with the realization that online video users are expected to double to 1.5 billion by 2016 and yet, as of now, only 24% of national brands are using online video, this indicates there's a need to be filled by providing tailored, enterprise-class video services integrated into existing platforms. More specifically, a service that addresses those barriers to enterprise use...
This is where I happily introduce you to Qumu (koo-moo) and their product Video Control Center. Shake hands, you're going to be friends.
While Qumu has already started making inroads into the enterprise market through successful partnerships with the likes of Vodafone, Dow Chemical, and Safeway, my purpose here is to expand your insight when it comes to integrating enterprise video with Oracle's WebCenter platform. As an architect of WebCenter projects and installations, I've seen first-hand how some or all of the WebCenter applications can be used to great effect across all realms of the business. However, while the Digital Asset Management services of WebCenter are powerful, there is a functionality gap when it comes to true enterprise video services. This is my focus for today: introducing you to the functionality of a true enterprise video service and discussing how we can bring that power right into a WebCenter installation, no matter which pieces you're using.
Barrier 1: Owning Your Content
Let's start at the beginning with the first barrier: ownership. Several clients that began looking into using video internally have run up against the legal barrier of YouTube's Terms of Service. While there have been many analyses of YouTube's TOS language, the generally accepted short version is that by uploading to the service the uploader is granting YouTube rights to the content identical to those of the owner, without actually conferring ownership.
This means that, if it chose to, YouTube could appropriate all or part of any of your videos for use in their own promotional materials. While the same analyses have concluded that such appropriation is unlikely, and that the right extends only until the video is deleted, this has simply been a legal bridge too far. When it comes to advertising or marketing related materials, the quandary is minimal. For internal materials such as orientations or training, the potential ramifications are usually too much to leave to chance.
Qumu's TOS for Video Control Center has your business not only retaining full ownership but, unlike YouTube, it doesn't transfer use-rights to Qumu. This means your sensitive or internal materials remain that way. Beyond this, Qumu's Video Control Center API, through which we'll integrate features and functionality with the enterprise, includes capabilities for user authentication, access controls for permissions, and singular private codes to ensure confidentiality.
Barrier 2: Enterprise-Class Features
The first, and in my opinion most important, feature of any enterprise-class service is a robust SDK/API structure. Qumu provides access to APIs which allow management of video content (browse, search, organize, publish, and view), authentication, adjustable security, and customizable video players. With support for HTML5, Flash, and Silverlight players, social engagement via ratings, comments, and sharing, and enablement of live webcasts, the API is incredibly strong and allows you to fully integrate not only the interfaces but the functionality of Video Control Center into your infrastructure.
When dealing with video content, one thing we always want to be wary of is network saturation. Our corporate networks may not necessarily be delicate, but a wise administrator always keeps an eye on utilization. When it comes to delivering streaming content, Video Control Center has your back with multi-format transcoding and the Qumu Pathfinder. Basically, when you upload your raw video, VCC transcodes it into several different formats and bitrates that you configure. When requested to deliver the stream, Pathfinder automatically detects who the user is, what type of device is being used, and what network the user is connected to, and then serves the appropriate version of the video.
Taken a step further, Qumu not only offers their own Content Delivery Network, VideoNet Edge, but their product also works with pre-existing 3rd party CDNs such as Cisco, BlueCoat, Akamai, and AT&T. You even get access to real-time data so you can see bytes transferred, active and failed requests and other performance metrics. Essentially, Qumu has done everything short of handing your viewers a BluRay disc to make sure they experience the best quality and your network remains solid.
Before a user can view a video, they need to find the video. Qumu's Speech Search indexes the audio of each video and, using proprietary algorithms, parses out the phonemes (the individual sounds which make up a word) instead of the words. This results in higher accuracy than typical speech-to-text methods. Each video receives a phonetic index file which is searched against to return weighted results. This means returns are generated quickly and accurately, even for dialects within languages.
Finally, Qumu also offers features to help you create content as well as distribute it. Their Quick Capture tool allows for browser-based production of content spanning screen recording, webcam video, and audio. This means high-quality content can be produced quickly and easily. Even better, Qumu offers live broadcasting so events like quarterly meetings or executive town halls can be offered out to those unable to attend in person.
What we've covered here are, in my opinion, the level of features required of an enterprise-class video service. However, we don't want this set of features to live in a bubble, so the next question is: how do we get them to fit into the tapestry that is an existing WebCenter installation? The answer, thankfully, is easily.
Barrier 3: Integration into Existing Infrastructure
What distinguishes an enterprise from a small business is infrastructure: pieces put in place to ensure continuity of operation, consistency of processes, or scalability of action. For my clients, one of the primary pieces of infrastructure tends to be one or more components of Oracle's WebCenter platform. This means they're focused on owning and managing their content, building communities of structured collaboration, publishing dynamic and scalable web properties, or any combination of these. Each is inevitably a tall order for any business to achieve (and achieve well), but the WebCenter platform has proven itself again and again.
The goal of any addition to an enterprise architecture is not to simply slap in a new application or website but to integrate functionality into existing structures. We want to marry the old and the new to improve what's there and make advances in functionality, scalability, durability, and/or supportability. This is how we've approached the concept of bringing enterprise video capabilities in WebCenter from theory to practice. The idea is to approach the integrations from either a surface perspective or from a depth perspective. Let's look at examples of both.
From the surface perspective, we attack the concept by integrating Video Control Center functionality into surfaced applications and editorial interfaces. Bringing the Video Control Center interfaces into your existing applications is a simple matter: you can choose to rebuild the functionality via the API or just surface the interface via an iframe. Below, you can see a custom portal that has been developed and includes the native VCC My Programs interface.
From here, you have access to your entire video library, the ability to upload additional files as you create them, and even access to the Quick Capture tool to use as you need. From the editorial side, merging VCC videos into your content is just a matter of customizing the right editor with additional capabilities which access the API's retrieval and embedding functions. Below you can see how the simple addition of a video button allows an editor to choose and place a video within a blog post. (Go to http://goo.gl/MB2Kj0 to see it as-captured by the Quick Capture tool.)
The same integration can be added to the CK Editor used by both Sites and Content for their respective contribution actions. This makes the front-end integration virtually seamless for editors and contributors to enterprise web properties across any WebCenter application.
Now while this level of integration works for the editorial side of the equation, what it still doesn't accomplish is fulfilling the mission of true content management. Those MP4 files you're uploading to Video Control Center still exist in your ecosystem, possibly as unstructured content living on a shared drive somewhere, and that, my friends, is no way to live. You're losing the benefits that come with structured/managed content, which is why you (hopefully) have WebCenter Content in the first place. Let's picture how this integration would happen, utilizing the VCC API and WebCenter Content's powerful service-oriented architecture.
Instead of living unstructured, you check your raw MP4 into the repository like all the rest of your content. It's organized by metadata, secured by roles, revisioned as needed, and stored in a secure location. Now we make use of the Qumu APIs and automatically upload the raw file to your Video Control Center library. Metadata tags from Content provide Video Control Center with basic information such as the title and also what transcoding options were chosen. The API acknowledges the file and responds with its own vital information: transcoding data, pathfinder data, search index data, all of which is added to the metadata of the original content entry.
For those of you unfamiliar with primary and secondary files in WebCenter Content, think of the traditional roles for these as played by a software installer application (primary) and the readme file which accompanies it (secondary). Both would be checked into Content under the same ID but be available separately. Used non-traditionally, our service would swap the MP4 from primary to secondary and then create a stub or pointer file containing all the vital Video Control Center information about the video in its place. With the information in this pointer file, virtually any application in your enterprise could be quickly and easily directed to the stream of that video.
This means any other application in your ecosystem could use WebCenter Content's services to access and utilize that pointer. As you update versions of your video, the pointer file is updated as well and, as follows, all applications accessing it dynamically receive the updated information. WebCenter Content retains all the revision information, including the original MP4 files, so you can go back at any time. Additionally, WebCenter Portal and WebCenter Sites (via connector) each capably share from the WebCenter Content repository. This allows you to unify not only your standard structured content but your streaming video as well.
The deeper integration represents a true fulfillment of owning your content when it comes to enterprise video. When combined with the editorial/surface integrations, your enterprise will be truly meshed with the Video Control Center functionality and capability. This is the true end-project goal for an enterprise integration.
Hopefully through the course of this treatise you've seen an example of what an enterprise-class video service can and should offer, but also just how that service can be integrated into an enterprise through the Oracle WebCenter platform. I'm excited to be able to bring this pillar of service to our clients around the world and enable them to communicate better, both internally and externally, without missing a beat when it comes to integrating with their existing ecosystem.
ORACLE
DATA GUARD
12C:
NEW FEATURES
Mahir M Quluzade
www.mahir-quluzade.com
twitter.com/marzade
linkedin.com/in/mahirquluzade
It is my first article for OTech Magazine. In this article I'll touch upon some new features of Oracle Data Guard 12c.
Overview of Data Guard
Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions. A standby database is a copy of the primary (production) database; if the primary database becomes unavailable, Data Guard can switch any standby database to the production role.
Figure 1: Data Guard Configuration
A Data Guard configuration consists of one database in the primary role, one or more (at most 30) databases in the (physical, logical or snapshot) standby role, and the Data Guard services. The Redo Transport service transmits redo data from the primary database to the standby databases in the configuration. The redo data transmitted from the primary database is written to the standby redo log on the standby database. Apply services automatically apply the redo data on the standby database to maintain consistency with the primary database: Redo Apply runs on a physical standby and SQL Apply runs on a logical standby database. The role transition service initiates a role transition between the primary database and one standby database in the Data Guard configuration.
Data Guard configurations have three protection modes: Maximum Availability, Maximum Performance and Maximum Protection. All three protection modes require that specific redo transport options be used to send redo data to at least one standby database. Data Guard offers two choices of transport services: synchronous (SYNC) and asynchronous (ASYNC).
You can use SQL*Plus to manage primary and standby databases and their various interactions. Data Guard also offers a distributed management framework called the Data Guard broker, which automates and centralizes the creation, maintenance, and monitoring of a Data Guard configuration. You can also use the Oracle Enterprise Manager GUI interface to manage Data Guard configurations.
Administration privilege: SYSDG
Oracle Database 12c provides an Oracle Data Guard-specific administration privilege, SYSDG, to handle standard administration duties for Oracle Data Guard. The SYSDBA privilege also continues to work as in previous releases. The SYSDG privilege enables the startup, shutdown, alter database and similar operations [1]. In addition, the SYSDG privilege enables you to connect to the database even when it is not open, as the SYSDBA privilege does.
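As a minimal sketch of putting this to use (the user name and password are hypothetical and assume the user already exists; they are not from the article), you could grant the privilege and connect with it from SQL*Plus:

GRANT SYSDG TO dgadmin;
-- SYSDG allows connecting even when the database is not open:
CONNECT dgadmin/secret AS SYSDG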
USING CURRENT LOGFILE clause is deprecated
When preparing the primary database for standby database creation, best practice is to create Standby Redo Logs (SRLs) on the primary database. In this case your primary database will be ready to quickly transition to the standby role and begin receiving redo data. SRLs are important for the Redo Apply process when using Real-Time Apply, and also for the Maximum Protection and Maximum Availability protection modes. With Oracle Database 12c, creation of SRLs is an important step of Preparing the Primary Database for Standby Database Creation [2], because by default the Redo Apply process uses Real-Time Apply and the USING CURRENT LOGFILE clause is no longer required to start it. In other words, the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION command starts the apply process with Real-Time Apply.
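For reference, this is the statement as issued on the standby; the commented lines show the pre-12c form with the now-deprecated clause:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
-- Pre-12c, Real-Time Apply had to be requested explicitly:
-- ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
--   USING CURRENT LOGFILE DISCONNECT FROM SESSION;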
Create Standby Database on a Multitenant Container Database (CDB)
Oracle Database 12c introduces a new multitenant architecture [3] that makes it easy to deploy and manage database clouds.
Figure 2: Oracle Database 12c Multitenant Architecture
We can create a standby database (either physical or logical) only for a Multitenant Container Database (CDB), not for individual Pluggable Databases (PDBs), because in the multitenant architecture the CDB and all of its PDBs use the same control file and a single common instance. PDBs share a common control file and common background processes (for example, LGWR), so you cannot create a standby database for a single PDB.
Online move of data files
Prior to Oracle Database 12c, you could only move the location of an online data file if the database was down or not open, or by first taking the file offline. With Oracle Database 12c you can move data files online. An online move of a data file is performed with the ALTER DATABASE MOVE DATAFILE statement. It increases the availability of the database because it does not require the database to be shut down in order to move the location of an online data file. You can perform an online move data file operation independently on the primary and on the standby (either physical or logical): the standby is not affected when a data file is moved on the primary, and vice versa. There are, however, some restrictions [4] on online data file moves in databases where Data Guard configurations are set up.
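A minimal example of the statement (the source file path and the target ASM disk group are hypothetical, and the target assumes Oracle-managed files):

ALTER DATABASE MOVE DATAFILE '/u01/oradata/prmcdb/users01.dbf' TO '+DATA';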
SQL> select cdb, name, database_role, protection_mode from v$database;

CDB  NAME    DATABASE_ROLE  PROTECTION_MODE
---  ------  -------------  --------------------
YES  PRMCDB  PRIMARY        MAXIMUM AVAILABILITY

SQL> select value from v$parameter where name = 'log_archive_dest_2';

VALUE
--------------------------------------------------------------------------------
service=stbcdb, SYNC AFFIRM db_unique_name=stbcdb, valid_for=(online_logfile,all_roles)

SQL> alter system set log_archive_dest_2 =
  2  'service=stbcdb SYNC NOAFFIRM db_unique_name=stbcdb
  3  valid_for=(online_logfile,all_roles)';

SQL> select value from v$parameter where name = 'log_archive_dest_2';

VALUE
--------------------------------------------------------------------------------
service=stbcdb, SYNC NOAFFIRM db_unique_name=stbcdb, valid_for=(online_logfile,all_roles)
As you can see, in my case both the primary database (prmcdb) and the standby database (stbcdb) are Multitenant Container Databases (CDBs). At the same time, I have a broker-managed Data Guard configuration with the same database names.
If your Data Guard configuration is broker-managed, you must instead use the Data Guard broker command-line interface (DGMGRL) and change the LogXptMode property of the databases in the configuration to the FASTSYNC value. Connect to DGMGRL as SYSDG:
DGMGRL> show configuration
Configuration - dg
Protection Mode: MaxPerformance
Databases:
prmcdb - Primary database
stbcdb - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
DGMGRL> EDIT DATABASE prmcdb SET PROPERTY LogXptMode='FASTSYNC';
Property "logxptmode" updated
DGMGRL> EDIT DATABASE stbcdb SET PROPERTY LogXptMode='FASTSYNC';
Property "logxptmode" updated
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
Succeeded.
DGMGRL> show configuration
Configuration - dg
Protection Mode: MaxAvailability
Databases:
prmcdb - Primary database
stbcdb - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
DGMGRL> show database prmcdb LogXptMode
LogXptMode = fastsync
Note: You can also configure FAST SYNC for the Maximum Availability protection mode with Oracle Enterprise Manager Cloud Control 12c.
New syntax of Role Transition
Data Guard can change database roles within a configuration. The role transition services handle the transition of roles between the primary database and one of the standby databases in the Data Guard configuration. Oracle Data Guard supports the following role management services: switchover and failover.
A switchover is always a zero data loss operation, regardless of the transport method or protection mode used. A failover brings a standby online as the new primary during an unplanned outage of the original primary database.
Figure 3: Role Transition - Switchover
A failover does not require the standby database to be restarted in order to assume the primary role.
Figure 4: Role Transition - Failover
A manual failover is initiated by the DBA using the Oracle Enterprise Manager GUI, the Data Guard broker's command-line interface, or SQL*Plus. Optionally, Data Guard can perform automatic failover using Fast-Start Failover, but only in a broker-managed Data Guard configuration.
Oracle Database 12c introduces new SQL syntax for performing switchover and failover operations to a physical standby database: [6]

Pre-12c role transition syntax for physical standby databases:
  To switch over to a physical standby database, on the primary database:
    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
  On the physical standby database:
    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
  To fail over to a physical standby database:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

12c role transition syntax for physical standby databases:
  To switch over to a physical standby database:
    SQL> ALTER DATABASE SWITCHOVER TO target_db_name [FORCE] [VERIFY];
  To fail over to a physical standby database, one statement replaces the two previously required:
    SQL> ALTER DATABASE FAILOVER TO target_db_name;
As you can see, the new switchover statement has a VERIFY option. This option is very useful for validating the target standby database: VERIFY performs checks of many conditions required for switchover and writes alerts to the alert log file when it finds a problem. If there is no problem, you simply get "Database altered." after calling the switchover statement with the VERIFY option:

SQL> alter database switchover to stbcdb verify;
ERROR at line 1:
ORA-16470: Redo Apply is not running on switchover target

This means the standby database (stbcdb) is not ready to switch to the primary role, because the Redo Apply process is stopped on the target standby database and the last archived logs may not have been applied.
In broker-managed Data Guard configurations you can perform this check with the VALIDATE DATABASE command, as below: [7]
DGMGRL> validate database stbcdb
Database Role: Physical standby database
Primary Database: prmcdb
Ready for Switchover: No
Ready for Failover: Yes (Primary Running)
Flashback Database Status:
prmcdb: Off
stbcdb: Off
Standby Apply-Related Information:
Apply State: Not Running
Apply Lag: 59 seconds
Apply Delay: 0 minutes
Current Log File Groups Configuration:
Thread #  Online Redo Log Groups (prmcdb)  Standby Redo Log Groups (stbcdb)
1         3                                2
Future Log File Groups Configuration:
Thread #  Online Redo Log Groups (stbcdb)  Standby Redo Log Groups (prmcdb)
1         3                                2

The "Ready for Switchover" status shows whether or not the target standby database is ready to switch roles.

Far Sync: Zero Data Loss Protection at any Distance
In many Data Guard configurations, the primary database sends redo changes to its standby database(s) using asynchronous (ASYNC) transport. In Maximum Performance protection mode, when the primary fails we may experience some data loss. Prior to Oracle Database 12c, we used synchronous transport to achieve zero data loss. Sometimes that is not a viable option because of the impact on commit response times at the primary due to the network latency between the two databases.
Oracle Database 12c introduces the new FAR SYNC instance [8]. A far sync instance is an archive destination that accepts redo from the primary database and then forwards that redo to the other standby database(s) of the Oracle Data Guard configuration.

Figure 5: FAR SYNC Instance in Data Guard Configuration

A far sync instance manages a control file, receives redo into standby redo logs (SRLs), and archives those SRLs to local archived redo logs. It does not have user data files, cannot be opened for access, cannot run redo apply, and can never function in the primary role or be converted to any type of standby database.
Creating a far sync instance has the benefit of minimizing the impact on commit response times (due to the smaller network latency between the primary and the far sync instance) while providing higher protection guarantees: if the far sync instance was synchronized at the time of a primary database failure, the far sync instance coordinates a final redo send to the standby, after which a zero-data-loss failover can be performed.
How to configure a FAR SYNC instance?
I use my broker-managed Data Guard configuration and will create the far sync instance on the same server as the primary database.
DGMGRL> show configuration
Configuration - dg
Protection Mode: MaxPerformance
Databases:
prmcdb - Primary database
stbcdb - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
1. Create the necessary folders for the Far Sync instance.
mkdir -p /u01/app/oracle/oradata/prmfs
mkdir -p /u01/app/oracle/admin/prmfs/adump
2. Create an initialization parameter file and a control file for the Far Sync instance.
[oracle@oel62-prmdb-12c ~]$ export ORACLE_SID=prmcdb
[oracle@oel62-prmdb-12c ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Wed Feb 12 16:42:31 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> create pfile='/u01/prmfs_pfile.ora' from spfile;
File created.
SQL> alter database create far sync instance controlfile as '/u01/app/oracle/oradata/prmfs/control01.ctl';
Database altered.
Edit the initialization parameter file for the Far Sync instance. The important parameters are control_files, db_unique_name and log_file_name_convert. The db_unique_name must be different from the db_unique_name of the primary and standby database(s) in the Data Guard configuration. We are not setting the db_file_name_convert parameter, because a far sync instance does not use data files.
prmfs.__data_transfer_cache_size=0
prmfs.__db_cache_size=318767104
prmfs.__java_pool_size=4194304
prmfs.__large_pool_size=8388608
prmfs.__oracle_base='/u01/app/oracle' # ORACLE_BASE set from environment
prmfs.__pga_aggregate_target=281018368
prmfs.__sga_target=524288000
prmfs.__shared_io_pool_size=16777216
prmfs.__shared_pool_size=167772160
prmfs.__streams_pool_size=0
*.archive_lag_target=0
*.audit_file_dest='/u01/app/oracle/admin/prmfs/adump'
*.audit_trail='db'
*.compatible='12.1.0.0.0'
*.control_files='/u01/app/oracle/oradata/prmfs/control01.ctl'
*.log_file_name_convert='prmcdb','prmfs'
*.db_block_size=8192
*.db_domain=''
*.db_name='prmcdb'
*.db_unique_name='prmfs'
*.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
*.db_recovery_file_dest_size=4800m
*.dg_broker_start=TRUE
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=prmfsXDB)'
*.enable_pluggable_database=true
*.log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST,valid_for=(ALL_LOGFILES,ALL_ROLES)'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=4
*.log_archive_min_succeed_dest=1
*.log_archive_trace=0
*.memory_target=768m
*.open_cursors=300
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.standby_file_management='MANUAL'
*.undo_tablespace='UNDOTBS1'

Note: If standby redo logs (SRLs) were preconfigured on the primary database before creation of the standby database, the far sync instance will create its SRLs automatically; otherwise you must add SRLs to the far sync instance manually.

3. Start the Far Sync instance. A far sync instance is always opened in mount mode.
[oracle@oel62-prmdb-12c ~]$ export ORACLE_SID=prmfs
[oracle@oel62-prmdb-12c ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Wed Feb 12 17:10:24 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.
SQL> create spfile from pfile='/u01/prmfs_pfile.ora';
File created.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 801701888 bytes
Fixed Size 2293496 bytes
Variable Size 545259784 bytes
Database Buffers 251658240 bytes
Redo Buffers 2490368 bytes
Database mounted.
SQL> select name, db_unique_name, database_role from v$database;
NAME    DB_UNIQUE_NAME  DATABASE_ROLE
------- --------------- -------------
PRMCDB  prmfs           FAR SYNC
4. Create the password file for the Far Sync instance by copying the primary database password file. The password file must be the same for every database of the Data Guard configuration, including far sync instances.
[oracle@oel62-prmdb-12c ~]$ cd $ORACLE_HOME/dbs
[oracle@oel62-prmdb-12c dbs]$ cp orapwprmcdb orapwprmfs
5. Add the Far Sync instance network service name to tnsnames.ora on both sides (primary and standby) as below:
PRMFS =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)
(HOST = oel62-prmdb-12c.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prmfs)
)
)
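Assuming the listener is already up, you can quickly verify that the new alias resolves from both hosts (a simple sanity check with the standard tnsping utility):

[oracle@oel62-prmdb-12c ~]$ tnsping prmfs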
Now we can add the Far Sync instance (prmfs) to our Data Guard configuration with DGMGRL. Connect to DGMGRL as SYSDG.
DGMGRL> ADD FAR_SYNC prmfs AS CONNECT IDENTIFIER IS prmfs;
far sync instance "prmfs" added
DGMGRL> ENABLE FAR_SYNC prmfs;
Enabled.
DGMGRL> show configuration;
Configuration - dg
Protection Mode: MaxPerformance
Databases:
prmcdb - Primary database
stbcdb - Physical standby database
prmfs - Far Sync (inactive)
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
To configure the Far Sync instance in the Data Guard configuration, we must change the new RedoRoutes property of both the primary database and the Far Sync instance. This property changes the LOG_ARCHIVE_DEST_n initialization parameters for the configuration of the Far Sync instance. The RedoRoutes property is also used for configuring cascaded redo transport destinations. [9]
Note: If the RedoRoutes property has been configured with a redo transport mode, then the mode specified by that RedoRoutes value overrides the value of the LogXptMode property. The optional redo transport attribute specifies the redo transport mode to use to send redo to the associated destination. It can have one of three values: [ASYNC | SYNC | FASTSYNC]. If the redo transport attribute is not specified, then the redo transport mode used will be the one specified by the LogXptMode property for the destination.
DGMGRL> EDIT DATABASE prmcdb SET PROPERTY RedoRoutes='(LOCAL : prmfs SYNC)';
Property "RedoRoutes" updated
DGMGRL> EDIT FAR_SYNC prmfs SET PROPERTY RedoRoutes='(prmcdb : stbcdb ASYNC)';
Property "RedoRoutes" updated
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
Succeeded.
DGMGRL> show configuration
Configuration - dg
Protection Mode: MaxAvailability
Databases:
prmcdb - Primary database
prmfs - Far Sync
stbcdb - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS

We can see the changes to the Data Guard-related initialization parameters on the primary, the standby and the far sync instance after adding the Far Sync instance to the Data Guard configuration. The changes are shown below.
On the primary database:
SQL> select name, value from v$parameter where name in
  2  ('fal_server','log_archive_config','log_archive_dest_2');
NAME                VALUE
------------------  ---------------------------------------
log_archive_dest_2  service=prmfs, SYNC AFFIRM db_unique_name=prmfs valid_for=(online_logfile,all_roles)
fal_server
log_archive_config  dg_config=(prmfs,prmcdb,stbcdb)
On the Far Sync instance:
SQL> select name, value from v$parameter where name in
  2  ('fal_server','log_archive_config','log_archive_dest_2');
NAME                VALUE
------------------  ---------------------------------------
log_archive_dest_2  service=stbcdb, ASYNC NOAFFIRM db_unique_name=stbcdb valid_for=(standby_logfile,all_roles)
fal_server          prmcdb, stbcdb
log_archive_config  dg_config=(prmfs,prmcdb,stbcdb)

On the standby database:
SQL> select name, value from v$parameter where name in
  2  ('fal_server','log_archive_config','log_archive_dest_2');
NAME                VALUE
------------------  ---------------------------------------
log_archive_dest_2
fal_server          prmfs, prmcdb
log_archive_config  dg_config=(stbcdb,prmcdb,prmfs)
Note: If you are not using a broker-managed Data Guard configuration, you must change the appropriate parameters yourself with ALTER SYSTEM SET statements.
Real-Time Cascade
With Oracle Database 12c, a cascading standby database can cascade redo either in real-time (as it is being written to the standby redo log file) or in non-real-time (as complete standby redo log files are archived on the cascading standby).
Cascading standby databases have some restrictions: only physical standby databases can cascade redo, and non-real-time cascading is supported on destinations 1 through 10 only. Real-time cascading is supported on all destinations, but requires a license for the Oracle Active Data Guard option.
Figure 6: Real-Time Cascade
With Oracle Database 12c, the Data Guard broker is able to manage a cascade destination standby database. To configure real-time cascading with DGMGRL, we must change the RedoRoutes property: the ASYNC redo transport attribute must be explicitly specified for a cascaded destination to enable real-time cascading to that destination. [10]
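For illustration, a sketch of such a RedoRoutes setting (the terminal standby name stbcdb2 is hypothetical and not part of the configuration built in this article):

DGMGRL> EDIT DATABASE stbcdb SET PROPERTY RedoRoutes='(prmcdb : stbcdb2 ASYNC)';

Here stbcdb would cascade the redo it receives from prmcdb on to stbcdb2 in real-time, because ASYNC is specified explicitly.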
DMLs on Oracle Active Data Guard
Oracle Active Data Guard was introduced in Oracle Database 11g. With Oracle Active Data Guard we can open a standby database in READ ONLY mode. The main purpose of Oracle Active Data Guard standbys is to serve read-mostly reporting applications. But sometimes reporting applications need global temporary tables for storing temporary data, and prior to Oracle Database 12c global temporary tables could not be used on Oracle Active Data Guard standbys, which are read-only.
With Oracle Database 12c, global temporary tables can be used on Active Data Guard standbys. As of Oracle Database 12c, the temporary undo feature allows the undo for changes to a global temporary table to be stored in the temporary tablespace, as opposed to the undo tablespace. Undo stored in the temporary tablespace does not generate redo, thus enabling redo-less changes to global temporary tables, which in turn allows DML operations on global temporary tables on Oracle Active Data Guard standbys. When temporary undo is enabled on the primary database, undo for changes to a global temporary table is not logged in the redo, so the primary database generates less redo. Therefore, the amount of redo that Oracle Data Guard must ship to the standby is also reduced, reducing network bandwidth and storage consumption.
To enable temporary undo on the primary database, use the TEMP_UNDO_ENABLED initialization parameter. On an Oracle Active Data Guard standby, temporary undo is always enabled by default, so the TEMP_UNDO_ENABLED parameter has no effect there.
Note: The temporary undo feature requires that the database initialization parameter COMPATIBLE be set to 12.0.0 or higher. The temporary undo feature on Oracle Active Data Guard instances does not support temporary BLOBs or temporary CLOBs.
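For example (a minimal sketch; the parameter can also be set at session level with ALTER SESSION):

SQL> ALTER SYSTEM SET TEMP_UNDO_ENABLED = TRUE;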
Sequences in Active Data Guard
With Oracle Database 12c, in an Oracle Active Data Guard environment, if sequences are created on the primary database with the default CACHE and NOORDER options, then standby databases can use them. If a sequence was created with the ORDER or NOCACHE options, Oracle Active Data Guard cannot use it. For usable sequences, the primary database ensures that each range request from a standby database gets a range of sequence numbers that does not overlap with the ones previously allocated for both the primary and standby databases. This generates a unique stream of sequence numbers across the entire Oracle Data Guard configuration.
If you want standby databases to get the full range of sequence values, you must create the sequences with the SESSION option on the primary database. A session sequence is a special type of sequence that is specifically designed to be used with global temporary tables that have session visibility. Unlike the existing regular sequences (referred to as "global" sequences for the sake of comparison), a session sequence returns a unique range of sequence numbers only within a session, but not across sessions. Another difference is that session sequences are not persistent: if a session goes away, so does the state of the session sequences that were accessed during the session. [11]
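A minimal sketch of creating a session sequence (the sequence name is hypothetical):

SQL> CREATE SEQUENCE temp_order_seq SESSION;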
Database Rolling Upgrade using Active Data Guard
More and more companies are placing priority on reducing planned downtime and risk when introducing change to a mission-critical production environment. With Oracle Database 12c, database rolling upgrades provide two advantages: [12]
Minimizing downtime: Database upgrades and alterations of the physical structure of a database (other than changing the actual structure of a user table) can be implemented at the standby while production continues to run at the primary database. Once all changes have been validated, a switchover moves the production applications to the standby database. This means the original primary will be upgraded while users run on the new version. Total planned downtime is limited to the brief time required to switch production to the standby.
Minimizing risk: All changes are implemented and thoroughly tested at the standby database, with zero risk for users running on the production version. Oracle Real Application Testing also enables a real application workload to be captured on the production system and replayed on the standby for the most accurate possible test result.
The Rolling Upgrade Using Oracle Active Data Guard feature, new as of Oracle Database 12c, provides a streamlined method of performing rolling upgrades. It is implemented using the new DBMS_ROLLING PL/SQL package, which allows you to upgrade the database software in an Oracle Data Guard configuration in a rolling fashion.
Figure 7: Rolling Upgrade steps
Database rolling upgrades using Active Data Guard can be used for version upgrades starting with the first patchset of Oracle Database 12c. This means that the manual procedure included with Data Guard and
described earlier in this paper must still be used for rolling upgrades from Oracle Database 11g to Oracle Database 12c, or when upgrading from the initial Oracle Database 12c release to its first patchset.
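As a rough outline only (the procedure names below are from the 12.1 DBMS_ROLLING package; the parameter name is our reading of the documentation and should be verified against your release), the flow looks like this:

SQL> EXEC DBMS_ROLLING.INIT_PLAN(future_primary => 'stbcdb');
SQL> EXEC DBMS_ROLLING.BUILD_PLAN;
SQL> EXEC DBMS_ROLLING.START_PLAN;
-- upgrade the software of the transient standby here, then:
SQL> EXEC DBMS_ROLLING.SWITCHOVER;
SQL> EXEC DBMS_ROLLING.FINISH_PLAN;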
Conclusion
Oracle Data Guard is the disaster recovery solution for Oracle databases. With these new features, Oracle has expanded the capabilities of Oracle Data Guard: they increase the protection of your production database and broaden the use cases for Active Data Guard.
References:
Oracle Data Guard Broker 12c Release 1 (12.1)
Oracle Data Guard Concepts and Administration 12c Release 1 (12.1)
INTRODUCTION TO ORACLE TECHNOLOGY LICENSE AUDITING
Peter Lorenzen
www.cgi.dk
twitter.com/theheatDK
http://www.linkedin.com/in/peterlorenzendk
It is important to keep track of which Oracle software you have installed to ensure license compliance. Licenses can be expensive, and when you buy software from Oracle, you automatically accept that Oracle can drop by for an audit.
There are many reasons for a lack of compliance, but ultimately the reason does not matter. If somebody forgets to delete an installation, or adds an extra CPU to a server without buying extra licenses, you have a problem. Therefore, it is a good idea to do your own audit once in a while.
First, you need to understand Oracle's license vocabulary and components. Oracle License Management Services (LMS) is a good place to start (http://goo.gl/W4ICVM). These are the people that will visit you if Oracle decides to do an audit. They have created a Software Investment Guide (http://goo.gl/dBysjh) that will introduce you to Oracle licensing.
License agreements
When you buy an Oracle license, you sign a contract with Oracle. Part of this agreement is a document describing all the nitty-gritty license rules. These are the rules you have to be compliant with. Although Oracle continuously changes the rules, you only have to worry about the rules in your contract.
The rules are called the License Definitions and Rules (LDR). The LDR will often be included in an Oracle Master Agreement (OMA) or a Transactional Oracle Master Agreement (TOMA). An OMA, previously known as an OLSA, can run for several years, and in that case the LDR will be in a separate document.
If you are an Oracle Partner, you can find the current LDR here (EMEA - http://goo.gl/lbcC0A).
The LDR does not contain all licensing details. This page (http://goo.gl/86f3je) lists some documents that are referenced from the OMA/LDR. An example is the Oracle Processor Core Factor Table (http://goo.gl/l7D8hn).
License documentation
If you are an Oracle Partner, you have access to other documents that can help you with regard to licensing. The Oracle Technology Global Price List Supplement (http://goo.gl/LD8eSN) will tell you exactly which products/components are included with a specific license, and which products must be licensed separately as a prerequisite.
The Oracle Technology Global Price List (http://goo.gl/LD8eSN) contains footnotes called Oracle Technology Notes. They contain the same rules as the LDR, but are organized differently and are easier to read.
Oracle has some much-debated rules regarding server virtualization/partitioning and licensing. Partitioning is not mentioned in the LDR, so you
need licenses for all the hardware where the Oracle software is installed and/or running. This means that contractually you are bound to get licenses for the full hypervisor. Oracle has, however, created a policy document that describes some situations where you do not need to license the hypervisor, but only the guest machine/server. You can read about partitioning here: http://goo.gl/SG45Mv. This document is available to the general public.
Track your software assets
You need to document your licenses somewhere. Oracle has a spreadsheet you can use (http://goo.gl/XXpa3D). It is a bit rudimentary, but it is a start if you do not already have a way to document this.
Oracle offers both term and perpetual licenses. If you use a term license, remember to add the start/end dates to the documentation.
You can find much more comprehensive documentation examples via Google. Here is one: http://goo.gl/giSgUR.
License Metrics
Oracle uses many different license metrics. For technology licenses, the most used metrics are Processor and Named User Plus (NUP). NUPs are users authorized to use the software, both human and non-human.
Hardware assets must be consolidated with software assets. If an extra
CPU has been added to a server, an extra Processor license or extra NUP licenses are probably needed. NUP licenses often have a minimum number of NUPs per processor. For example, the Oracle Database Enterprise Edition has a minimum of 25 users per processor. If you add an extra CPU with 4 cores and the CPU has a core factor of 0.5, you need a minimum of 50 extra NUPs.
As a side note, please note that Oracle ignores hyper-threading and only counts physical cores.
Auditing the installed software
There is no easy way to audit the installed software. Oracle does not supply any tools for this, so you have to do it manually, which can be time consuming.
You can buy third-party tools that gather the needed data from the servers, but I have never used any of them. Please note that LMS has only verified some of the vendors to produce the right data.
Now let's say you want to start by looking at your database servers. Most of Oracle's products use the Oracle Universal Installer (OUI), and this will by default create a Central Inventory of all the software that has been installed with the OUI. You can see how to locate the inventory here: http://goo.gl/XMgOYg.
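On Linux, for example, a quick way is to follow the oraInst.loc pointer file (the inventory path below is just an example and will differ per server):

cat /etc/oraInst.loc
# inventory_loc=/u01/app/oraInventory
grep "<HOME " /u01/app/oraInventory/ContentsXML/inventory.xml

The inventory.xml file lists each registered Oracle Home and its location.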
The inventory will lead you to the Oracle Homes on the server. Once they are located, you can use the opatch tool to list the products installed in an Oracle Home.
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
cd $ORACLE_HOME
OPatch/opatch lsinventory
...
Oracle Database 12c 12.1.0.1.0
...
There are 1 products installed in this Oracle Home.
...
Next we need to find out which edition this 12c database installation is. If you log in to a database via SQL*Plus you will see a banner. Here are two examples:
Enterprise Edition:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
Standard Edition or Standard Edition One:
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
Unfortunately, Standard Edition and Standard Edition One are just license options and use the same software, making it difficult to know which is installed. You can see it in the OUI install log, if it has not been deleted. For more information check the MOS note titled "How to find if the Database Installed is Standard One Edition?" (Doc ID 1341744.1).
You can buy options and management packs for the database (http://goo.gl/ee6CMn). To figure out which options and management packs you are using, Oracle has created two scripts: option_usage.sql and used_options_details.sql. They can be downloaded from MOS - "Database Options/Management Packs Usage Reporting for Oracle Database 11g Release 2" (Doc ID 1317265.1). The scripts are created for 11g but they work just as well for 12c. The 12c documentation also mentions them (http://goo.gl/JUHWXR).
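The MOS scripts are the authoritative method, but for a quick manual look you can query the feature usage view they build on (a sketch; interpret the results with care before drawing licensing conclusions):

SQL> SELECT name, version, detected_usages
  2  FROM dba_feature_usage_statistics
  3  WHERE detected_usages > 0
  4  ORDER BY name;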
The Oracle database has a built-in view that can help you if you use NUP licensing. Have a look at v$license (http://goo.gl/JmgTyv).
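For example (a minimal sketch):

SQL> SELECT sessions_current, sessions_highwater FROM v$license;

The session high-water mark can support, but not replace, a count of named users.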
Moving on to Fusion Middleware (FMW) software: here it is also a good idea to start by looking in the OUI inventory, and then find the products using opatch.
export ORACLE_HOME=/u01/app/oracle/product/forms112/fr_binaries
cd $ORACLE_HOME
OPatch/opatch lsinventory
...
Oracle Forms and Reports 11g 11.1.2.2.0
...
There are 1 products installed in this Oracle Home.
...
Not all products use the OUI. An example is the WebLogic Server before version 12.1.2. The older releases of the WebLogic Server maintained a beahomelist file that lists all the directories where WebLogic is installed.
beahomelist location:
(UNIX) /home/oracle/bea/beahomelist
(Windows) C:\bea\beahomelist
The WebLogic Server can be licensed as the three products below.

Product             Comments
Standard Edition    Does not include clustering. Includes: TopLink, ADF, Web Tier, Java SE.
Enterprise Edition  Includes: TopLink, ADF, Web Tier, Java SE Advanced, Virtual Assembly Builder, WebLogic Software Kit for ODA.
Suite               Prerequisite for options like OSB, SOA Suite etc. Includes: WebLogic Server Enterprise Edition, Java SE Suite, iAS Enterprise Edition, Coherence Enterprise Edition. Restricted-use: Management Pack for Oracle Coherence.

Coherence is distributed with the WebLogic installer, but you need a WebLogic Suite license or a specific Coherence license to be allowed to use it. To check if it is installed:
cd /u01/app/oracle/product/adf12c/oui/bin
./viewInventory.sh -jreLoc /u01/app/oracle/product/java_current/jre \
-oracle_home /u01/app/oracle/product/adf12c -output_format report \
| grep oracle.coherence
Component: oracle.coherence 12.1.2.0.0

WebLogic Server Standard Edition does not include clustering. To check if clustering is used, you can use the Admin Console or have a look in DOMAIN_HOME/config/config.xml.
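As an illustrative sketch (assuming DOMAIN_HOME is set; the exact element layout of config.xml can vary by release), you could look for cluster definitions like this:

grep -i "<cluster" $DOMAIN_HOME/config/config.xml

Any match indicates that a cluster is defined in the domain.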
WebLogic Suite is the most complete WebLogic license. You need it if you want to buy options like OSB, SOA Suite etc.
Besides the three standard WebLogic Server licenses, there is also a restricted-use license called WebLogic Server Basic (http://goo.gl/pikP1k). If, for example, you buy a Forms and Reports license, it includes a restricted-use license for WebLogic. There are a lot of things you are not allowed to do when you use WebLogic Server Basic; the restrictions take up around 7 pages in the documentation. Oracle has created a WLST script that can verify most of these rules. You can get it from the MOS note "WebLogic Server Basic License Feature Usage Measurement Script" (Doc ID 885587.1). It is very simple to use.
The tip of the iceberg
This article has just scratched the surface of licensing and auditing. It is a complicated but important subject. You can avoid a lot of trouble by being proactive and knowing how Oracle licensing works, and exactly what software is installed on which hardware.
OTech Magazine
OTech Magazine is an independent magazine for Oracle professionals. OTech Magazine's goal is to offer a clear perspective on Oracle technologies and the way they are put into action. OTech Magazine publishes news stories, credible rumors and how-tos covering a variety of topics. As a trusted technology magazine, OTech Magazine provides opinion and analysis on the news in addition to the facts.
OTech Magazine is a trusted source for news, information and analysis about Oracle and its products. Our readership is made up of professionals who work with Oracle and Oracle-related technologies on a daily basis. In addition we cover topics relevant to niches like software architects, developers, designers and others.
OTech Magazine's writers are considered the top of the Oracle professionals in the world. Only selected and high-quality articles will make the magazine. Our editors are trusted worldwide for their knowledge in the Oracle field.
OTech Magazine will be published four times a year, once every season. In the fast, internet-driven world it's hard to keep track of what's important and what's not. OTech Magazine will help the Oracle professional keep focus.
OTech Magazine will always be available free of charge. Therefore the digital edition of the magazine will be published on the web.
OTech Magazine is an initiative of Douwe Pieter van den Bos. Please note our terms and our privacy policy at www.otechmag.com.
Independence
OTech Magazine is an independent magazine. We are not affiliated, associated, authorized, endorsed by, or in any way officially connected with The Oracle Corporation or any of its subsidiaries or its affiliates. The official Oracle web site is available at www.oracle.com. All Oracle software, logos etc. are registered trademarks of the Oracle Corporation. All other company and product names are trademarks or registered trademarks of their respective companies.
In other words: we are not Oracle, Oracle is Oracle. We are OTech Magazine.
Authors
Why would you like to be published in OTech Magazine?
- Credibility. OTech Magazine only publishes stories of the best-of-the-best of the Oracle technology professionals. Therefore, if you publish with us, you are the best-of-the-best.
- Quality. Only selected articles make it to OTech Magazine. Therefore, your article must be of high quality.
- Reach. Our readers are highly interested in the opinion of the best Oracle professionals in the world. And our readers are all around the world. They will appreciate your views.
OTech Magazine is always looking for the best of the best of the Oracle technology professionals to write articles. Because we only want to offer high-quality information, background stories, best practices or how-tos to our readers, we also need the best of the best. Do you want to be part of the select few who write for OTech Magazine? Review our writers' guidelines and submit a proposal today at www.otechmag.com.
Advertisement
In this first issue of OTech Magazine there are no advertisements placed. For now, this was solely a hobby project. In the future, to make sure the digital edition of OTech Magazine will still be available free of charge, we will add advertisements. Are you willing to participate with us? Contact us on www.otechmag.com or +31614914343.
Intellectual Property
OTech Magazine and otechmag.com are trademarks that you may not use without written permission of OTech Magazine.
The contents of otechmag.com and each issue of OTech Magazine, including all text and photography, are the intellectual property of OTech Magazine.
You may retrieve and display content from this website on a computer screen, print individual pages on paper (but not photocopy them), and store such pages in electronic form on disk (but not on any server or other storage device connected to a network) for your own personal, non-commercial use. You may not make commercial or other unauthorized use, by publication, distribution, or performance without the permission of OTech Magazine. To do so without permission is a violation of copyright law.
All content is the sole responsibility of the authors. This includes all text and images. Although OTech Magazine does its best to prevent copyright violations, we cannot be held responsible for infringement of any rights whatsoever. The opinions stated by authors are their own and cannot be related in any way to OTech Magazine.
Programs and Code Samples
OTech Magazine and otechmag.com could contain technical inaccuracies or typographical errors. Also, illustrations contained herein may show prototype equipment; your system configuration may differ slightly. The website and magazine contain small programs and code samples that are furnished as simple examples to provide an illustration. These examples have not been thoroughly tested under all conditions. otechmag.com, therefore, cannot guarantee or imply reliability, serviceability or function of these programs and code samples. All programs and code samples contained herein are provided to you AS IS. IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT AND FITNESS FOR A PARTICULAR PURPOSE ARE EXPRESSLY DISCLAIMED.
OTECH MAGAZINE
See you in
the summer...