
Administering Oracle Financials 101

by Anthony Pennington and Brian Crowley
Pennacle Consulting, LLC

Introduction


This presentation will be valuable to the beginning or intermediate Oracle Database Administrator or Application System Administrator who is responsible for maintaining an Oracle Financials environment. The intent is to give a comprehensive view of the Oracle Applications environment, explain away some of the mystery and magic, and convey the makeup of an Oracle "Fin-Apps" DBA. The presentation covers the basic Oracle Financials environment setup, tools and utilities, and operational background. Guidelines and tips are emphasized for managing the concurrent managers, performance, and architecture, plus some installation, upgrade, and customization issues. The Oracle Applications product versions available today include both character and GUI modes; the GUI modes are Smart Client and NCA. For the purposes of this presentation, it is assumed that most of the audience is familiar with the character-based applications and is looking to upgrade to NCA. Many of these concepts and ideas are applicable to any of these environments; hence, unless otherwise stated, they are presented from a character-based installation point of view.

What is an Oracle "Fin-Apps" DBA?


The ideal profile for this position is an experienced Oracle DBA who will now learn the tools and architecture of an Oracle Financials environment. The additional skills include installing and maintaining the Oracle Applications software, familiarity with the adadmin tools and utilities, managing the concurrent managers, and performing tasks associated with the SYSADMIN responsibility. Someone who knows, or can learn, their particular industry's business and workflow will be even more desirable, because this enables them to assist the end-users and developers more productively in troubleshooting problems, customizing the applications, and correcting data problems. The Fin-Apps DBA should also possess good people skills. This person often becomes the focal point of your Oracle Financials environment -- especially if this person is handling the application SYSADMIN responsibility. They must interface with the IT operations staff, developers, and the end-users in either technical or functional discussions. Unless you have a mature help-desk facility, your Oracle Fin-Apps DBA often becomes the first line of defense in fielding problem and request calls. Make sure they have the personality to handle this.

What is an Oracle Applications System Administrator?


While this question could be a paper all by itself, simply put, the applications System Administrator (not to be confused with a Unix or NT system administrator) is the account manager for the Oracle Applications. The Oracle Fin-Apps DBA will need to log in as SYSADMIN, a highly privileged responsibility that is necessary when administering the applications. The SYSADMIN responsibility is very much analogous to an Oracle database administrator: an Oracle DBA can set up users (schemas) within the database and control their passwords, quotas, access, dbms_jobs, etc., while the SYSADMIN controls accounts, access, security, and scheduling of jobs within the applications (not the database) via user and password setup, responsibility assignments, and concurrent manager and job setups. The Fin-Apps DBA needs to have these SYSADMIN skills. In many shops, the role of SYSADMIN is dedicated to another person or group. Here's where the sticky political part can come into play, because this role has both technical and functional aspects. The age-old debate is whether the SYSADMIN role should be filled by an IT person for its technical aspects (such as printer control setups) or by an end-user (usually a business content area manager) who knows the business applications, needs, and job assignments. After all, the application is for the end-users' use. Either way, the position needs to integrate tightly between IT and users. Policies and procedures can help immensely in these gray areas.


Application Software and Architecture


The installation of the Oracle Applications software is a very tedious process which is out of scope for this presentation. Oracle provides several detailed instruction manuals (which you should read, regardless) and utilities to facilitate the software installation. We will touch upon some of those utilities later. First, a brief overview of the software directory and file structures, so that we can drill down into the areas you need to become familiar with.

A Look at $APPL_TOP
The Oracle Applications software is installed in one common area in a subdirectory tree fashion. The top directory for this repository is defined as $APPL_TOP, and all other subdirectories key off $APPL_TOP. For each product module that you have installed, there will be another $<product>_TOP variable defined. For example, if you have Accounts Payable, General Ledger, and Purchasing, then you will have a $AP_TOP, $GL_TOP, and $PO_TOP. Underneath these directories, you will notice a common directory tree including /bin, /forms, /lib, /sql, /install, /srw, etc. Each directory has its own unique use and is similar for every product top. You should become familiar with these standards and the contents of these directories. Note that there exist other "tops" not associated with the product modules that your site may use. These are either "shared" products or supporting products that Oracle installs for you. We will explain a few of them in more detail.

Note the file called APPLSYS.env under $APPL_TOP. This is one of the environment files that are sourced in to define the environment variables and directories necessary for your applications. It defines $APPL_TOP, your product tops, and other variables. We will discuss some of the "output" directory variables in a moment.

A couple of other directories worth noting under $APPL_TOP are $APPL_TOP/install and $APPL_TOP/patch. The /install subdirectory holds many of the utilities used for the initial installation. It also has a /log subdirectory where installation, patch, and other adadmin utility log files go. You will visit this directory often. Typically, you will have $APPL_TOP/patch for holding patches that Oracle Support may send you (will send you). You can put your patches in another directory, but this is the default. You will become very familiar with patches, too.
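If you want to poke around, a quick way to orient yourself is to source the environment file and list a product top. A minimal sketch (your login typically sources this file already; the file name and tops shown are the ones described above):

#orient.sh
. $APPL_TOP/APPLSYS.env
#confirm the product tops resolved
echo "APPL_TOP = $APPL_TOP"
echo "GL_TOP   = $GL_TOP"
#every product top shares the same skeleton: bin, forms, lib, sql, srw, etc.
ls $GL_TOP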

A Look at $FND_TOP
Another top that you will find is $FND_TOP. When Oracle started writing the financial applications software, they developed a core set of utilities and objects called the Application Object Library (AOL). From these objects, they wrote the "foundation" for the Oracle Applications, referred to as Application Foundation. This foundation code is stored under $FND_TOP. As product modules developed, they hooked into the Application Foundation infrastructure utilizing the AOL objects. Examples of these AOL objects and FND products include the concurrent managers, quick picks, zooms, etc. Notice that $FND_TOP has a directory tree very similar to the other product modules. You may have noticed that when you fire up the applications, you call the script file "found" (short for Application Foundation), which executes the $FND_TOP/bin/aiap executable and passes the username/password stored in the variable $GWYUID (typically applsyspub/pub) to get you to your initial login screen. See, it's not magic -- just code. Many of the topics that we will cover, especially the concurrent managers, are found in $FND_TOP.

A Look at $AD_TOP
Most of the other utilities used by the Oracle Fin-Apps DBA, and which we will discuss in detail, are found in $AD_TOP. Of particular interest are the /bin and /sql subdirectories. You will find the following executables in $AD_TOP/bin: adaimgr (autoinstall, for the installation or upgrade of the software), adpatch (for administering patches), and adadmin (a menu-driven utility for maintaining both the Oracle Applications database and software). Many of these utilities in turn call other $AD_TOP utilities.

Output Directories of the Oracle Applications


There are several directories where output is written. These directories require routine cleanup and maintenance. The jobs that are run from the concurrent managers create both log and output files (reports). The location of these files depends upon the variables $APPLCSF, $APPLLOG, $APPLOUT, and $*TMP. The $APPLLOG and $APPLOUT variables are typically set to "/log" and "/out", respectively, but they can be set to other values. The location of these subdirectories depends upon the value of $APPLCSF. If $APPLCSF is set to a directory, then the output for all of the product modules (AR, AP, PO, etc.) goes to a common "log" or "out" directory. The typical setting, though, is to not have $APPLCSF set to any value. When this is true, the output for each product module defaults to the specified "log" and "out" directories under the corresponding product top. For example, concurrent manager jobs run from an Accounts Receivable responsibility would write their logs and data output to $AR_TOP/log and $AR_TOP/out.

I would advise you to not set $APPLCSF. This way, you can more easily find and categorize your output. There is generally a lot of output anyway, and you can stress the inodes by having too many files in one directory. Be sure that your temporary directories, such as $APPLTMP or $REPTMP, get cleaned up and don't fill up a file system. Note that any SYSADMIN responsibility output will go in $FND_TOP/log or $FND_TOP/out.
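To make the rule concrete, here is the resolution logic for an Accounts Receivable request, sketched in shell (variable names as described above):

#where does an AR request's log file land?
if [ -n "$APPLCSF" ] ; then
  AR_LOGDIR=$APPLCSF/$APPLLOG     #common directory for all products
else
  AR_LOGDIR=$AR_TOP/$APPLLOG      #per-product directory (my preference)
fi
echo "AR logs: $AR_LOGDIR"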

The Concurrent Managers


One of the most attractive features of the Oracle Applications software is the concurrent manager. Basically, this is a batch job scheduling system for running reports and background tasks. From the concurrent managers you can manage queues, workshifts, access and security, job priorities, job logs and output, job automation, and assorted job compatibility (or incompatibility) rules. This is one of the key areas that can consume much of the Oracle Fin-Apps DBA's/SYSADMIN's time. For more complete instructions on how to set up and use the concurrent managers and the jobs that they run, refer to the AOL Reference Manual. For the purposes of this presentation, we will discuss the major concepts in setting up the managers, performance issues, and other general tips and suggestions.

Basic Tuning of the Concurrent Manager


We go back to the age-old concepts of tuning and load balancing for OLTP versus batch processing. OLTP (on-line transaction processing, or "real-time" computing) is where end-users do their work on the screen and need quick, real-time results -- especially if they are servicing clients in person or on the phone. These requests need to be completed as soon as possible so as not to disrupt the business and revenue flow! An example of these transactions may be your Order Entry people in customer services. Note: just because an on-line transaction submits a job to the concurrent manager (or the "background"), this does not necessarily qualify it as a "batch-processing" job.

On the other hand, batch-type jobs can afford to be completed later than when they were initially entered. They usually can be grouped together (batched) and processed outside of normal business hours. Examples of this type of report include financial reports, summary reports, end-of-day processing, etc. Some jobs assist the on-line transaction processing but can still be batched (like a sales forecast or open ticket report); these need to be completed prior to the day's activities, rather than after. You may be in a 7x24 shop where OLTP is always a priority, in which case balancing your OLTP versus batch jobs may be a little more complicated. Still, your objective is to reduce the impact of the non-critical, resource-hungry jobs on the OLTP transactions. The batch jobs will just have to work when OLTP demands drop. You do this by managing queues, workshifts, priorities, incompatibility rules, and... end-user training and awareness. This end-user awareness and training is perhaps one of the most neglected areas, yet it is so important.

Determining which jobs can truly be classified as OLTP (real-time critical) versus batch is going to require interviews with your end-users and/or business systems analysts. One of the most common problems that I have observed is that sites pretty much leave the standard and default queues created by the installation process. Then the jobs go into the queue and operate on a first-come, first-served basis. This will not give you the results you need.


Tips and Techniques for Concurrent Manager Management


The right answers will depend upon the results of your interviews and some trial-and-error, but here are some basic ideas that some sites use. Create queues based upon the duration of a job, such as FAST versus SLOW. The FAST queue usually handles jobs that complete within a minute, and its concurrency (the number of jobs that can run concurrently in the same queue) and priority are high; the opposite criteria hold for the SLOW queue. Another technique is to set up OLTP versus BATCH queues, where the workshift for OLTP covers prime-time business hours and BATCH covers non-business hours. Setting up queues by workshift, functionality, and department are more examples, but certainly not all of your options. I tend to favor a combination of OLTP versus BATCH functionality. By combining queues and their workshifts, concurrency, and incompatibility rules, you should strive to get the maximum throughput possible for OLTP, and convince users that batch jobs which are needed for next-day activities should be moved to off-hours processing and set with lower priorities.
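Before you redesign anything, see what is already defined. A minimal sketch (it assumes your APPS password is in the APPS_PWD variable; the columns are the same ones used by the cleanup queries later in this paper):

#listqueues.sh
sqlplus -s apps/$APPS_PWD <<'EOF'
SELECT concurrent_queue_name, max_processes, running_processes
FROM fnd_concurrent_queues
ORDER BY concurrent_queue_name;
EOF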

Starting and Stopping the Concurrent Managers


While you can start the concurrent managers from within the applications, I dislike a couple of the defaults: 1) the default pmon time is 60 seconds -- my clients usually need this to be sooner, like 30, 20, or 10 seconds; and 2) I do not like the default name of std.mgr for the internal manager -- I prefer that it carry the name of the instance. You can override these defaults by scripting the start and stop commands with different parameters. Besides, it is very useful to start or shut down the concurrent managers from the command line -- especially in the .rc Unix scripts.

Example script for starting the managers:

#strmgr.sh
date
echo "Executing strmgr.sh script ..."
echo "Starting Concurrent Managers ..."
startmgr sysmgr="apps/fnd" mgrname=prd sleep=20
#exit

Actually, I would advise you to use symbolic parameters for the APPS password instead of hard coding it. The "sleep" parameter tells the internal manager to search fnd_requests every 20 seconds for new requests, rather than the 60-second default. The internal log file will be called prd.mgr (typically found in $FND_TOP/log). There are other parameters available, too, such as the debug option. Consult your manual for more details.

Example script for stopping the managers:

#stopmgr.sh
date
echo 'Stopping Concurrent Managers ...'
#The following is one command line
$FND_TOP/bin/CONCSUB apps/fnd SYSADMIN 'System Administrator' SYSADMIN WAIT=Y CONCURRENT FND DEACTIVATE
#End of command line
ps -ef | grep LIBR
date
echo 'Concurrent Managers Stopped'
exit 0

Notice that stopmgr.sh does not run a command-line executable to directly stop the managers. Instead, it submits a concurrent job via the CONCSUB utility. The WAIT=Y parameter tells the job not to proceed until all the managers have shut down, before eventually exiting the script.


Debugging Concurrent Manager Errors


Look for errors in the logs. The internal manager's log file will usually be in $FND_TOP/log (see the previous discussion on defining log and out directories), defaulting to std.mgr or named as you specified in the command line parameter mgrname=<name>. The internal manager monitors the other queue managers. You will see the startup, shutdown, print requests, and other information in this log. You may also find errors explaining why the internal or subsequent slave managers could not start. All of the other managers have dedicated logs, too. They are named with a "w" or "t" followed by an identity number, such as w139763.mgr. Each queue will have one of these log files. You can see the individual jobs and associated request ids in each of these files, and review error messages, too. Occasionally, a job will fail and take the manager down with it. The internal manager will sense that the queue is down and restart it on the next pmon cycle. We will discuss purging of the fnd_concurrent_requests table and associated log and output files later, but I would make this suggestion: purge these manager files frequently (daily) so that you can easily perform a search on "error" when trying to debug concurrent manager errors.
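When hunting for that needle, let the shell do the work. For example (a sketch, assuming the defaults above put the manager logs in $FND_TOP/log):

#chkmgrlogs.sh
#case-insensitive search of all manager log files for errors
grep -i error $FND_TOP/log/*.mgr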

Kick Starting Dead Managers


Sometimes you may encounter difficulty in starting either the internal concurrent manager or the other slave queues. Consult the log files for error messages and take appropriate action to resolve the problem. If you are unsuccessful, then enter the "verify" command in the concurrent manager screen to force the internal manager to read and initiate the target number of queues specified. If that doesn't work, try to deactivate or terminate the managers, then restart them. If you have trouble bringing them down, you may have to perform a "kill" on the background process. You can identify the managers with "ps -ef|grep LIBR" command. If you still encounter problems, make sure that there aren't any processes still tied to the managers. If you find any, kill them. If you still encounter problems, then the statuses are probably improperly set in the tables. For example: You may see the error in the internal std.mgr log file stating that it was unable to start because it has already started! You have verified that there are no "FNDLIBR" background processes. The problem is that the tables have improper statuses. You will have to clean up these tables. Here are some queries. I put them into scripts and keep them handy for when the time arises because the statuses are not that easy to remember.

Reset the concurrent queues:

UPDATE fnd_concurrent_queues SET running_processes=0, max_processes=0;

Remove any completed jobs (optional):

DELETE FROM fnd_concurrent_requests WHERE conc_process_status_code='C';

Set jobs with a status of Terminated to Completed with Error (optional):

UPDATE fnd_concurrent_requests SET status_code='E', phase_code='C' WHERE status_code='T';

Delete any current processes:

DELETE FROM fnd_concurrent_processes;

I have listed these in descending order of frequency that I have had to use them. There is a paper available from Oracle Support which describes these and more.

Purging Concurrent Manager Logs and Output


The concurrent managers create several table entries and file output in the /log and /out directories. You should purge these frequently to reduce excessive table growth and fragmentation, and to avoid performance degradation of the concurrent manager processes. You should also reclaim the space consumed on your disks by old log and report files; this also relieves stress on the inodes from a large number of files. Under SYSADMIN, set up a recurring report called "Purge Concurrent Request and/or Manager Data". There are several parameters, but I typically set up two jobs: 1) one job for "Manager" data -- the concurrent manager log files typically found in $FND_TOP/log -- with a daily frequency, purging down to one day; and 2) another job for the "Request" data -- everything outside of the SYSADMIN responsibility, such as AR, PO, GL, etc. -- where I typically keep only one week's worth of data on the system. Your needs and capacity may vary, so set these accordingly.

This purge process does two things: 1) deletes rows from the fnd_concurrent_requests tables, and 2) deletes both the log and output files from the associated $XX_TOP/log or /out directories. If for any reason the file delete did not complete, but the table data was purged, then you will need to manually purge the output files from the /log and /out directories. This can happen if the privileges were incorrectly set, or you replicated a copy of the production database to your development environment, or the file system was not mounted, etc.
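If you do have to clean up by hand, something along these lines works (a sketch; the seven-day retention and the product list are assumptions -- match them to your purge settings):

#prunelogs.sh
#remove request logs and output older than seven days
for dir in $AR_TOP $GL_TOP $PO_TOP
do
  find $dir/log $dir/out -type f -mtime +7 -exec rm -f {} \;
done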

Purge Signon Audit Data


This is another purge report, like the one above, only it purges the signon audit data, which records every login to the Oracle Applications. Set its frequency and retention equal to those of your request data purge.

Performance Tuning of Concurrent Manager Jobs


What has been described thus far is balancing job throughput. Yet the jobs themselves may be in need of SQL tuning, or there may be problems in the database to resolve. We won't go into the details of SQL tuning -- that is a typical skill set that should be handled by the IT staff. What I want to discuss here are ways of identifying and classifying problems within the Oracle Applications.

FND Tables Can Speak Volumes


The concurrent manager is just a scheduling system that keeps track of the jobs, parameters, scheduling information, and completion status of every job submitted. By querying its tables, you can learn much about the patterns of your site, including performance trends. I strongly suggest that you become familiar with these tables and develop reports against them. Some of the most useful are the fnd_concurrent_% tables. Things to look for include which jobs are run, how many times they are executed, their completion statuses (especially errors), and their run times.
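As a starting point, here is the kind of summary I mean -- a minimal sketch (the APPS_PWD variable and the seven-day window are assumptions; the joins use the standard fnd_concurrent_requests and fnd_concurrent_programs columns):

#reqsummary.sh
#runs, errors, and average minutes per program over the last week
sqlplus -s apps/$APPS_PWD <<'EOF'
SELECT p.concurrent_program_name program,
       COUNT(*) runs,
       SUM(DECODE(r.status_code,'E',1,0)) errors,
       ROUND(AVG((r.actual_completion_date - r.actual_start_date)*1440),1) avg_min
FROM   fnd_concurrent_requests r, fnd_concurrent_programs p
WHERE  r.concurrent_program_id = p.concurrent_program_id
AND    r.program_application_id = p.application_id
AND    r.actual_start_date > SYSDATE - 7
GROUP  BY p.concurrent_program_name
ORDER  BY runs DESC;
EOF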

Where Can I Get Help?


When it comes to looking for established help on tuning your concurrent manager jobs, there is an excellent reference that can never be exploited enough: the white paper on managing the concurrent managers, or "How to Herd Cats", by Barbara Matthews (see the proceedings of the OAUG Fall 1997 convention). This presentation has been very useful to me, and I have modified several of its scripts to my clients' needs. My favorites are the daily errors, daily and weekly hogs, min/max, and job schedule reports (note that these are not the exact names that you'll find). Here are some ideas on how to use these reports.

The daily errors report shows me every job that completed with an error status. I review these from time to time to look for trends. The error could be caused by a bug (so you open a tar and look for an existing patch), but the problem is usually attributed to user error, such as bad parameter input. Don't let the error go on -- it could be an indication that the user needs some training or other help. (You'll know the user name, because the report provides the request id number, which lets you view all the details and the log of the job -- if you haven't purged it yet.)

The hog reports flag every job that exceeds some set time threshold (such as 20 minutes). They also set a submission time range, such as weekdays 6:00 AM to 6:00 PM. The idea here is that we are looking for jobs with very lengthy completion times running during standard operating business hours (the prime OLTP window). If a job exceeds this limit, then it is taking resources away from your OLTP users and should either be 1) tuned to reduce execution time, or 2) moved to the "batch" processing window or queue during the off-hours.


Before you tune a "hog", I would suggest that you see if a performance patch has been issued for the program. Many times there is one, and this can save you the trouble of tuning it -- and of facing that dilemma of introducing a customized piece of code into your environment.

The min/max reports can be modified to sort the jobs in ascending or descending order based upon execution time or number of times executed. This report takes some interpretative skill. For example, let's say that you identify the job with the longest execution time... say 4 hours! At first glance, this looks like a SQL tuning candidate. A closer look, though, reveals that the minimum time it took to run the job was only 2 minutes -- and that the average time for 300 submissions in one day was only 5 minutes! Now what you have is some sort of exception. You should cross-reference this job to the "hogs" report -- it should be there. Or see if it shows up in the errors. By finding the request id of this aberrant job, you can review the details. You may find that the parameters specified a much larger data set, or were incorrect, or many other things. If you finally determine that the job was correctly submitted and the rest of the evidence points to an optimized SQL code set, then you have probably encountered an incompatible job! In other words, the job is fine by itself, but may suffer drastically due to contention with other jobs run at the same time. With more detective work, you should strive to find which jobs it is incompatible with and rearrange queues, priorities, or compatibility rules to ensure that they will not run simultaneously.

The job schedule report shows all the scheduled jobs that keep resubmitting themselves automatically. There are a few things I look for here. One is the sheer volume of jobs that may be scheduled -- are they really needed? Often these jobs get scheduled, then forgotten, and are no longer useful. Or is it a batch-oriented job that runs during peak time and should be rescheduled to a more practical time slot? Or is the owner of the job still an employee? I have seen many "ghost" jobs that were once submitted by users who have left the company -- but their reports still run, regardless!

One last item about scheduled jobs: see if the jobs are overlapping themselves. When specifying the resubmission parameters, you can have a job start at a fixed time, reschedule at a time interval calculated from when the job starts, or reschedule at a time interval after the job completes. I often find jobs scheduled to resubmit some interval after the first job starts, like every 15 minutes. Maybe the job used to complete in 5 minutes. Yet, as the database grows, the job may now be taking more than 15 minutes to complete. Hence, it submits the same job again when the first one hasn't even completed yet! This can cause contention that degrades the performance of both jobs, and the cycle repeats itself and degrades further and further. I would suggest that you schedule jobs to resubmit themselves on a time delay after the previous job completes!
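In that spirit, a hog-style query can be as simple as this sketch (the 20-minute threshold and the 6:00 AM to 6:00 PM window follow the example above; APPS_PWD is an assumption):

#hogs.sh
#long-running requests submitted during the prime OLTP window
sqlplus -s apps/$APPS_PWD <<'EOF'
SELECT r.request_id,
       p.concurrent_program_name program,
       ROUND((r.actual_completion_date - r.actual_start_date)*1440) elapsed_min
FROM   fnd_concurrent_requests r, fnd_concurrent_programs p
WHERE  r.concurrent_program_id = p.concurrent_program_id
AND    r.program_application_id = p.application_id
AND    (r.actual_completion_date - r.actual_start_date)*1440 > 20
AND    TO_CHAR(r.actual_start_date,'HH24') BETWEEN '06' AND '17'
ORDER  BY elapsed_min DESC;
EOF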

I Didn't Know Those Scripts Were There!


There are some other existing scripts which may be of benefit to you, but I must first put in a very strong disclaimer. CAUTION: Do not blindly run these scripts without analyzing their purpose and impact, and possibly consulting with Oracle Support! Test them in your development environment first. I must confess that I do not fully understand why all of these files are here. I suspect that many are used in the installation/upgrade and operation of the applications. I have not found deliberate documentation of these scripts, other than what I can see in the script text itself. Yet I have used some of these scripts to great satisfaction -- or at least to learn about the usage of certain tables. These scripts are in $FND_TOP/sql. The ones of interest for the concurrent managers are afcm*.sql and afrq*.sql, ranging from reports on the concurrent managers to locks, gridlock, etc. You can find useful scripts in $AD_TOP/sql, too. Again, BE CAREFUL!

Things to Avoid Regarding the Concurrent Managers


The following tips seem like common sense, but I am still amazed at how often I see these abuses and misunderstandings, so I will mention them...

Use of the Reprint Option: Do not allow your users to run jobs multiple times in order to recreate the same output. They can view it offline or do a reprint of a previously run job. There are also third-party tools that give more flexibility in viewing and formatting the outputs.


Use Query Enter to Find Your Jobs: If a user cannot see their job on the immediate screen, they should scroll down or enter a query to further define the job they are looking for. I have seen sites where the user couldn't find the job they submitted on the first screen, so they would submit it again!

Whoa! on the Refresh Screen: It is very, very common to have your whole company just hitting that refresh key on the concurrent request screen in an effort to see their job go into the queue or its completion status -- especially when performance is already suffering! But this only contributes to the problem! This is one of the most common queries possible, and the internal manager is already scanning the requests table at every pmon interval (the concurrent manager pmon, not to be confused with the Oracle background pmon process) for the next set of pending jobs to be processed.

Discourage Multiple User Logins: Multiple logins by the same user to get more work done often add trouble to an already overtaxed system. Sometimes this is unavoidable, because the user wears different "functional" hats and must view different screens/data within multiple responsibilities. Some also find it annoying to log in and navigate to particular work screens, and so keep idle sessions active until they need them. Try to educate your users that they consume resources (memory, CPU, etc.) every time they do this. In the newer NCA versions, navigating to different screens and responsibility areas is made easier via shortcuts, which should help to eliminate this abuse.

Eliminate Redundancy of Similar Jobs: Users often submit the same job multiple times in the same time frame, distinguished only by minor changes to the parameters. These jobs hit the same tables over and over again and can even create locks and resource conflicts among themselves. Many times the users would find the overall throughput to be better if they single-threaded the jobs one after the other. This can be managed by user education, or by the SYSADMIN single-threading the queue or placing incompatibility rules that prevent the same program from running with itself. Another variation of this problem is different users running the same or similar jobs at the same time. It may be better for the SYSADMIN to schedule these jobs to resubmit themselves in the concurrent manager at predetermined intervals and take away the end-users' ability to submit the jobs themselves. This should reduce the frequency and burden on the system, yet still allow the jobs and processes to run in a timely manner for the users.

Utilities for Maintaining the Applications


There are many tools and utilities available to you for maintaining and upgrading your applications. Some are well documented, others are more mysterious. I'll describe some of the major utilities. Note that most of these utilities are in $AD_TOP/bin or $FND_TOP/bin.

Patching with the Adpatch Utility


This utility is used for applying patches that you receive from Oracle Support. When you uncompress a patch from Oracle, you will get at least one driver file (patch.drv), a readme.txt, and the new code to patch your applications. The patch.drv file is read by the adpatch utility, which performs a multitude of tasks. It basically checks the versions of your code to make sure the patch code is more recent, moves the new code to the proper directories while making a copy of the original suffixed with an "O", updates the library file, links object code to make new executables, compiles or generates code, and logs all of its activities. All of these tasks are performed by other utilities in the $AD_TOP/bin directory, including adlib*, admvcode, adrepgen, adrelink, adfrmgen, etc. Look at the log file for your adpatch task and you will see the utilities that were called. These utilities will match up to the operative keywords in the patch.drv file.

You should ALWAYS review the readme.txt file prior to applying a patch. You need to verify that the patch is going to do what you intended, and see if there are any other manual tasks to perform either before or after applying the patch. If sql scripts are to be run, the patch.drv usually moves the sql script to the directory but does not execute it. The readme.txt file will direct you to run adpatch again, specifying the db*.drv file as the patch input. This will execute the sql scripts.


Patching Suggestions and Tips


Always make a backup of the directories that will be affected prior to applying a patch -- a patch can be a very nasty thing to roll back! Even though admvcode will make a backup copy of most files, suffixed with a capital "O", this is not very reliable for backing out a patch. Sometimes patches are "bundled" with other patches and the affected files may be patched multiple times; hence, the backup file "<filename>O" may actually be a backup of the backup! Without your own backup, you cannot roll back to the original.

Regardless of what the readme.txt file says, to be really certain which files and activities are affected, look at the patch.drv and db*.drv files. If it isn't in the patch driver file, then it isn't going to happen.

When prompted for the patch log file, do not take the default name "adpatch.log". I recommend that you use the patch/bug number, such as <patch#>.log. This enables you to quickly review the results of your patch without stumbling through reams of previous patches.

A running log of applied patches resides in $APPL_TOP/applptch.txt. DO NOT DELETE THIS FILE! It is invaluable when determining which patches have been applied, when, and what actually happened in each patch. I am finding this file even more critical when considering the NCA upgrades and possible Y2K upgrades -- operations where you may have to lay down a new baseline of the applications and reapply your patches to recreate your current configuration!

To learn more about the patching process and several other utilities, investigate the log and patch.drv files. Many of these utilities can be run by themselves. You may find use for (or at least an understanding of) these utilities.

Adadmin Utilities
This is an interactive menu available to you for maintaining several aspects of your Oracle Applications environment. The menu divides into two categories: database and file maintenance.

The database screen gives you options for creating or maintaining database structures, data, or privileges. Most of these activities are encountered during installation or upgrades. You can run many of these without adverse effect -- but you should seek the help of Oracle Support if you are not familiar with them. Be VERY careful not to inadvertently run the Multi-Org option unless you really mean it! Some of these options cannot be run unless your database is NOT in archivelog mode (they are intended for the installation or upgrade process). You SHOULD be running in archivelog mode if this is your production instance. Many of these menu options can be run standalone via the corresponding utility in $AD_TOP/bin or $FND_TOP/bin.

The file maintenance screen does not manipulate the database structure or data -- just operating system files. Most of these options were intended for the installation or upgrade process. You should be able to run all of these operations without consequence -- yet I wouldn't advise it unless you are sure of the ramifications and your needs. Again, many of these menu options can be run standalone from the corresponding programs in $AD_TOP/bin or $FND_TOP/bin.

When I go to a new client, two of the utilities that I like to run from adadmin are 1) verify that all files exist (including extras), and 2) verify all database objects. (Note: this second option to verify database objects no longer exists past version 10.5, but there are other ways to do this.) The file report looks at the installation driver files and reports any missing files that are expected to be found somewhere in $APPL_TOP. I look for missing files to verify that we have a good, complete installation. I also look at the extra files to find opportunities for cleanup and customizations! I'll say more on customizations later, but I am particularly interested in whether or not the customizations are done according to Oracle's guidelines. The database object report shows missing, extra, and modified database objects. It compares the objects to the *.odf files in the application top directories using the odfcmp utilities. Since 10.6, this functionality is gone from the menu, but you can run these reports manually using the adodfcmp utility in $AD_TOP/bin. Type adodfcmp by itself to get the parameters, or look in the installation manual for more information on this utility (and many, many more).


There is a caveat to keep in mind when reviewing these reports, though. Finding discrepancies from these utilities doesn't necessarily mean that something is wrong. What you are looking at is comparisons to the base installation. Patches (or customizations) can be reasons why there are differences. While I've seen some patches upgrade the driver files, many do not. So, you will have to scrutinize the differences. Still, these can be some very beneficial tools in maintaining your environment.

Installation and Upgrade Utilities


This is an advanced topic, so I do not want to spend much time here; yet I do want to draw attention to the popular utilities -- many of which overlap the adadmin utilities. The installation starts with unloading the software from the media with the adunload utility (actually, you run a script file which runs adunload for you to get the base utilities -- but you can do a manual unload yourself with this tool).

Trivia: ever wonder where the *.inp files went under $FND_TOP/forms? They were actually downloaded to your system by adunload in the installation process. When autoinstall generated your forms, it deleted the *.inp source files for $FND_TOP/forms -- and only these forms. Why? Because you're not supposed to be messing with these files! However, if you ever delete the FND form executables, you can run the adunload utility to get the *.inp files from the base installation media.

After adunload gets your source code to the $APPL_TOP directories, you will eventually proceed with the installation or upgrade process using the adaimgr (autoinstall) utility. This is a menu-driven utility which will ask you several setup questions. Eventually you will get to the "upgrade database objects" step in autoinstall. When this starts, it reads the necessary driver files (*.drv), which then call several other utilities in the proper sequence, depending upon the products you have purchased and your answers to the adaimgr setup menu questions. These installation processes are run by the adworker background processes. You monitor these processes via the adctrl program.

The Adctrl Utility


The adctrl utility is one that you will use with adpatch, as well as autoinstall. In the newer versions of Oracle, the patches can now be multiplexed -- multiple processes running concurrently. A temporary table called fnd_install_processes is created to keep track of the drivers, sequencing, and statuses. Through adctrl you can manage or view the status of these jobs. When the patch completes successfully, the fnd_install_processes table will be dropped. However, if one or more of the drivers fail, the status will be shown and you will be required to resolve the problem. You now go to the corresponding adworker log file.

Adworker Log Files


The adworker log files are found in $APPL_TOP/install/log. They are numbered adworker01.log through adworkerNN.log, depending on how many concurrent processes you specified at the prompt. Find the log corresponding to the process(es) flagged as "failed" in the adctrl menu. Go to the bottom of the log file, find the error, and resolve the problem. If you were able to resolve it before the remaining adworkers became dependent upon the failed adworker(s), go back into adctrl and use the menu options to change the status to "fixed" and restart the failed adworker(s). The process continues until it finishes or encounters more problems requiring you to follow the same procedure. However, if all the workers failed, or the process reached a point where it could not proceed until the dependent failed adworker(s) were resolved, the adpatch (or adaimgr) process may have shut down. In this case you will need to restart the process.

Restarting Adadmin Utilities


Some utilities, such as adrelink, adaimgr, and adpatch, may abort or shut down prior to completing all of their steps. In this case, you need to refer to the adworker log files to determine the problems and resolve them. If you were able to resolve them, simply reinitiate the utility, like adpatch. Upon startup, it will check both the restart files found in $APPL_TOP/install/restart and the presence and contents of the fnd_install_processes table. If the failed workers now have a status of "fixed, restart", the appropriate adworkers are reinitiated and resume progress as tracked in the restart files.

If you decide to completely abort the process and start over (careful, this could have adverse effects), then answer the prompts when restarting the utility that you do NOT want to resume the previous unfinished run. (As a safeguard, you will also be prompted to answer the question again, phrased with the opposite logic.) You may then see an error where the process cannot start because it found the fnd_install_processes table already present, could not create it, and failed. That's okay. Log in to sqlplus as applsys and manually drop the fnd_install_processes table.
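For the record, that manual cleanup is just this (per the text above; connect as applsys and supply your site's password at the prompt):

#droprestart.sh
sqlplus applsys <<'EOF'
DROP TABLE fnd_install_processes;
EOF

Please refer to your utility and installation manuals for more complete instructions on how to use these programs. These are mentioned here to illustrate that the application installation, maintenance, and patching procedures are not magic. Rather, they are logical procedures which call upon several utilities within the $AD_TOP/bin and $FND_TOP/bin directories. Become familiar with them. Here's a recap on important utilities and programs...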

Important Utilities and Tools


You should become familiar with the following tools and utilities: the adadmin menu for both database and $APPL_TOP file maintenance tasks, and the important tools under $AD_TOP/bin, including adaimgr, adunload, adpatch, adrelink, adctrl, adfrmgen, and adodfcmp. Other directories of interest include $FND_TOP/bin, $FND_TOP/sql, and $AD_TOP/sql. The scripts under $AD_TOP/sql are interesting, too, but I'll give the same disclaimer as issued for the $FND_TOP/sql scripts. CAUTION: Do not blindly run these scripts without analyzing their purpose and impact, and possibly consulting with Oracle Support! Test them in your development environment first. Most of these AD*.sql scripts are your basic DBA tuning and reporting scripts. There are two scripts, though, which alter your database. Let's look at these.

ADXANLYZ.sql creates another script which does an "analyze table ... estimate statistics sample 20%" for each table. Now, the Oracle Financials database MUST be set to RULE-based optimization in the init.ora file. My hypothesis is this: the Oracle Applications originally evolved prior to cost-based optimization, so the code was originally tuned with RULE optimization in mind. However, as the applications mature, we are seeing more and more stored procedures and code (just look at how the system tablespace expands) and the use of HINTS. Some of these hints override RULE-based optimization, and in order for that code to take the best optimization path, you need the data statistics. I do not recommend running this script carte blanche without evaluating the benefits and consequences. I have found that some applications improve, while others suffer from the statistics. Also, many developers assume that they are writing code for a RULE-based database, as it is configured in the init.ora. Bottom line: use sparingly and run explain plan to see the best options. I have found that a combination of both works -- some code needs the statistics deleted. If you do analyze your tables, remember that you are now taking on a process which needs to be repeated on a regular basis to be of benefit.

Oracle also suggests pinning your procedures (again, more and more code is in the form of stored procedures). The ADXGNPIN.sql script generates an all-inclusive pinning script for every database object, and ADXCKPIN.sql reports the pinned objects and execution statistics. To use this correctly, you need to monitor and adjust your shared pool accordingly. I would advise altering the script to pin only the large and popular packages. Again, this must be monitored and tuned within your shared_pool_size and shared_pool_reserved_size SGA parameters.
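To make both ideas concrete, here is a minimal sketch (the table and package names are placeholders, and dbms_shared_pool assumes the dbmspool.sql package has been installed in your database):

#anlzpin.sh
sqlplus apps/$APPS_PWD <<'EOF'
ANALYZE TABLE gl.gl_je_lines ESTIMATE STATISTICS SAMPLE 20 PERCENT;
EXECUTE dbms_shared_pool.keep('APPS.SOME_BIG_PACKAGE');
EOF

Remember: if you analyze, keep the statistics current; if you pin, watch your shared pool usage.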

Customizing the Oracle Applications


One of the highly sought features of the Oracle Applications is the ability to customize -- and most everyone does. The following advice is perhaps the most important in this entire paper: FOLLOW ORACLE'S RECOMMENDED PROCEDURES for CUSTOMIZATIONS! Ignoring these guidelines will surely buy you grief and cost you more money in the future. If you are new to the applications, be assured that you will be facing upgrades every 18 to 24 months to stay current with technology. If you have not followed these guidelines, then I strongly recommend that you start bringing your environment into compliance today. You can find these guidelines in the AOL Reference Manual.

The major points are to create a separate application top (or tops), separate schema(s), and follow the registration process for your custom objects and code. For example: create a schema named CUSTOM, register the schema, and create a $CUSTOM_TOP directory which is added to your APPLSYS.env file. If you have extensive customizations, then I suggest that you make separate custom schemas and directories for each module, such as $C_AR_TOP (for "custom" AR applications).

If you are altering the base applications and keeping them in the same installed directory tops, or putting customized database objects in the base schemas, then you are indeed in violation of the prescribed methods. You will certainly be facing a terrible time in your upgrades. When Oracle installs or upgrades its applications (even in a simple patch), it assumes that these standard schemas and directories are its products. The Oracle applications may completely drop or overwrite your custom database objects and code, rendering your applications unusable (and certainly unsupported)! Please take heed of this warning. As the Oracle Fin-Apps DBA, you must see to it that your developers comply.
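As a concrete sketch of the directory side (the /u01/appl/custom path is hypothetical -- where the custom top lives is a site decision; the subdirectory list mirrors the standard product tops described earlier):

#mkcustomtop.sh
#create a custom top skeleton, then define $CUSTOM_TOP in APPLSYS.env
#and register the application per the AOL Reference Manual
for d in bin forms lib sql srw install log out
do
  mkdir -p /u01/appl/custom/$d
done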

Document Customizations and Tars


You should make sure that customizations are documented. These records will be invaluable in your next upgrade, believe me! As complicated as an installation or upgrade can be, it is compounded exponentially by your customizations. Keep track of problems and tars in some kind of tracking system (spreadsheet, text file, sophisticated logging application, etc.). I have no problem with allowing developers to open tars, but I would strongly suggest that ONLY the Oracle Fin-Apps DBA applies any subsequent patches. The reasons include keeping one central area of control and knowledge for all changes to the Oracle code and data. You should be making sure that an adequate backup of the code and/or database is available prior to applying patches; as discussed earlier, sometimes the only way to completely roll back a significant patch is a restore of the code and/or the database. With one central person in charge of patches (namely, the DBA), you will appreciate the precautions necessary for a potential all-nighter. Don't let a developer have the luxury of making your life miserable because they were impatient.

Oracle Applications Security and Access


The SYSADMIN has the privilege of setting up users and associating them with roles and responsibilities. This is a heavy burden, just like setting up users and their access in the database. A common myth is that setting up users in the Oracle Applications creates (or requires) schemas for them in the Oracle database. It does not! You are merely creating an account for them in the application, which makes an entry in the fnd_user table, sets an initial password, and assigns them roles from fnd_responsibilities.

Often the Oracle Fin-Apps DBA does not know the functional responsibilities of the end users within the company. Do not give out "super user" or "SYSADMIN" privileges freely -- that's like giving out DBA privileges to anyone out there (and you should not be doing that, either!). Remember, you are protecting the company's assets -- information assets. Information is power and money. Protect it wisely. Approach this as the real possibility that you could be audited. After all, you are the keeper of the keys to this application and its data! I suggest that a formal process be set up where managers of the financial and manufacturing business groups must "sign off" on their employees and indicate which roles they need. If everyone comes back as "super user", then try to educate that manager. If you don't make any progress, then at least you have their signature and authorization on paper to show the auditors!


This process would probably best be implemented as part of HR's new-hire process. Conversely, there should also be an exit process for terminating or transferring employees; then you can put an end-date on their account or change their responsibilities. Otherwise, you end up with those "John Doe" scheduled reports in the concurrent managers that keep running in your system, sucking up valuable resources for months after the employee has left!
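A periodic review query helps here, too -- a minimal sketch (APPS_PWD is an assumption; user_name and end_date are the fnd_user columns discussed above):

#openusers.sh
#list every application account with no end-date
sqlplus -s apps/$APPS_PWD <<'EOF'
SELECT user_name
FROM   fnd_user
WHERE  end_date IS NULL OR end_date > SYSDATE
ORDER  BY user_name;
EOF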

$APPL_TOP Security and Access


The code is typically owned by applmgr. This can be changed; just keep it consistent throughout the directories. Applmgr does not need to belong to the database group (I advise that it not). The applmgr group should NOT be shared for write access with your developers, either! Remember, I strongly suggested that the Oracle Fin-Apps DBA should be the only person who can alter these files (and that is typically done by logging in as applmgr). The only exception acceptable to me is giving write access to $CUSTOM_TOP -- and ONLY in the development environment. We discussed the taboo of writing custom code into the base $APPL_TOP directories; you should be using one or more "custom" top directories for your custom code. After a piece of custom code is developed and accepted in development, it should migrate through sound change-control procedures to the test and production environments. Hence, the only place where developer write access may be acceptable (depending upon your environment's policies) is the development $CUSTOM_TOP directories. The only thing that should modify the base $APPL_TOP directories is a patch applied by the Oracle Fin-Apps DBA!

Modifying AOL Objects


When it comes to modifying AOL objects (such as defining new responsibilities, menu paths, alerts, zooms, etc.), the only supported method has been to manually type these entries into the AOL forms. Following good change control procedures, you would implement and perfect these entries in your development environment; when satisfied with the results, you would then have to manually retype them in your test and production environments. Using sql to replicate these entries directly into the FND tables is unsupported by Oracle because of the risk of bypassing referential integrity controls. I have implemented a software tool that can automate this and is supported by Oracle: the Object Migrator product by ChainLink Technologies, a certified Oracle partner. They have other products specifically for use in Oracle Financials environments and product lines. I have heard and seen many good things about them.

Importing or Converting External Data


Discourage the practice of inserting data directly into the Oracle Applications base tables. In most cases (if not all), these are unsupported actions. What is recommended is to load the data into the interface tables (%_interface) and use the Oracle procedures to process this data. The operations and referential integrity can be complex; consult your documentation and Oracle Support for details. I would also discourage the use of triggers on the base Oracle tables for the same reasons. Triggers can be very troublesome during upgrades -- often they are disabled or dropped, anyway.
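As an illustration of the interface-table pattern, here is a sketch against the General Ledger journal import interface (the values are placeholders and the column list is abbreviated -- consult the Open Interfaces documentation for the full required list for your release):

#glload.sh
sqlplus apps/$APPS_PWD <<'EOF'
INSERT INTO gl_interface
       (status, set_of_books_id, accounting_date, currency_code,
        date_created, created_by, actual_flag,
        user_je_category_name, user_je_source_name,
        segment1, segment2, segment3, entered_dr, entered_cr)
VALUES ('NEW', 1, SYSDATE, 'USD',
        SYSDATE, 0, 'A',
        'Adjustment', 'Manual',
        '01', '000', '1110', 100, NULL);
COMMIT;
EOF
#a balancing credit line is also required; then run Journal Import
#from the applications to validate and post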

Considerations for Using Partitioned Server


Partitioned server is a configuration in which you utilize two different versions of the Oracle RDBMS ($ORACLE_HOME). One version is linked with your $APPL_TOP code, while the other version runs the database. For example: $APPL_TOP links with a 7.3.3 $ORACLE_HOME, while the database engine is run by an 8.0.5 $ORACLE_HOME. Remember, this configuration must pass what I call the "Holy Trinity Certification", or blessing, for the Oracle Applications: the version of the applications, the OS version, and the RDBMS version must all be certified together.


As you can imagine, the Oracle Applications are a huge set of code that requires much time and effort in regression testing. Hence, they usually run a generation behind the most current Oracle RDBMS technology. As you can see, there are still very old Oracle tools in use with the version 10.x applications, including CRT, Forms 2.4 (just recently upgraded from 2.3), Oracle Reports, etc. Oracle has implemented the "partitioned server" architecture to allow us to take advantage of new RDBMS technology. The Oracle Applications, or $APPL_TOP, must be linked with code from the RDBMS, or $ORACLE_HOME. In a partitioned server architecture, the applications still link with the older, certified version of the RDBMS; with the installation of an interoperability patch, the database engine can run off a more recent release of the RDBMS -- which is where the more significant performance and feature-rich solutions can be enjoyed.

A "physical" partitioned server architecture is a variation of the partitioned server configuration explained above. The difference is that the application code ($APPL_TOP) resides on a different server than the database, and the applications communicate with the database via sql*net. This solution can aid in maximizing resources: the database can reside on your more powerful server, configured and optimized as a database server, while the applications reside on a less powerful server with different configuration considerations. Keep in mind that you must now accommodate more sql*net tuning issues. However, the NCA applications thrive upon this multiple-tier architecture, anyway.

A Word About Year 2000 Compliance


The Oracle Applications release 10.7+ are Year 2000 compliant (see the release notes for your particular hardware platform for details). The major change is the use of the NLS parameter for the date format of DD-MON-RR. However, as with any software product, there are bugs which are uncovered and require patches -- the same goes for Year 2000 issues. The most current and exhaustive list can be found on MetaLink under the YR2K link. The Oracle Financials is the least of my worries with my clients; to be prudent, though, we are developing and implementing testing scenarios. Not only do you need to look after the Oracle Fin-Apps products, but also your OS, third-party software, and your customizations. Only thorough testing can provide the confidence required by your company's level of concern and compliance considerations.
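A one-line test demonstrates the windowing behavior (a sketch; APPS_PWD is an assumption -- any instance will do):

#rrtest.sh
sqlplus -s apps/$APPS_PWD <<'EOF'
SELECT TO_CHAR(TO_DATE('01-JAN-00','DD-MON-RR'),'DD-MON-YYYY') rr_date
FROM dual;
EOF

This returns 01-JAN-2000, where the same input under DD-MON-YY would have yielded 1900.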

Conclusions
Wow! Where does one begin? Quite frankly, the best training is to get involved in an installation or upgrade. I do not recommend that a rookie try to install or upgrade an Oracle Financials environment. You should get professional help from an organization that has a proven track record of upgrades -- but one that is willing to include you on its team and transfer the knowledge. Many consulting companies prefer to hoard this valuable information.

In conjunction with hands-on experience and training, READ the MANUALS! In particular, read the Oracle AOL Reference Manual, Oracle Applications Installation Manual, Oracle Applications System Administration Reference Manual, and the Oracle Applications Users' Guide. Even though I am a seasoned veteran, I always read the new manuals to pick up the changes and new utilities. No one does a more accurate job on the application documentation than the original vendor, Oracle Corporation.

Investigate the log files from the installation and patches -- you will learn a world of information from them. Also poke into the directories for hidden goodies. I've mentioned them before: look at $FND_TOP, $AD_TOP, $APPL_TOP, and the product $*_TOP installation directories and files. Look at the environment variables, too -- do a "ps -ef|grep -i appl" and learn what all these variables associated with the Oracle applications mean.

Finally, stay connected to the world and its information through networking. Subscribe to the applications list server, read third-party books (as well as Oracle's), attend your local and international user group meetings, and share your findings and ideas with other Oracle Applications colleagues. It's a very broad and ever-changing topic!


About the Authors


Anthony Pennington is the founder and president of Pennacle Consulting, LLC in Denver, Colorado. He has been working in database administration since 1989 and was first exposed to Oracle Financials at the Superconducting Super Collider in 1990. Since then, Anthony has worked almost exclusively for Oracle Financials organizations as a Fin-Apps DBA. In the past year, he was the Technical Lead Manager on Oracle upgrade consulting projects for Echostar Corporation and Access Graphics, Inc. Anthony has presented at COAUG and PPOUG on Oracle Financials upgrades and remains active in the Colorado user groups.

Brian Crowley is an experienced Oracle Applications DBA in the Denver, Colorado area. Brian was recently the Technical Lead for Boulder County's applications upgrade and has presented at past OAUG and local Colorado user group conferences.

Copyright 1999 Pennacle Consulting, LLC
