A Look at $APPL_TOP
The Oracle Applications software is installed in one common area in a subdirectory tree fashion. The top directory for this repository is defined as $APPL_TOP, and all other subdirectories key off of it. For each product module that you have installed, there will be another $&lt;product&gt;_TOP variable defined. For example, if you have Accounts Payable, General Ledger, and Purchasing, then you will have $AP_TOP, $GL_TOP, and $PO_TOP. Underneath these directories you will notice a common directory tree including /bin, /forms, /lib, /sql, /install, /srw, etc. Each directory has its own unique use, and the layout is similar for every product top. You should become familiar with these standards and the contents of these directories. Note that there exist other "tops" not associated with the product modules that your site may use; these are either "shared" products or supporting products that Oracle installs for you. We will explain a few of them in more detail.

Note the file called APPLSYS.env under $APPL_TOP. This is one of the environment files that are sourced in to define the environment variables and directories necessary for your applications. It defines $APPL_TOP, your product tops, and other variables. We will discuss some of the "output" directory variables in a moment.

A couple of other directories worth noting under $APPL_TOP are $APPL_TOP/install and $APPL_TOP/patch. The /install subdirectory holds many of the utilities used for the initial installation. It also has a /log subdirectory where installation, patch, and other adadmin utility log files will go. You will visit this directory often. Typically, you will have $APPL_TOP/patch for holding patches that Oracle Support may send you (will send you). You can put your patches in another directory, but this is the default. You will become very familiar with patches, too.
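To make the variable scheme concrete, here is a minimal sketch of the kind of assignments APPLSYS.env makes. All paths here are made up for illustration; the real file is generated at install time and sets many more variables.

```shell
# Hypothetical sketch of the settings an APPLSYS.env file establishes.
# Every path below is invented; your generated file will differ.
APPL_TOP=/u01/appl; export APPL_TOP

# One <product>_TOP per installed module, keyed off $APPL_TOP:
AP_TOP=$APPL_TOP/ap;   export AP_TOP    # Accounts Payable
GL_TOP=$APPL_TOP/gl;   export GL_TOP    # General Ledger
PO_TOP=$APPL_TOP/po;   export PO_TOP    # Purchasing
FND_TOP=$APPL_TOP/fnd; export FND_TOP   # Application Foundation
AD_TOP=$APPL_TOP/ad;   export AD_TOP    # AD utilities

echo "AP_TOP=$AP_TOP"
```

Sourcing a file like this is what gives every shell, script, and concurrent process a consistent view of where each product's /bin, /forms, /sql, and so on live.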
A Look at $FND_TOP
Another top that you will find is $FND_TOP. When Oracle started writing the financial applications software, they developed a core set of utilities and objects called Application Object Library (AOL). From these objects, they wrote the "foundation" for the Oracle Applications, referred to as Application Foundation. This foundation code is stored under $FND_TOP. As product modules developed, they were hooked into the Application Foundation infrastructure utilizing the AOL objects. Examples of these AOL objects and FND products include the concurrent managers, quick picks, zooms, etc. Notice that $FND_TOP has a very similar directory tree to the other product modules. You may have noticed that when you fire up the applications, you call the script file "found" (short for Application Foundation), which executes the $FND_TOP/bin/aiap executable and passes the username/password stored in the variable $GWYUID (typically applsyspub/pub) to get you to your initial login screen. See, it's not magic -- just code. Many of the topics that we will cover, especially the Concurrent Managers, are found in $FND_TOP.
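As a hedged sketch of those mechanics (the real "found" script shipped with your install differs; the path and credentials below are just the typical illustrative values from above):

```shell
# Illustrative sketch of what a "found"-style launcher boils down to.
# $GWYUID is the public gateway account used to reach the sign-on
# screen; the path and value here are examples, not your real ones.
GWYUID=applsyspub/pub
FND_TOP=/u01/appl/fnd

# The launcher essentially runs the forms executable with the gateway
# credentials; here we only build and show the command it would issue.
cmd="$FND_TOP/bin/aiap $GWYUID"
echo "$cmd"
```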
A Look at $AD_TOP
Most of the other utilities used by the Oracle Fin-Apps DBA, which we will discuss in detail, are found in $AD_TOP. Of particular interest are the /bin and /sql subdirectories. You will find the following executables in $AD_TOP/bin: adaimgr (AutoInstall, for the installation or upgrade of the software), adpatch (for administering patches), and adadmin (a menu-driven utility for maintaining both the Oracle Applications database and software). Many of these utilities in turn call other $AD_TOP utilities.
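When arriving at a new site, a quick sanity check like this sketch confirms the key AD executables are present (the $AD_TOP path here is hypothetical; point it at your real top):

```shell
# Check that the principal AD utilities exist and are executable.
# AD_TOP is a made-up path for illustration.
AD_TOP=/u01/appl/ad
missing=""
for util in adaimgr adpatch adadmin; do
    [ -x "$AD_TOP/bin/$util" ] || missing="$missing $util"
done
if [ -z "$missing" ]; then
    echo "all key AD utilities found"
else
    echo "missing:$missing"
fi
```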
The location of concurrent manager log and output files depends upon the variables $APPLCSF, $APPLLOG, $APPLOUT, and $*TMP. The $APPLLOG and $APPLOUT variables are typically set to "log" and "out", respectively, but they can be set to other values. The location of these subdirectories depends upon the value of $APPLCSF. If $APPLCSF is set to a directory, then all of the product modules' (AR, AP, PO, etc.) output will go to a common "log" or "out" directory. The typical setting, though, is to not have $APPLCSF set to any value. When this is true, the output for the product modules defaults to the specified "log" and "out" directories under the corresponding product module top. For example, concurrent manager jobs run from an Accounts Receivable responsibility would leave their logs and data output in $AR_TOP/log and $AR_TOP/out. I would advise you not to set $APPLCSF. This way, you can more easily find and categorize your output. There is generally a lot of output anyway, and you can stress the inodes by having too many files in one place. Be sure that your temporary directories, such as $APPLTMP or $REPTMP, get cleaned up and don't fill up a file system. Note that any SYSADMIN responsibility output will go in $FND_TOP/log or $FND_TOP/out.
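The resolution logic above can be sketched as a small shell function. The directory names are the typical defaults and the AR top is hypothetical:

```shell
# Sketch of the $APPLCSF / $APPLLOG resolution described above.
APPLLOG=log
APPLOUT=out
AR_TOP=/u01/appl/ar          # hypothetical product top

log_dir_for () {             # $1 = a product top such as $AR_TOP
    if [ -n "$APPLCSF" ]; then
        echo "$APPLCSF/$APPLLOG"   # common area when $APPLCSF is set
    else
        echo "$1/$APPLLOG"         # per-product default otherwise
    fi
}

APPLCSF=""
log_dir_for "$AR_TOP"        # per-product: /u01/appl/ar/log
APPLCSF=/u02/common
log_dir_for "$AR_TOP"        # common area: /u02/common/log
```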
Reset the concurrent queues:

    UPDATE fnd_concurrent_queues
       SET running_processes = 0, max_processes = 0;

Remove any completed jobs (optional):

    DELETE FROM fnd_concurrent_requests
     WHERE conc_process_status_code = 'C';

Set jobs with a status of Terminated to Completed with Error (optional):

    UPDATE fnd_concurrent_requests
       SET status_code = 'E', phase_code = 'C'
     WHERE status_code = 'T';

Delete any current processes:

    DELETE FROM fnd_concurrent_processes;
I have listed these in descending order of the frequency with which I have had to use them. There is a paper available from Oracle Support which describes these and more.
I typically try to set up two jobs. 1) One job for the "Manager" data -- that's the concurrent manager log files typically found in $FND_TOP/log. I set the frequency to daily and have it purge down to one day. 2) Another job for the "Request" data -- this is for all other modules outside of the SysAdmin responsibility, such as AR, PO, GL, etc. I typically try to keep only one week's worth of data out there on the system. Your needs and capacity may vary, so set accordingly. This purge process does two things: 1) deletes rows from the fnd_concurrent_requests tables, and 2) deletes both the log and output files from the associated $XX_TOP/log or /out directories. If for any reason the file delete did not complete but the table data was purged, then you will need to manually purge the output files from the /log and /out directories. This can happen if the privileges were incorrectly set, you replicated a copy of the production database to your development environment, the file system was not mounted, etc.
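For that manual cleanup case, a cautious sketch follows. It only prints candidates rather than deleting; the one-week retention and the AR top path are assumptions, so eyeball the list before removing anything:

```shell
# List log/out files older than N days under a product top -- useful
# when the purge job removed table rows but left the files behind.
# Print-only on purpose; add a delete step only after reviewing.
list_old_output () {   # $1 = product top, $2 = age in days
    for dir in "$1/log" "$1/out"; do
        [ -d "$dir" ] || continue
        find "$dir" -type f -mtime +"$2" -print
    done
}

# Hypothetical invocation: week-old AR output files.
list_old_output "${AR_TOP:-/u01/appl/ar}" 7
```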
Before you tune a "hog", I would suggest that you see if a performance patch has been issued for the program. Many times there has been, and this can save you the trouble of tuning it -- and spare you the dilemma of introducing a customized piece of code into your environment. The min/max reports can be modified to sort the jobs in ascending or descending order based upon the execution time or the number of times executed. This report takes some interpretive skills. For example, let's say that you identify the job that has the longest execution time... say 4 hours! At first glance, this looks like a SQL tuning candidate. A closer look, though, reveals that the minimum time it took to run the job was only 2 minutes -- and that the average time for 300 submissions in one day was only 5 minutes! Now what you have is some sort of exception. You should cross-reference this job to the "hogs" report -- it should be there. Or see if it appeared in the errors. By finding the request id of this aberrant job, you can review the details. You may find that the parameters specified a much larger data set, or were incorrect, or many other things. If you finally determine that the job was correctly submitted and the rest of the evidence points to an optimized SQL code set, then you have probably encountered a "non-compatible" job! In other words, the job is fine by itself but may suffer drastically due to contention with other jobs run at the same time. With more detective work, you should strive to find which jobs it is incompatible with and rearrange queues, priorities, or compatibility rules to ensure that they will not run simultaneously.

The job schedule report shows all the scheduled jobs that keep resubmitting themselves automatically. There are a few things I look for here. One is the sheer volume of jobs that may be scheduled -- are they really needed? Often these jobs get scheduled, then forgotten, and are no longer useful.
Or is it a batch-oriented job that runs during peak time and should be rescheduled to a more practical time slot? Or is the owner of the job still an employee? I have seen many "ghost" jobs that were once submitted by users who have left the company -- but their reports still run, regardless!

One last item about scheduled jobs: see if the jobs are overlapping themselves. When specifying the resubmission parameters, you can have a job start at a fixed time, reschedule at a time interval calculated from when the job starts, or reschedule at a time interval after the job completes. I often find jobs scheduled to resubmit some time after the first job starts, like every 15 minutes. Maybe the job used to complete in 5 minutes. Yet, as the database grows, the job may now be taking more than 15 minutes to complete. Hence, it submits the same job again when the first one hasn't even completed yet! This can cause contention that degrades the performance of both jobs, and the cycle repeats itself and degrades further and further. I would suggest that you schedule jobs to resubmit themselves on a time delay after the previous job completes!
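The arithmetic of the overlap is trivial but worth spelling out; the numbers are taken from the example above:

```shell
# Start-based resubmission: a new run begins every $interval minutes
# regardless of whether the previous run has finished.
interval=15   # minutes between scheduled starts
runtime=20    # minutes the job now actually takes

overlap=$(( runtime - interval ))
if [ "$overlap" -gt 0 ]; then
    echo "each run overlaps its successor by $overlap minute(s)"
else
    echo "no overlap"
fi
```

Completion-based resubmission sidesteps this entirely, since the clock for the next run only starts once the previous run is done.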
Use Query Enter to Find Your Jobs: If users cannot see their job on the immediate screen, they should scroll down or enter a query to further define the job that they are looking for. I have seen sites where users couldn't find the job they submitted on the first screen, so they would submit it again!

Whoa! on the Refresh Screen: It is very, very common to have your whole company just hitting that refresh key on the concurrent request screen in an effort to see their job go into the queue or its completion status -- especially when performance is already suffering! But this only contributes to the problem! This is one of the most common queries possible. For one, the internal manager already scans the fnd_concurrent_requests table at the pmon interval (the concurrent manager pmon, not to be confused with the Oracle background pmon process) for the next set of pending jobs to be processed.

Discourage Multiple User Logins: Multiple logins by the same user to get more work done often add trouble to an already overstressed system. Sometimes this is unavoidable because the user wears different "functional" hats and must view different screens/data within multiple responsibilities. Some also find it annoying to log in and navigate to particular work screens, and so keep idle sessions active until they need them. Try to educate your users that they consume resources (memory, CPU, etc.) every time they do this. In the newer NCA versions, navigating to different screens and responsibility areas will be made easier via shortcuts and should help to eliminate this abuse.

Eliminate Redundancy of Similar Jobs: Users often submit the same job multiple times in the same time frame, distinguished only by minor changes to the parameters. These jobs hit the same tables over and over again and can even create locks and resource conflicts among themselves.
Many times users would find the overall throughput to be better if they single-threaded the jobs one after the other. This can be managed by user education, or by the SYSADMIN single-threading the queue or placing incompatibility rules that limit the program from running with itself. Another variation of this problem is different users running the same or similar jobs at the same time. It may be better for the SYSADMIN to schedule these jobs to resubmit themselves in the concurrent manager at predetermined intervals and take away the ability for end-users to submit the jobs themselves. This should reduce the frequency and burden on the system, yet still allow the jobs and processes to run in a timely manner for the users who need them.
Adadmin Utilities
This is an interactive menu available to you to maintain several aspects of your Oracle Applications environment. The menu divides into two categories: database and file maintenance.

The database screen gives you options for creating or maintaining database structures, data, or privileges. Most of these activities are encountered during installation or upgrades. You can run many of these without adverse effect -- but you should seek the help of Oracle Support if you are not familiar with them. Be VERY careful not to inadvertently run the Multi-Org option unless you really mean it! Some of these options cannot be run unless your database is NOT in archivelog mode (they are intended for the installation or upgrade process). You SHOULD be running in archivelog mode if this is your production instance. Many of these menu options can be run standalone via the corresponding utility in $AD_TOP/bin or $FND_TOP/bin.

The file maintenance screen does not manipulate the database structure or data -- just operating system files. Most of these options were intended for the installation or upgrade process. You should be able to run all of these operations without consequence -- yet I wouldn't advise it unless you are sure of the ramifications and your needs. Again, many of these menu options can be run standalone from the corresponding programs in $AD_TOP/bin or $FND_TOP/bin.

When I go to a new client, two of the utilities that I like to run from adadmin are 1) verify that all files exist (including extras), and 2) verify all database objects. Note: this second option to verify database objects no longer exists past version 10.5, but there are other ways to do this. The file report looks at the installation driver files and reports any missing files that are expected to be found somewhere in $APPL_TOP. I look for missing files and verify that we have a good, complete installation. I also look at the extra files to find opportunities for cleanup and customizations!
I'll speak more on customizations, but I am particularly interested in whether or not the customizations are done according to Oracle's guidelines. The database object report would show missing, extra, and modified database objects. It would compare the objects to the *.odf files in the application top directories using the odfcmp utilities. Since 10.6, this functionality is gone. You can manually run these reports using the adodfcmp utility in $AD_TOP/bin. Type in adodfcmp by itself to get the parameters, or look in the installation manual to get more information on this utility (and many, many more).
There is a caveat to keep in mind when reviewing these reports, though. Finding discrepancies from these utilities doesn't necessarily mean that something is wrong. What you are looking at is comparisons to the base installation. Patches (or customizations) can be reasons why there are differences. While I've seen some patches upgrade the driver files, many do not. So, you will have to scrutinize the differences. Still, these can be some very beneficial tools in maintaining your environment.
Once you have fixed the problems that caused any workers to fail, to resolve them simply reinitiate the utility, like adpatch. Upon startup, it will check both the restart files found in $APPL_TOP/install/restart and the presence and contents of the fnd_install_processes table. If the failed workers now have a status of "fixed, restart", the appropriate adworkers are reinitiated and resume progress as tracked in the restart files. If you decide to completely abort the process and start over (careful, this could have adverse effects), then answer the prompts when restarting the utility that you do NOT want to resume the previous unfinished run. (As a safety guard, you will also be prompted to answer the question again, phrased in the opposite logic.) You may then see an error where the process cannot start because the fnd_install_processes table is already present; it cannot create the table, so the job fails. That's okay. Log in to sqlplus as applsys and manually drop the fnd_install_processes table.

Please refer to your utility and installation manuals for more complete instructions on how to use these programs. These are mentioned here to illustrate that the application installation, maintenance, and patching procedures are not magic. Rather, they are logical procedures which call upon several utilities within the $AD_TOP/bin and $FND_TOP/bin directories. Become familiar with them. Here's a recap on important utilities and programs...
RECOMMENDED PROCEDURES for CUSTOMIZATIONS! Ignoring these guidelines will surely buy you grief and cost you more money in the future. If you are new to the applications, be assured that you will be facing upgrades every 18 to 24 months to stay current with technology. If you have not followed these guidelines, then I strongly recommend that you start bringing your environment into compliance today. You can find these guidelines in the AOL Reference Manual.

The major points are to create a separate application top (or tops), separate schema(s), and follow the registration process for your custom objects and code. For example: create a schema named CUSTOM, register the schema, and create a $CUSTOM_TOP directory which will be added to your APPLSYS.env file. If you have extensive customizations, then I suggest that you make separate custom schemas and directories for each module, such as $C_AR_TOP (for "custom" AR applications).

If you are altering the base applications and keeping them in the same installed directory tops, or putting customized database objects in the base schemas, then you are indeed in violation of the prescribed methods. You will certainly be facing a terrible time in your upgrades. When Oracle installs or upgrades its applications (even in a simple patch), it assumes that these standard schemas and directories are its products. The Oracle applications may completely drop or overwrite your custom database objects and code, rendering your applications unusable (and certainly unsupported)! Please take heed of this warning. As the Oracle Fin-Apps DBA, you must see to it that your developers comply.
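A minimal sketch of scaffolding such a custom top, assuming the standard subdirectory tree described earlier. The base path is a scratch stand-in for your real $APPL_TOP, and registering the schema and application happens inside the applications, not here:

```shell
# Scaffold a hypothetical $CUSTOM_TOP mirroring the standard product
# tree. The base is a throwaway temp directory purely for illustration;
# in real life this lives under (or alongside) your $APPL_TOP and the
# new top gets added to APPLSYS.env.
APPL_TOP=$(mktemp -d)
CUSTOM_TOP=$APPL_TOP/custom; export CUSTOM_TOP

for sub in bin forms lib sql srw install; do
    mkdir -p "$CUSTOM_TOP/$sub"
done
ls "$CUSTOM_TOP"
```

Keeping custom code in its own top like this is what lets an Oracle upgrade overwrite the standard tops without touching your work.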
This process would probably best be implemented as part of HR's new-hire process. Conversely, there should also be an exit process for terminating or transferring employees. Then you could put an end-date on their accounts or change their responsibilities. Otherwise, you end up with those "John Doe" scheduled reports in the concurrent managers that keep running in your system, sucking up valuable resources for months after the employee has left!
As you can imagine, the Oracle Applications is a huge set of code that requires much time and effort in regression testing. Hence, it is usually a generation behind the most current Oracle RDBMS technology. As you can see, there are still very old Oracle tools in use with the version 10.x applications, including CRT, Forms 2.4 (just recently upgraded from 2.3), Oracle Reports, etc. Oracle has implemented a "partitioned server" architecture which allows us to take advantage of new RDBMS technology. Normally, the Oracle Applications, or $APPL_TOP, must be linked with code from the RDBMS, or $ORACLE_HOME. In a partitioned server architecture, the applications still link with the older certified version of the RDBMS. With the installation of an interoperability patch, the database engine can run off a more recent release of the RDBMS -- which is where the more significant performance gains and feature-rich solutions can be enjoyed.

A "physical" partitioned server architecture is a variation of the partitioned server configuration just explained. The difference is that the application code ($APPL_TOP) resides on a different server than the database. The applications communicate with the database via SQL*Net. This solution can aid in maximizing resources by allowing the database to reside on your more powerful server, which can be configured and optimized as a database server, while the applications reside on a less powerful server with different configuration considerations. Keep in mind that you must now accommodate more SQL*Net tuning issues. However, the NCA applications thrive upon this multiple-tier architecture, anyway.
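For illustration, a hypothetical tnsnames.ora entry on the applications tier in such a physically partitioned configuration might look like this (the host, port, and SID are placeholders, not values from any real install):

```
# Hypothetical SQL*Net alias on the applications (middle) tier,
# pointing at the database server. All values are placeholders.
PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SID = PROD))
  )
```

Every connection from the applications server then rides this alias across the network, which is why SQL*Net tuning becomes part of the picture.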
Conclusions
Wow! Where does one begin? Quite frankly, the best training is to get involved in an installation or upgrade. I do not recommend that a rookie try to install or upgrade an Oracle Financials environment. You should get professional help from organizations that have a proven track record of upgrades -- but one that is willing to include you on their team and transfer the knowledge. Many consulting companies prefer to hoard this valuable information. In conjunction with real-world experience and training, READ the MANUALS! In particular, read the Oracle AOL Reference Manual, the Oracle Applications Installation Manual, the Oracle Applications System Administration Reference Manual, and the Oracle Applications Users' Guide. Even though I am a seasoned veteran, I always read the new manuals to pick up the changes and new utilities. No one does a more accurate job on the application documentation than the original vendor, Oracle Corporation.

Investigate the log files from the installation and patches. You will learn a world of information from these log files. Also poke into the directories for hidden goodies. I've mentioned them before -- look at $FND_TOP, $AD_TOP, $APPL_TOP, and the product $*_TOP installation directories and files. Look at the environment variables, too -- do a "ps -ef|grep -i appl" and learn what all the variables associated with the Oracle applications mean. Finally, stay connected to the world and information through networking. Subscribe to the applications list server, read third-party books (as well as Oracle's), attend your local and international user group meetings, and share your findings and ideas with other Oracle Applications colleagues. It's a very broad and ever-changing topic!