
Question: The main differences between RPGIV, RPG400 and the newer ILE version?

Answer(s): There are really only two current RPGs for the AS/400: RPG/400 and ILE RPG. RPG IV is the popularized name for ILE RPG.

RPG/400: This is the older RPG that has been around for several years. It uses the Original Program Model, popularly known as OPM, while ILE RPG uses the ILE model. The following information applies to ILE RPG.

ILE RPG: It became available at Version 3. If you are at V3R1 on a CISC machine, I would recommend going to V3R2 to take advantage of the best features of ILE RPG. Some ILE RPG features:

Syntax: All non-external data definitions can now be specified in the D-specifications that are new to ILE RPG. In addition, you can define named constants that greatly simplify coding in the C-specs. C-spec formats have also changed slightly to provide for variable names of up to 10 characters (up from 6 in RPG/400) and longer operation codes.

New operations: Several have been added. One that I like is EVAL, which allows you to evaluate a mathematical expression much as in Cobol and other languages such as Basic, FORTRAN, PL/1, etc.

Modularity: This is a big plus. You can now write modules (non-executable) in several languages and bind them together into a single ILE program. Thus you can use the best language (ILE C, ILE Cobol, ILE RPG, ILE CL) for a process, or use existing modules to write a program. You can also write callable procedures or procedures that function like built-in functions.

Question: I am trying to create a command which will accept five or six parameters. One parameter is called CONO. I would like this parameter CONO to only accept two-character text strings (Z1, AB, JW etc.) *OR* the special value *ALL. The problem I have is with the length of the parameter. If I define it of type *CHAR with a length of two, then it will not compile, saying that *ALL is not valid. If I compile it with a size of four, then it will accept any string up to four characters (ABCD etc.) and does not enforce the two-character rule. I know it can be done, as I have seen a non-IBM command do it (EXCAMTASK, in the JBA software) but I cannot figure it out for myself. All help very much appreciated!

Answer(s): One way to do this is to define the parameter with a length of 2 and specify in the special value list something like (*ALL xx), where xx is the value the CPP will receive when the command is executed using the special value *ALL. The value you assign to xx can be a value you cannot type from the keyboard (e.g. x'0000'), so you don't have to reserve a two-character combination for this special value. HTH

Try this:

PARM KWD(CONO) TYPE(*CHAR) LEN(2) RSTD(*YES) VALUES(Z1 AB JW) SPCVAL(*ALL *A)

When the value *ALL is entered, you receive the translated value *A in your command processing program. If you allow multiple values, e.g. MAX(n), you should use SNGVAL(*ALL *A). Good luck!

You can specify a RANGE (e.g. from "AA" to "ZZ") in the PARM keyword. Just prompt with F4 and you will see it.

First thought (speaking from memory, I'm away from my AS/400) is to make sure that your PARM statement says something like SPCVAL((*ALL ' ')). Of course, you thereby lose the ability to process a CONO of all blanks (or whatever other value you are willing to give up). If this does not work, let me know and I shall be glad to look further.

Try this:

PARM KEYWORD(CONO) TYPE(*CHAR) LEN(2) SPCVAL((*ALL ' '))

The special values allow you to override type and length checks on a parameter with special values (imagine that?). Each special value has two values: value 1 is what the user types in the command; value 2 is the replacement value that is sent to your program. Value 2 must be of the same base type and length declared for the PARM. Special values are exempt from most of the validity-checking rules, and so are the replacement values. In the above example, when the user types CONO(*ALL), your program will actually see '  ' (blanks). See the CL Programmer's Guide for details.

Here's one way:

PARM KWD(CONO) TYPE(*CHAR) LEN(2) RSTD(*YES) +
     VALUES(Z1 AB JW) SPCVAL((*ALL '**'))

Here's another:

PARM KWD(CONO) TYPE(*CHAR) LEN(2) RSTD(*YES) +
     SPCVAL((*ALL '**') (Z1) (AB) (JW))

Note that if the user enters *ALL, your CPP will see **.

This really does not resolve the *ALL option, but we have a program that accepts a parm value of '01' thru '10', or 'AL' for all values. We then have program logic that recognizes that it needs to handle 'AL' differently. It may be a workaround for you. Good luck.
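For readers coming from outside CL, the SPCVAL idea in these answers — restrict input to a short list, but let a special value through and translate it to a replacement before the program sees it — can be sketched in Python. This is only an analogy; `check_cono` and the dictionaries are illustrative names, not part of any IBM API:

```python
# Sketch of CL-style parameter validation with a special value.
# RSTD(*YES) VALUES(Z1 AB JW) restricts input to a list;
# SPCVAL((*ALL '**')) lets "*ALL" through and hands the CPP "**" instead.
VALID_VALUES = {"Z1", "AB", "JW"}       # VALUES(Z1 AB JW)
SPECIAL_VALUES = {"*ALL": "**"}         # SPCVAL((*ALL '**'))

def check_cono(value: str) -> str:
    """Return the value the command-processing program would receive."""
    if value in SPECIAL_VALUES:
        # Special values bypass the length and restricted-value checks.
        return SPECIAL_VALUES[value]
    if len(value) != 2 or value not in VALID_VALUES:
        raise ValueError(f"CONO({value}) not allowed")
    return value
```

So `check_cono("*ALL")` yields the translated `"**"`, a valid two-character value passes through unchanged, and anything else is rejected — the same behavior the PARM statement gives you for free.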

Question: Is there an easy way to convert a character to decimal? We have a PC file that we are getting weekly and uploading to the AS/400, and then using an RPG program to convert the data to a physical file. The file has a number field at the beginning that can be up to 6 digits long. The numbers are zero-suppressed when we get them, so for example the field may be 1, 100, 1000, 10000 and so on. So I moved the field to an array and then moved each digit from right to left to a number field. Is there an easier way of doing this? There is probably an easy solution that I just am not thinking of... Thanks.

Answer(s): The 'atoi' and 'atof' C functions handle conversion of numbers containing decimal points and signs. If the number may contain a decimal point, use 'atof' (it doesn't allow a comma - you'd have to XLATE ',':'.'). Be sure to use half-adjust with 'atof' because the result is floating-point. Here's an example.

H BNDDIR('QC2LE')

D ATOI            PR            10I 0 EXTPROC('atoi')
D  NUM                            *   VALUE OPTIONS(*STRING)
D ATOF            PR             8F   EXTPROC('atof')
D  NUM                            *   VALUE OPTIONS(*STRING)

D NUM             S             10A
D I               S             10I 0
D P               S             13P 7

C                   MOVEL     '-100 '       NUM
C                   EVAL      I = ATOI(%TRIM(NUM))
C*                  > Eval I  ... I = -100
C                   EVAL(H)   P = ATOF(%TRIM(NUM))
C*                  > Eval P  ... P = -000100.0000000
C                   MOVEL     '-5.67 '      NUM
C                   EVAL(H)   P = ATOF(%TRIM(NUM))
C*                  > Eval P  ... P = -000005.6700000
C                   RETURN

No reason to use C when you've got RPG....

Example #1 - Packed numeric to alpha: use the Z-ADD opcode to decompress the packed field, and then the MOVE opcode to place it into a same-sized alpha field.

C*
C* FIELDA = 7,0(P)   FIELDB = 7,0(S)   FIELDC = 7(A)
C*
C                   Z-ADD     FIELDA        FIELDB
C                   MOVE      FIELDB        FIELDC
C*

Example #2 - Zoned decimal with two decimal places to a character field.

C*
C* FIELDA = 7,2(S)   FIELDB = 7,0(S)   FIELDC = 7A
C*
C     FIELDA        MULT      100           FIELDB
C                   MOVE      FIELDB        FIELDC
C*

Example #3 - Character field to zoned decimal with two decimal places.

C*
C* FIELDA = 7(A)     FIELDB = 7,0(S)   FIELDC = 7,2(S)
C*
C                   MOVE      FIELDA        FIELDB
C     FIELDB        MULT      .01           FIELDC
C*
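Outside RPG, the same job — parse a zero-suppressed, possibly signed numeric string and "half-adjust" (round half up) to a fixed number of decimals — reduces to a trim and a decimal parse. A minimal Python sketch; `to_packed` is an illustrative name, not anything from the original answers:

```python
from decimal import Decimal, ROUND_HALF_UP

def to_packed(value: str, decimals: int) -> Decimal:
    """Parse a zero-suppressed numeric string (e.g. '1', '100', '-5.67')
    and half-adjust it to a fixed number of decimal places, the way
    EVAL(H) rounds the result of atof()."""
    quantum = Decimal(1).scaleb(-decimals)   # e.g. 0.01 for 2 decimals
    return Decimal(value.strip()).quantize(quantum, rounding=ROUND_HALF_UP)
```

For instance, `to_packed(' -5.67 ', 7)` gives `-5.6700000`, matching the `13P 7` result field in the atof example above.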

Question: I wonder if anyone could help me out... I'm trying to automate an FTP job using CL. On a daily basis we need to FTP several files to another system and I would like to automate this process using CL. Is this possible? We don't have any additional FTP software, just what came with the OS (version 3.2, by the way). I can see how to actually connect, i.e. FTP ('123.123.91.230'), but I'm lost as to where to go next. How do I add my BIN and PUT commands?

Answer(s): FTP in batch is very simple. You need three files: 1. A CL program. 2. A file containing FTP commands. 3. An empty file to receive the FTP log messages. The command and log files can have any name you want. Create 2 physical files, each with a single 80-character data field. The command file, in this example FTPCMD, might look like this (put your own userid and password on the first line):

USERID PASSWORD
get LIBRARY/FILE.MEMBER LIBRARY/FILE.MEMBER (REPLACE
CLOSE

The message file, in this example FTPMSG, is empty. The CL looks like this:

PGM
    CLRPFM FTPMSG
    OVRDBF FILE(INPUT) TOFILE(TOLCLINIC/FTPCMD)
    OVRDBF FILE(OUTPUT) TOFILE(TOLCLINIC/FTPMSG)
    FTP RMTSYS()
ENDPGM
When you run this program it will execute the commands in the INPUT file and write the log to the output file. I have used this method to move source members from a test AS/400 to a production AS/400. When I tried to move a data file it didn't work right.

In the TCP/IP guide there is an example of batch FTP. The operating system support is extremely weak, but here goes. In your CL, execute the following code:

OVRDBF INPUT TOFILE(MYFTP) TOMBR(MYFTPIN)
OVRDBF OUTPUT TOFILE(MYFTP) TOMBR(MYFTPOUT)
FTP 'xxx.xxx.xxx.xxx'

MYFTP is a source file created with a record length of at least 92. The MYFTPIN member contains a script of the commands you want to execute. The first record must contain the user ID and password to log in:

------------------------------------------
myid mypassword
binary
namefmt 1
mode s
mget remotefile.* qgpl/myfile
quit
------------------------------------------

The MYFTPOUT member will contain the response to your input script. You must examine this file to determine whether your script executed properly.

We do this exactly in our shop. Look at section 7-36 of the TCP/IP manual. You will see an example similar to what we do:

/*-------------------------------------------------*/
/* Process FTP commands to send file               */
/*-------------------------------------------------*/
OVRDBF FILE(INPUT) TOFILE(QFTPSRC) MBR(FTPCMD)
OVRDBF FILE(OUTPUT) TOFILE(QFTPSRC) MBR(FTPLOG)
FTP RMTSYS(SYS01)
....

FTPCMD is a member which contains the FTP commands to process (we actually build this member on the fly, as username, password and file names change):

username password
put localfile remotefile
quit
...

Member FTPLOG produces output like this:

Output redirected to a file.
Input read from specified override file.
Connecting to host name SYS01 at address xxx.xx.x.xxx using port 21.
220 sys01 FTP server (Version 4.29 Thu Jan 30 14:58:02 CST 1997) ready.
215 UNIX Type: L8 Version: BSD-44
Enter login ID (username):
331 Password required for username.
230 User username logged in.
Enter an FTP subcommand.
> put LOCALFILE REMOTEFILE
200 PORT command successful.
150 Opening data connection for LOCALFILE.
226 Transfer complete. 1219 bytes transferred in 0.745 seconds. Transfer rate 1.636 KB/sec.
Enter an FTP subcommand.
> quit
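All three answers end with the same caveat: batch FTP gives you no return code, so you must scan the output log yourself. That check can be made mechanical. A small Python sketch — `script_succeeded` is an illustrative name; the reply-code conventions (4xx/5xx = failure, 226 = transfer complete) come from the FTP protocol, RFC 959:

```python
def script_succeeded(log_lines):
    """Scan FTP server replies (like the FTPLOG member above) and report
    whether the transfer completed. Replies starting 4xx or 5xx are
    failures; a 226 reply marks a completed data transfer."""
    completed = False
    for line in log_lines:
        code = line[:3]
        if code.isdigit():
            if code[0] in "45":      # transient or permanent failure
                return False
            if code == "226":        # Transfer complete
                completed = True
    return completed
```

Run against the sample FTPLOG output above, this returns True; a log containing "530 Login incorrect." or no 226 line at all returns False.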

Question: Does anyone have an idea how to write an RPG program which executes a dynamic SQL select statement where the number and the types of the selected columns are not known? What are the limits of such an SQL statement?

Answer(s): Funny, because this same subject just came up on the NEWSLINK400 forum. Yes, you can prepare a dynamic select statement for a cursor where you basically don't have to know anything at compile time, including the columns, the files, etc. However, if you are planning on doing multiple opens and closes of such cursors and then changing the columns or something else like that, then you must re-prepare the select statement and reopen the cursor. An SQL PREPARE statement is potentially very slow because the system must build an access plan based on available access paths, etc. Therefore, SQL does allow you to place something called a parameter marker (which is a question mark) in any place where static SQL allows you to place a host variable. Excluded from this are things like column lists and file names, so SQL wouldn't permit me to prepare a statement formed as follows:

Eval SqlStm = 'Select ? from ?'

Too bad! However, SQL would allow me to prepare the following statement:

Eval SqlStm = 'Select * From Customer Where CuName > ?'

You substitute for parameter markers with the USING clause of the OPEN cursor SQL statement. I will show an example below. The following is a program I wrote and checked out with the debugger to make sure it was executing correctly. Notice this example does show how to use parameter markers. Notice the open cursor statement must specify the name of a host variable to substitute for the question marks. Hope this helps.
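The parameter-marker idea is the same in any SQL interface: prepare a statement with '?' placeholders where host variables would go (never for column lists or table names), then supply values at execute/open time. A Python/sqlite3 sketch of the first example in the program below; the Customer rows here are fabricated for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (CuName TEXT, CuStat TEXT)")
conn.executemany("INSERT INTO Customer VALUES (?, ?)",
                 [("Harris", "A"), ("Adams", "C")])

# Statement with a parameter marker, like 'Where CuName > ?';
# the value is bound at execute time, as with Open ... Using :GtName.
stmt = "SELECT CuName, CuStat FROM Customer WHERE CuName > ?"
rows = conn.execute(stmt, ("H",)).fetchall()   # binds 'H' for the marker
```

Trying the forbidden form — `"SELECT ? FROM ?"` — fails to prepare in sqlite3 too, for the same reason the text gives: markers stand for values, not for identifiers the optimizer needs at prepare time.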

D GenCust         DS
D  CuName                       25
D  CuStat                        1
D CustAp          DS
D  CuOv30                        9P 2
D  CuOv60                        9P 2
D  CuOv90                        9P 2
D OrdDs         E DS                  ExtName( OrdHdr )
D SelectStm       S            200
D PrmChoice       S              1
D ChoiceGenCust   C                   'G'
D ChoiceCustAp    C                   'A'
D ChoiceOrd       C                   'O'
D GtName          S             25    Inz( 'H' )
D GtCuOv30        S              9P 2 Inz( 100 )
D EqOhStat        S              1    Inz( 'C' )

C/Exec SQL
C+ Declare DynamCsr Cursor For
C+   DynSqlStm
C/End-Exec

C     *Entry        Plist
C                   Parm                    PrmChoice

C                   Select

C                   When      PrmChoice = ChoiceGenCust
C                   Eval      SelectStm = 'Select CuName, CuStat ' +
C                                         'From Customer '         +
C                                         'Where CuName > ?'
C/Exec SQL
C+ Prepare DynSqlStm From :SelectStm
C/End-Exec
C/Exec SQL
C+ Open DynamCsr Using :GtName
C/End-Exec
C/Exec SQL
C+ Fetch Next From DynamCsr Into :GenCust
C/End-Exec

C                   When      PrmChoice = ChoiceCustAp
C                   Eval      SelectStm = 'Select CuOv30,'   +
C                                         ' CuOv60,'         +
C                                         ' CuOv90 '         +
C                                         'From Customer '   +
C                                         'Where CuOv30 > ?'
C/Exec SQL
C+ Prepare DynSqlStm From :SelectStm
C/End-Exec
C/Exec SQL
C+ Open DynamCsr Using :GtCuOv30
C/End-Exec
C/Exec SQL
C+ Fetch Next From DynamCsr Into :CustAp
C/End-Exec

C                   When      PrmChoice = ChoiceOrd
C                   Eval      SelectStm = 'Select * '        +
C                                         'From OrdHdr '     +
C                                         'Where OhStat = ?'
C/Exec SQL
C+ Prepare DynSqlStm From :SelectStm
C/End-Exec
C/Exec SQL
C+ Open DynamCsr Using :EqOhStat
C/End-Exec
C/Exec SQL
C+ Fetch Next From DynamCsr Into :OrdDs
C/End-Exec

C                   EndSl

C/Exec SQL
C+ Close DynamCsr
C/End-Exec

C                   Eval      *INLR = *On

So far so good, but I think I did not express my problem clearly enough - second try: a user enters an SQL statement like:

select kskos, kv#01, kv#02 from ks, su where kskos = sukos

(or any other valid select statement). My little program does not know which tables are concerned and how many columns (and of course which types) will come out of this. So my question is: is this possible, and if it is, how is it done?

I was hoping you weren't expecting to do something this dynamic. Oh well. Let's give this a whack. The SQL programming guide suggests that what you want to do is possible with the SQL DESCRIBE statement. I'm going to cut and paste from that manual and then make some comments.

===Start Manual excerpt===

8.3.2 Varying-List Select-Statements

In dynamic SQL, varying-list SELECT statements are ones for which the number and format of result columns to be returned are not predictable; that is, you do not know how many variables you need, or what the data types are. Therefore, you cannot define host variables in advance to accommodate the result columns returned. Note: In REXX, steps 5b, 6, and 7 are not applicable. If your application accepts varying-list SELECT statements, your program has to:

1. Place the input SQL statement into a host variable.
2. Issue a PREPARE statement to validate the dynamic SQL statement and put it into a form that can be run. If DLYPRP(*YES) is specified on the CRTSQLxxx command, the preparation is delayed until the first time the statement is used in an EXECUTE or DESCRIBE statement, unless the USING clause is specified on the PREPARE statement.
3. Declare a cursor for the statement name.
4. Open the cursor (declared in step 3) that includes the name of the dynamic SELECT statement.
5. Issue a DESCRIBE statement to request information from SQL about the type and size of each column of the result table.

   Notes:
   a. You can also code the PREPARE statement with an INTO clause to perform the functions of PREPARE and DESCRIBE with a single statement.
   b. If the SQLDA is not large enough to contain column descriptions for each retrieved column, the program must determine how much space is needed, get storage for that amount of space, build a new SQLDA, and reissue the DESCRIBE statement.

6. Allocate the amount of storage needed to contain a row of retrieved data.
7. Put storage addresses into the SQLDA (SQL descriptor area) to tell SQL where to put each item of retrieved data.
8. FETCH a row.
9. When end of data occurs, close the cursor.
10. Handle any SQL return codes that might result.

====End of Manual Excerpt==

First of all, doing the things described here is non-trivial, to say the least. You have to make sure you have created your SQLDA properly. You have to programmatically decipher the outcome of the SQL DESCRIBE statement well enough to know the size of the buffer necessary to hold a row (or record). Or you could simply allocate a very large buffer of, say, 32,767 bytes (or even smaller, if you know the maximum record length of all possible files on your system). But when you get all done, you need to parse the SQLDA to pick up the type and size of each field. If that isn't bad enough, these 10 steps probably left off the hardest part of all. So I've performed the 10 steps above and I have a record in a dynamically allocated chunk of memory. I have in my SQLDA a description of each field in this chunk of memory. If a particular field is character, I can probably deal with that because of RPG's powerful %Subst and other string-handling BIFs. But what if the field is packed? There are 9,920 possible packed numeric descriptions. I suppose you could have a based data structure where you overlay these 9,920 possibilities on top of each other, but it seems to me you would also need an RPG Select statement with 9,920 possible When clauses. Since that's not practical, we have to take another approach. Here is what I would do.
I would write a subprocedure which accepts a character string of any length from 1 to 16 bytes and returns the numeric value. The simplest thing to do here is have the subprocedure accept a varying-length field by value; that way you can pass it a fixed-length field and not have to pass the length, and the subprocedure can use the %Len BIF to pick up the length. This subprocedure would ignore the number of decimals and assume that the packed field has 0 decimals. It would then examine the string byte by byte and build up the actual value from the packed value contained in the string. The procedure then returns this value. The SQLDA gives me the number of decimals. From that point forward you can use RPG's %Dec BIF. For example, suppose the procedure returns a value (ignoring decimals) which you put into a 30,0 field called DecVal. Also suppose you store the number of digits into a field called NumDigits, and the number of decimals into another 2,0 field (or 3,0 if you're fussy about a silly performance issue) called NumDecs. Then from this point forward, you can refer to your packed field with %Dec( DecVal: NumDigits: NumDecs ). Obviously this is a lot of work, and my guess is you're going to conclude that it is not worth it. But anyway, it was worth it to learn something.

I made a mistake (surprise!).

>Then from this point forward, you can refer to your packed
>field with %Dec( DecVal: NumDigits: NumDecs ).

You would actually have to refer to your packed field as follows:

%Dec( DecVal / ( 10 ** NumDecs ): NumDigits: NumDecs )

Mike Cravitz
NEWS/400 Technical Editor
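The byte-by-byte subprocedure described above can be made concrete. Packed decimal stores two digits per byte, one per nibble, with the final nibble a sign (x'F' or x'C' positive, x'D' negative). A Python sketch of the same idea — build the value ignoring decimals, then apply the scale the SQLDA gives you; `unpack_decimal` is an illustrative name:

```python
def unpack_decimal(data: bytes, decimals: int = 0):
    """Decode IBM packed-decimal bytes: two digits per byte (one per
    nibble), with the last nibble holding the sign (0xD = negative).
    The decimal scale is applied afterwards, as with %Dec."""
    nibbles = []
    for byte in data:
        nibbles.append(byte >> 4)
        nibbles.append(byte & 0x0F)
    sign = nibbles.pop()                 # final nibble is the sign
    value = 0
    for digit in nibbles:                # build the value digit by digit
        value = value * 10 + digit
    if sign == 0x0D:
        value = -value
    return value / 10 ** decimals if decimals else value
```

So the bytes x'123D' decode to -123, and x'567C' with 2 decimals to 5.67 — the same division by 10 ** NumDecs that the corrected %Dec expression performs.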

Question: Can anyone who actually knows explain to me what data queues are, what the benefits of using them are, and how to use them? An IBM book number would be greatly appreciated. I am considering using them in a system we are writing at work to get our AS/400 to dial out on an asynchronous line to TransUnion for credit checks. This program will be accessed by a number of service reps, and I am hoping that a data queue guru will shed some light on the ins and outs.

Answer(s): Data queues are a cross between data areas and message queues. They are a method for asynchronous communication between programs. A typical use for a data queue is to have a job sitting in a batch subsystem waiting for a data queue entry to be created, and multiple programs dropping entries into the data queue. The ERP system my company uses has a single process to print invoices, which is triggered by entries to the data queue from multiple order entry staff. It sounds like your application fits the bill for using data queues. There are API programs to read and write data queues, and they are quite straightforward to use. If memory serves, they are QSNDDTAQ and QRCVDTAQ, and they are well documented in the book, although I don't know the number. If you like I can send you examples.

Benefits:
- performance can be dramatically improved over individual submits if the job is complex
- record locking conflicts are eliminated if only one job is updating
- they can facilitate clean modular design

Drawbacks:

- they are hard to document well - the next programmer will have to think to figure them out
- they can't really be audited, backed up, or for that matter conveniently examined
- the contents are almost invisible, although smart programmers have written programs to read the queue, print the entry, and rewrite it
- once an entry is read it is gone; if the program halts, the entry is lost. This can be gotten around with an audit file: write a record when the entry is written, and have the receiver program update a status field when done.

Also, data queues don't support data definition, so you do need to use data structures if you intend to pass more than a single data element. In the example above, the data queue holds the order to be invoiced as well as the output queue in which to place the spooled file and the user to notify when it is printed.

Explaining them to a 'data queue beginner' is maybe easiest by comparing them to other objects to see similarities and differences. Then you can get into purpose and usability. A data queue is similar to a database file that has records written to it. One program (or many programs) can send entries to the queue. Each entry is similar to a record. Another program (or many programs) can read entries back from the queue, similar to reading records. Differences to begin with are in formats (record descriptions), reading the same entries more than once, and speed. An entry on a data queue has no external description; it's just a string of bytes. If you want something like "fields", you'll have to do all the concatenating and substringing yourself. Normally, an entry is read only once. When the entry is read off the queue, it is gone. The first program to read the entry gets it and then it's gone. (It's possible to get around this, but there's seldom a reason to.) Data queues are designed to provide fast communication between programs.
You might have a dozen programs feeding entries onto a queue and a single program receiving those entries. The entries might represent transactions that you want performed against your database, and you don't want those dozen programs all doing it individually; you centralize the process in the receiver program. The time it takes for an entry to be sent from one program and be received by another is minimal, less than if you used a file to hold records. Alternatively, you might have one program feeding entries as fast as it can onto a queue and a dozen programs receiving entries. By having the transactions processed by a dozen programs, you can multiply the work being done. And since each entry is removed from the queue when it's received, you don't have to worry about another program getting the same entry. The speed is partially achieved by eliminating overhead done by the system. An example is the way the system handles the space used by a data queue as entries are added and removed. If you start a program up to add entries to the queue but there's no program started to receive the entries, the allocated space gets bigger. When the entries are later received and removed from the queue, the space allocated does _not_ get smaller. You must delete and recreate the data queue to recover excess space if you want it back. This means you must know the original parameters used to create the *DTAQ object so you can recreate one to match. (There's an API to get this info that you can get into later.) If you prefer, you can think of a *DTAQ as being similar to a message queue. You can send messages from one program and another can receive them from the *MSGQ. If you do a RCVMSG RMV(*YES), the message is gone from the *MSGQ, similar to how an entry is removed from a *DTAQ. And a *DTAQ entry has a format similar to a message; i.e., there's no format except what you create yourself. (Note that MSGDTA() can be used to provide some general formatting with a message.)
Entries are generally sent by calling the QSNDDTAQ API and received by calling the QRCVDTAQ API. One handy use for me is in CL programs, where you're limited to a single file declaration. If you use these APIs, you can use any number of *DTAQs to simulate physical files, either for passing info from one part of a program to another or for passing to a different program(s). Perhaps start by creating a *DTAQ with CRTDTAQ and writing a program to send some entries to it. Then do a DMPOBJ and examine the output. Then write a second program to receive the entries and do a second DMPOBJ. Testing it out can be done with some pretty small CLPs. Data queue APIs are technically described for Version 4 in the OS/400 Object APIs manual on the Systems Programming Support Bookshelf. They work quite easily. One program stores information in the queue:

C                   CALL      'QSNDDTAQ'                       90
C                   PARM      'DTQ_PMC'     P1DTAQ  10           (name of the data queue)
C                   PARM      '*LIBL'       P1DLIB  10           (library of the data queue)
C                   PARM      8             P1LEN    5 0         (length of answer)
C                   PARM                    P1RC                 (answer)

The background job reads the information:

C                   CALL      'QRCVDTAQ'                       9192
C                   PARM      'DTQ_PMC'     P1DTAQ  10
C                   PARM      '*LIBL'       P1DLIB  10
C                   PARM                    P1LEN    5 0
C                   PARM                    P1RC
C                   PARM      -1            P1WAIT   5 0         (wait till somebody puts something in it)

If the background job receives a 9, the program stops receiving. Quite simple, but effective.

Others have described how data queues enable asynchronous communication between multiple jobs running on the AS/400. Another aspect is the ability to communicate between PC programs and AS/400 jobs via Client Access APIs.
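The pattern the answers describe — many producers enqueueing work, one consumer receiving with a wait, each entry delivered exactly once and gone when read — maps directly onto any thread-safe FIFO. A Python sketch with `queue.Queue` standing in for the *DTAQ (put ≈ QSNDDTAQ, blocking get ≈ QRCVDTAQ with a -1 wait; the names and the 'QUIT' sentinel are illustrative, echoing the '9' stop code above):

```python
import queue
import threading

dtaq = queue.Queue()                 # stands in for the *DTAQ object

def service_rep(rep_id):
    """Producer: drop a credit-check request on the queue (like QSNDDTAQ)."""
    dtaq.put(f"credit-check for rep {rep_id}")

def credit_checker(results):
    """Consumer: receive entries with a wait (like QRCVDTAQ). Each entry
    goes to exactly one receiver and is removed once read."""
    while True:
        entry = dtaq.get()           # blocks, like a -1 "wait forever" receive
        if entry == "QUIT":          # sentinel to stop receiving
            break
        results.append(entry)

results = []
worker = threading.Thread(target=credit_checker, args=(results,))
worker.start()
for rep in range(3):                 # several service reps send requests
    service_rep(rep)
dtaq.put("QUIT")
worker.join()
```

After the join, `results` holds all three requests, each consumed exactly once — the property that lets a dozen senders share one receiver (or one sender feed a dozen receivers) without duplicate processing.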

The data queue is a very simple concept. In your application you would have a single job that handles credit checks. When a credit check is needed, the program talking to the service rep sends a message to the data queue for the credit check job. This "wakes up" the waiting credit check job and it proceeds to dial and do the credit check. When it's done, it sends a message back to a data queue for the requesting job, waking that job back up and giving it the results of the check. You can do various things, like have the credit check job check the incoming data queue for new messages needing processing before hanging up the line after completing a credit check. Just use a "dequeue but don't wait" operation in this case, versus the usual "dequeue with wait" operation. For some reason queues are rarely used (or provided as primitives) in computer systems, even though they are one of the most efficient and easiest-to-manage mechanisms for synchronizing multi-threaded applications.

An RPGLE example:

D DAT             C                   'hello world'

C     PSNDDQ        PLIST
C                   PARM                    DataQ            10
C                   PARM                    DataQLib         10
C                   PARM                    DataLength        5 0
C                   PARM                    Data             50

C     PRCVDQ        PLIST
C                   PARM                    DataQ            10
C                   PARM                    DataQLib         10
C                   PARM                    DataLength        5 0
C                   PARM                    Data             50
C                   PARM                    Wait              5 0
 ......
 *
 * Place an entry on a data queue
 *
C                   MOVEL     'MyDataQ'     DataQ
C                   MOVEL     'MyLib'       DataQLib
C                   Z-ADD     11            DataLength
C                   MOVEL     DAT           Data
C                   CALL      'QSNDDTAQ'    PSNDDQ
 .......
 *
 * Read from the data queue until the data read is 'QUIT'
 *
C     dqdata        doueq     'QUIT'
C                   movel     'MyDataQ'     DataQ
C                   movel     'AGCTI'       DataQLib
C                   move      *BLANKS       Data
C                   z-add     *ZERO         DataLength
C                   z-add     -1            Wait               // Wait forever
C                   call      'QRCVDTAQ'    PRCVDQ
 * Add code to process the data received
C                   enddo

Question: Now I want to compare two strings letter by letter using RPG. Does anyone know a function with which I can implement this (other than %SCAN)?

Answer(s): I'm not sure what you want to achieve, but to compare two fields character by character you could use the following code:

(sample definitions)
D FIELD1          S            999
D FIELD2          S            999
D LENGTH          S              3  0 INZ(%SIZE(FIELD1))
D INDEX           S                   LIKE(LENGTH)

C                   DO        LENGTH        INDEX
C                   IF        %SUBST(FIELD1:INDEX:1) =
C                             %SUBST(FIELD2:INDEX:1)
C                   ...
C                   ENDIF
C                   ENDDO

Why can't you just compare them? IF Field1 = Field2?

If Field1 is 7 characters long and Field2 is 8 characters long, RPG will test at the greater length, padded with blanks, so you will test "America " (with a blank) = "American", and that's false.

How about: If Trim(Field1) = Substr(Field2,Len(Field1)) Then

Shouldn't it be: If Trim(Field1) = Substr(Field2,Len(Trim(Field1))) ?? But they don't match... with or without the blank... so I don't see the point. Unless you want:

If Str1 = %SubSt(Str2:1:%Len(%Trim(Str1)))

You could use the C function strcmp:

D strcmp          PR            10I 0 ExtProc('strcmp')
D  s1                         1000    Value
D  s2                         1000    Value

To use this you would have to terminate the strings with nulls:

C                   Eval      rc = strcmp(s1 + x'00' : s2 + x'00')

But using strcmp gives the exact same answer as comparing the strings in RPG, only in a slower and more complicated way. The prototype should be one of these (two ways to fix the parameters, times two different functions - strcmp takes 2 parms, strncmp takes 3):

D strcmp          PR            10I 0 ExtProc('strcmp')
D  s1                         1000    Const
D  s2                         1000    Const

D strcmp          PR            10I 0 ExtProc('strcmp')
D  s1                            *    Value options(*string)
D  s2                            *    Value options(*string)

OPTIONS(*STRING) isn't strictly necessary.

D strncmp         PR            10I 0 ExtProc('strncmp')
D  s1                         1000    Const
D  s2                         1000    Const
D  len                          10u 0 Value

D strncmp         PR            10I 0 ExtProc('strncmp')
D  s1                            *    Value options(*string)
D  s2                            *    Value options(*string)
D  len                          10u 0 Value
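The pitfall in this thread is worth restating: RPG compares fixed-length fields with the shorter operand blank-padded (so 'America ' ≠ 'American'), while C's strcmp stops at a null terminator. Both semantics can be sketched in Python for illustration — these functions model the behavior, they are not the RPG runtime or the C library:

```python
def rpg_compare(a: str, b: str) -> bool:
    """RPG-style compare: pad the shorter operand with blanks to the
    longer length, then compare position by position."""
    width = max(len(a), len(b))
    return a.ljust(width) == b.ljust(width)

def c_strcmp(a: bytes, b: bytes) -> int:
    """C strcmp-style compare: only bytes up to the first NUL count;
    returns <0, 0 or >0 like the library function."""
    a = a[:a.index(b"\x00")] if b"\x00" in a else a
    b = b[:b.index(b"\x00")] if b"\x00" in b else b
    return (a > b) - (a < b)
```

With blank padding, "America" vs "American" compares unequal, exactly as the answer says; and the null-terminated compare explains why the RPG caller had to append x'00' before handing fixed-length fields to strcmp.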

Question: There is an ILE C example for the Dynamic Screen Manager API in the book, but we need one in ILE RPG. If someone has done this already or knows where to find one, please let me know.

Answer(s): I once got this sample from someone; it's definitely a good start.

 *
 * Bind with *SRVPGM QSNAPI
 *
D F3              c                   x'33'
D sa_norm         c                   x'20'
D txt             s            128    inz('Press Enter to Roll, F3.')
D txtlen          s              9b 0 inz(32)
D err             s              8    inz(x'0000000000000000')
D aid             s              1
D lines           s              9b 0 inz(1)
D wf1             s              1
D wrtn            s              9b 0

D ClrScr          pr             9b 0 extproc('QsnClrScr')
D  mode                          1    options(*nopass) const
D  cmdbuf                        9b 0 options(*nopass) const
D  env                           9b 0 options(*nopass) const
D  error                         8    options(*nopass)

D WrtDta          pr             9b 0 extproc('QsnWrtDta')
D  data                        128    const
D  datalen                       9b 0 const
D  fldid                         9b 0 options(*nopass) const
D  row                           9b 0 options(*nopass) const
D  col                           9b 0 options(*nopass) const
D  strmatr                       1    options(*nopass) const
D  endmatr                       1    options(*nopass) const
D  strcatr                       1    options(*nopass) const
D  endcatr                       1    options(*nopass) const
D  cmdbuf                        9b 0 options(*nopass) const
D  env                           9b 0 options(*nopass) const
D  error                         8    options(*nopass)

D GetAID          pr             1    extproc('QsnGetAID')
D  aid                           1    options(*nopass)
D  env                           9b 0 options(*nopass) const
D  error                         8    options(*nopass)

D RollUp          pr             9b 0 extproc('QsnRollUp')
D  lines                         9b 0 const
D  top                           9b 0 const
D  bottom                        9b 0 const
D  cmdbuf                        9b 0 options(*nopass) const
D  env                           9b 0 options(*nopass) const
D  error                         8    options(*nopass)

C                   Eval      wrtn = ClrScr('0' : 0 : 0 : err)
C                   DoW       wrtn = 0
C                   Eval      wrtn = WrtDta(txt : txtlen : 0 : 23 : 2 :
C                                     sa_norm : sa_norm : sa_norm :
C                                     sa_norm : 0 : 0 : err)
C                   Eval      wf1 = GetAID(aid : 0 : err)
C                   If        aid = F3
C                   Leave
C                   EndIf
C                   Eval      wrtn = RollUp(lines : 1 : 24 : 0 : 0 : err)
C                   EndDo
C                   SetOn                                        Lr
C                   Return

Question: Does anyone have any examples of calling APIs from CL? Answer(s): /*-------------------------------------------------------------------*/ /* Program Summary: */ /* */ /* Initialize binary values */ /* Create user space (API CALL) */ /* Load user space with member names (API CALL) */ /* Extract entries from user space (API CALL) */ /* Loop until all entries have been processed */ /* */ /*-------------------------------------------------------------------*/ /* API (application program interfaces) used: */ /* */ /* QUSCRTUS create user space */ /* QUSLMBR list file members */ /* QUSRTVUS retrieve user space */ 11

/* See SYSTEM PROGRAMMER'S INTERFACE REFERENCE for API detail.       */
/*                                                                   */
/*-------------------------------------------------------------------*/
PGM
/*-------------------------------------------------------------------*/
/* $POSIT - binary fields to control calls to APIs.                  */
/* #START - get initial offset, # of elements, length of element.    */
/*-------------------------------------------------------------------*/
DCL &$START *CHAR 4                    /* $POSIT */
DCL &$LENGT *CHAR 4                    /* $POSIT */
DCL &#START *CHAR 16
DCL &#OFSET *DEC (7 0)
DCL &#ELEMS *DEC (7 0)
DCL &#LENGT *DEC (7 0)
/*-------------------------------------------------------------------*/
/* Error return code parameter for the APIs                          */
/*-------------------------------------------------------------------*/
DCL &$DSERR *CHAR 256
DCL &$BYTPV *CHAR 4
DCL &$BYTAV *CHAR 4
DCL &$MSGID *CHAR 7
DCL &$RESVD *CHAR 1
DCL &$EXDTA *CHAR 240
/*-------------------------------------------------------------------*/
/* Define the fields used by the create user space API.              */
/*-------------------------------------------------------------------*/
DCL &$SPACE *CHAR 20 ('LSTOBJR   QTEMP     ')
DCL &$EXTEN *CHAR 10 ('TEST')
DCL &$INIT  *CHAR 1  (X'00')
DCL &$AUTHT *CHAR 10 ('*ALL')
DCL &$APITX *CHAR 50
DCL &$REPLA *CHAR 10 ('*NO')
/*-------------------------------------------------------------------*/
/* various other fields                                              */
/*-------------------------------------------------------------------*/
DCL &$FORNM *CHAR 8  ('MBRL0200')      /* QUSLMBR  */
DCL &$FIELD *CHAR 30                   /* QUSRTVUS */
DCL &$MEMBR *CHAR 10
DCL &$FILLB *CHAR 20 ('QDDSSRC   JCRCMDS   ')
DCL &$MBRNM *CHAR 10 ('*ALL      ')
DCL &$MTYPE *CHAR 10
DCL &COUNT  *DEC (5 0)
/*-------------------------------------------------------------------*/
/* Initialize Binary fields and build error return code variable     */
/*-------------------------------------------------------------------*/
CHGVAR %BIN(&$START) 0
CHGVAR %BIN(&$LENGT) 50000
CHGVAR %BIN(&$BYTPV) 8
CHGVAR %BIN(&$BYTAV) 0
CHGVAR &$DSERR +
       ( &$BYTPV || &$BYTAV || &$MSGID || &$RESVD || &$EXDTA )
/*-- Create user space. ---------------------------------------------*/
CALL PGM(QUSCRTUS) PARM(&$SPACE &$EXTEN &$INIT +
     &$LENGT &$AUTHT &$APITX &$REPLA &$DSERR)
/*-------------------------------------------------------------------*/
/* Call API to load the member names to the user space.              */
/*-------------------------------------------------------------------*/
A:   CALL PGM(QUSLMBR) PARM(&$SPACE &$FORNM &$FILLB +
          &$MBRNM '0' &$DSERR)
CHGVAR %BIN(&$START) 125
CHGVAR %BIN(&$LENGT) 16
/*-------------------------------------------------------------------*/
/* Call API to return the starting position of the first block, the  */
/* length of each data block, and the number of blocks.              */
/*-------------------------------------------------------------------*/
CALL PGM(QUSRTVUS) PARM(&$SPACE &$START &$LENGT +
     &#START &$DSERR)
CHGVAR &#ELEMS %BIN(&#START 9 4)       /* # OF ENTRIES   */
IF (&#ELEMS = 0) GOTO C                /* NO OBJECTS     */
CHGVAR &#OFSET %BIN(&#START 1 4)       /* TO 1ST OFFSET  */
CHGVAR &#LENGT %BIN(&#START 13 4)      /* LEN OF ENTRIES */
CHGVAR %BIN(&$START) (&#OFSET + 1)
CHGVAR %BIN(&$LENGT) &#LENGT
/*-------------------------------------------------------------------*/
/* Call API to retrieve the data from the user space. &#ELEMS        */
/* is the number of data blocks to retrieve. Each block contains     */
/* the name of a member.                                             */
/*-------------------------------------------------------------------*/
CHGVAR &COUNT 0
B:   CHGVAR &COUNT (&COUNT + 1)
     IF (&COUNT *LE &#ELEMS) DO
       CALL PGM(QUSRTVUS) PARM(&$SPACE &$START &$LENGT +
            &$FIELD &$DSERR)
       CHGVAR &$MBRNM %SST(&$FIELD 1 10)   /* EXTRACT MEMBER NAME */
       CHGVAR &$MTYPE %SST(&$FIELD 11 10)  /* MEMBER TYPE         */
       IF (&$MTYPE = 'PRTF      ') DO
       /* ANZPRTFF PRTF(&$MBRNM) SRCFILE(JCRCMDS/QDDSSRC) */
       ENDDO
       CHGVAR &#OFSET %BIN(&$START)
       CHGVAR %BIN(&$START) (&#OFSET + &#LENGT)
       GOTO B
     ENDDO
C:   ENDPGM
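The three %BIN extractions in the program above (offset to the first entry, number of entries, entry length) are just fixed-offset reads from the 16-byte header retrieved at user-space position 125. As a rough illustration in C (the struct and function names are invented, and a real list-API generic header has more fields than this; the byte offsets mirror the CL code's 1-based %BIN positions):

```c
#include <stdint.h>
#include <string.h>

/* Control values the CL program pulls out with:
   %BIN(&#START 1 4)  -> offset to first entry
   %BIN(&#START 9 4)  -> number of entries
   %BIN(&#START 13 4) -> size of each entry  */
struct list_info {
    int32_t first_offset;
    int32_t count;
    int32_t entry_size;
};

/* Extract the three control values from a 16-byte header.
   Assumes the big-endian AS/400 integers have already been
   converted to host byte order. */
struct list_info parse_header(const unsigned char *hdr16)
{
    struct list_info li;
    memcpy(&li.first_offset, hdr16 + 0, 4);   /* CL position 1  */
    memcpy(&li.count,        hdr16 + 8, 4);   /* CL position 9  */
    memcpy(&li.entry_size,   hdr16 + 12, 4);  /* CL position 13 */
    return li;
}
```

The CL retrieve loop then walks `count` entries of `entry_size` bytes starting at `first_offset + 1` (CL positions are 1-based).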

/* This program was done as an example of working with APIs in       */
/* a CL program.                                                     */
/*                                                                   */
/*-------------------------------------------------------------------*/
/* Program Summary:                                                  */
/*                                                                   */
/*   Initialize binary values                                        */
/*   Create user space                  (API CALL)                   */
/*   Load user space with object names  (API CALL)                   */
/*   Extract entries from user space    (API CALL)                   */
/*   Loop until all entries have been processed                      */
/*                                                                   */
/*-------------------------------------------------------------------*/
/* API (application program interfaces) used:                        */
/*                                                                   */
/*   QUSCRTUS  create user space                                     */
/*   QUSLOBJ   list objects                                          */
/*   QUSRTVUS  retrieve user space                                   */
/* See SYSTEM PROGRAMMER'S INTERFACE REFERENCE for API detail.       */
/*                                                                   */
/*-------------------------------------------------------------------*/
PGM
/*-------------------------------------------------------------------*/
/* $POSIT - binary fields to control calls to APIs.                  */
/* #START - get initial offset, # of elements, length of element.    */
/*-------------------------------------------------------------------*/
DCL &$START *CHAR 4                    /* $POSIT */
DCL &$LENGT *CHAR 4                    /* $POSIT */
DCL &#START *CHAR 16
DCL &#OFSET *DEC (5 0)
DCL &#ELEMS *DEC (5 0)
DCL &#LENGT *DEC (5 0)
/*-------------------------------------------------------------------*/
/* Error return code parameter for the APIs                          */
/*-------------------------------------------------------------------*/
DCL &$DSERR *CHAR 256
DCL &$BYTPV *CHAR 4
DCL &$BYTAV *CHAR 4
DCL &$MSGID *CHAR 7
DCL &$RESVD *CHAR 1
DCL &$EXDTA *CHAR 240
/*-------------------------------------------------------------------*/
/* Define the fields used by the create user space API.              */
/*-------------------------------------------------------------------*/
DCL &$SPACE *CHAR 20 ('LSTOBJR   QTEMP     ')
DCL &$EXTEN *CHAR 10 ('TEST')
DCL &$INIT  *CHAR 1  (X'00')
DCL &$AUTHT *CHAR 10 ('*ALL')
DCL &$APITX *CHAR 50
DCL &$REPLA *CHAR 10 ('*NO')
/*-------------------------------------------------------------------*/
/* various other fields                                              */
/*-------------------------------------------------------------------*/
DCL &$FORNM *CHAR 8  ('OBJL0100')      /* QUSLOBJ  */
DCL &$FIELD *CHAR 30                   /* QUSRTVUS */
DCL &$DEVNM *CHAR 10                   /* RMT002P  */
DCL &$OBJLB *CHAR 20 ('*ALL      QSYS      ')
DCL &$OBJTY *CHAR 10 ('*LIB      ')
DCL &COUNT  *DEC (5 0)
/*-------------------------------------------------------------------*/
/* Initialize Binary fields and build error return code variable     */
/*-------------------------------------------------------------------*/
CHGVAR %BIN(&$START) 0
CHGVAR %BIN(&$LENGT) 5000
CHGVAR %BIN(&$BYTPV) 8
CHGVAR %BIN(&$BYTAV) 0
CHGVAR &$DSERR +
       ( &$BYTPV || &$BYTAV || &$MSGID || &$RESVD || &$EXDTA )
/*-- Create user space. ---------------------------------------------*/
CALL PGM(QUSCRTUS) PARM(&$SPACE &$EXTEN &$INIT +
     &$LENGT &$AUTHT &$APITX &$REPLA &$DSERR)
/*-------------------------------------------------------------------*/
/* Call API to load the object names to the user space.              */
/*-------------------------------------------------------------------*/
A:   CALL PGM(QUSLOBJ) PARM(&$SPACE &$FORNM &$OBJLB +
          &$OBJTY &$DSERR)
CHGVAR %BIN(&$START) 125
CHGVAR %BIN(&$LENGT) 16
/*-------------------------------------------------------------------*/
/* Call API to return the starting position of the first block, the  */
/* length of each data block, and the number of blocks.              */
/*-------------------------------------------------------------------*/
CALL PGM(QUSRTVUS) PARM(&$SPACE &$START &$LENGT +
     &#START &$DSERR)
CHGVAR &#ELEMS %BIN(&#START 9 4)       /* # OF ENTRIES   */
IF (&#ELEMS = 0) GOTO C                /* NO OBJECTS     */
CHGVAR &#OFSET %BIN(&#START 1 4)       /* TO 1ST OFFSET  */
CHGVAR &#LENGT %BIN(&#START 13 4)      /* LEN OF ENTRIES */
CHGVAR %BIN(&$START) (&#OFSET + 1)
CHGVAR %BIN(&$LENGT) &#LENGT
/*-------------------------------------------------------------------*/
/* Call API to retrieve the data from the user space. &#ELEMS        */
/* is the number of data blocks to retrieve. Each block contains     */
/* the name of an object and information about that object.          */
/*-------------------------------------------------------------------*/
CHGVAR &COUNT 0
B:   CHGVAR &COUNT (&COUNT + 1)
     IF (&COUNT *LE &#ELEMS) DO
       CALL PGM(QUSRTVUS) PARM(&$SPACE &$START &$LENGT +
            &$FIELD &$DSERR)
       CHGVAR &$DEVNM %SST(&$FIELD 1 10)   /* EXTRACT DEVICE NAME */
       /* INSERT CODE HERE */
       CHGVAR &#OFSET %BIN(&$START)
       CHGVAR %BIN(&$START) (&#OFSET + &#LENGT)
       GOTO B
     ENDDO
C:   ENDPGM

Question: I need to use the Retrieve Database File Description (QDBRTVFD) API but I cannot get it to work. Does anyone have an example of how this one works? I need to see whether a file is journaled. Thanks

Answer(s):

 **-- API Error Data Structure: ------------------------------------**
D ApiError        DS
D  AeBytPrv                     10i 0 Inz( %Size( ApiError ))
D  AeBytAvl                     10i 0 Inz
D  AeExcpId                      7a
D                                1a
D  AeExcpDta                   128a
 **
D FilNam          s             10a   Inz( 'QADBXREF' )
D FilLib          s             10a   Inz( '*LIBL     ' )
 **
D RfFilNamQ       s             20a
D RfFilNamRtnQ    s             20a
D RfFmtNam        s              8a   Inz( 'FILD0100' )
D RfFilOvr        s              1a   Inz( '0' )
D RfFilRcd        s             10a   Inz( '*FIRST' )
D RfFilSys        s             10a   Inz( '*LCL' )
D RfFmtTyp        s             10a   Inz( '*EXT' )
 **
D RfFilInf        Ds          4096
D  RfFilInfRtn                  10i 0 OverLay( RfFilInf: 1 )
D  RfFilInfPrv                  10i 0 OverLay( RfFilInf: 5 )
D                                     Inz( %Size( RfFilInf ))
D  RfFilRcdLen                   5i 0 OverLay( RfFilInf: 305 )
D  RfFilJrnInf                  10i 0 OverLay( RfFilInf: 379 )
 **
D JrnInf          Ds
D  JiJrnNam                     10a
D  JiJrnLib                     10a
D  JiJrnOpt                      1a
D  JiJrnSts                      1a
 **
C                   Eval      RfFilNamQ = FilNam + FilLib
 **
C                   Call      'QDBRTVFD'
C                   Parm                    RfFilInf
C                   Parm                    RfFilInfPrv

C                   Parm                    RfFilNamRtnQ
C                   Parm                    RfFmtNam
C                   Parm                    RfFilNamQ
C                   Parm                    RfFilRcd
C                   Parm                    RfFilOvr
C                   Parm                    RfFilSys
C                   Parm                    RfFmtTyp
C                   Parm                    ApiError
 **
C                   Eval      JrnInf = %Subst( RfFilInf
C                                     : RfFilJrnInf + 1
C                                     : %Size( JrnInf ))
 **
C                   Return
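The %Subst at the end is the whole trick: QDBRTVFD returns one big byte buffer, and the journal information is a fixed-layout substructure starting at the offset the API reports in RfFilJrnInf. A hedged C sketch of just that extraction step (the 22-byte layout mirrors the JrnInf DS above; the struct and function names are invented for illustration):

```c
#include <string.h>

/* Mirror of the JrnInf DS: name(10), library(10), option(1), status(1).
   All-char struct, so sizeof is 22 with no padding. */
struct jrn_info {
    char name[10];
    char lib[10];
    char opt;
    char sts;
};

/* C analogue of:
   Eval JrnInf = %Subst( RfFilInf : RfFilJrnInf + 1 : %Size( JrnInf ))
   jrn_offset is 0-based here, where RPG's %Subst position is 1-based. */
struct jrn_info get_jrn_info(const char *filinf, int jrn_offset)
{
    struct jrn_info ji;
    memcpy(&ji, filinf + jrn_offset, sizeof ji);
    return ji;
}
```

After the copy, a blank journal name (or the status byte) tells you whether the file is currently journaled.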

Question: I have two identical files except for their object names. One is a current production file, the other is a history file with last year's data. Of course, I need a program that uses these two files as one. But I can't seem to get a Join Logical to compile; it keeps running into duplicate field and key field name errors.

Answer(s): Here is an example of a multiformat logical file:

A          R FORMAT1                  PFILE(FILE1)
A            FIELD1
A            FIELD2
A            ....
A          K FIELD1
A*
A          R FORMAT2                  PFILE(FILE2)
A            FIELD1
A            FIELD2
A            ....
A          K FIELD1

In the program, you can chain with the file name and you'll get records from both physical files, or with a format name and you'll get records only from the specific PFILE. If you want to update or write a record with this logical file, you must use the format name.

Question: The question I have is: how can I redirect the output that goes to STDERR to a file or to the joblog within an RPG IV program? Whenever I use perror() to print/see the latest error message I see some flashing red lines at the bottom of the screen. Unfortunately my eyes and my brain are too slow to recognize the output. :-( Any suggestions?

Answer(s): To redirect stdout or stderr, use an override command, e.g.

OVRPRTF STDERR QSYSPRT

To see the STDOUT and STDERR after they've flashed on the screen, I have a little command called DSPSTDOUT. Here's the command:

CMD        PROMPT('Display stdout')

Here's the CPP for the command. It must be in activation group *NEW:

H dftactgrp(*no) actgrp(*NEW) bnddir('QC2LE')
D printf          pr                  extproc('printf')
D  msg                           2a   const
D newline         c                   X'1500'
C                   callp     printf(newline)
C                   return

Question: I'm trying to use the validation list APIs from an ILE RPG program. I know this can be done (IBM does it as part of its *ADMIN

web server instance) but I'm having trouble converting data types, etc. from C to RPG. Anyone have any program samples I could use? Answer(s): Are you talking about QSYADVLE, QSYCHVLE, ...? If so, what's the problem with converting the datatypes? As far as I can see, only character fields (to be defined as A in ILE RPG, and don't care about the *) and binary fields (to be defined as 9B 0 in ILE RPG) are required. The API is similar in use to any other API.

I just found your post, and hope the attached can still be of some assistance.

D* Provide sample usage program of validation list APIs
D*
D* To create sample program (call VALIDATE) use:
D*   CRTBNDRPG PGM(VALIDATE) DFTACTGRP(*NO) BNDDIR(QC2LE)
D*
D* Refer to Validation List chapter of System API Reference for
D* usage details.
D*
D* get validation list structures from QSYSINC member
D*
D/copy qsysinc/qrpglesrc,qsyvldl
D*
D* API Definitions
D*
Daddvle          PR            10I 0 EXTPROC('QsyAddValidationLstEntry')
D                              20
D                             108
D                             608
D                            1008    OPTIONS(*OMIT)
D                               1    OPTIONS(*OMIT)
Dvfyvle          PR            10I 0 EXTPROC('QsyVerifyValidationLst+
D                                    Entry')
D                              20
D                             108
D                             608
Drmvvle          PR            10I 0 EXTPROC('QsyRemoveValidationLst+
D                                    Entry')
D                              20
D                             108
Derrno           PR              *   EXTPROC('__errno')
D*
D* Miscellaneous Variables for sample program
D*
D* The following variable is for the validation list name. This
D* validation list must be created prior to program execution using
D*   CRTVLDL VLDL(QGPL/SAMPLE)
D*
Dvldl            S             20    inz('SAMPLE    QGPL      ')
D*
D* The following variable is for API function return value testing
D*
Dresult          S             10I 0
D*
D* The following variables are for determining the value of errno
D* when API errors occur
D*
Derrno_val       S             10I 0 based(errno_ptr)
Derrno_ptr       S              *
D*
D* End of miscellaneous variables
C*
C* Add validation list entry for 'Bruce'
C*
C* Set entry id length to length of name 'Bruce'
C*
C                   eval      qsyeidl = 5
C*
C* Set CCSID of entry id to Job default
C*
C                   eval      qsyccsid03 = 0
C*
C* Set entry id to 'Bruce'
C*
C                   eval      qsyeid = 'Bruce'
C*
C* Set encrypted data length to length of 'N1LJDTS'
C*
C                   eval      qsyedl = 7
C*
C* Set CCSID of encrypted data to Hex (65535)
C*
C                   eval      qsyccsid04 = 65535
C*
C* Set encrypted data to 'N1LJDTS'
C*
C                   eval      qsyed = 'N1LJDTS'
C*
C* Add the entry for Bruce
C*
C                   eval      result=addvle(vldl
C                                     : qsyeidi
C                                     : qsyeedi
C                                     : *omit
C                                     : *omit)
C*
C* Test for successful add
C*
C                   if        result = 0
C*
C* Verify entry for Bruce
C*
C                   eval      result=vfyvle(vldl
C                                     :qsyeidi
C                                     :qsyeedi)
C*
C* Test for successful verify
C*
C                   if        result = 0
C*
C* Now attempt to verify 'bad' entry
C*
C                   eval      qsyeid = 'Harry'
C                   eval      result=vfyvle(vldl
C                                     :qsyeidi
C                                     :qsyeedi)
C                   if        result = 0
C*
C* Incorrect validation has taken place
C*
C     'inc validate' dsply
C                   else
C*
C* Correct validation and rejection has taken place
C*
C     'correct'     dsply
C                   end
C                   else
C*
C* Incorrect validation of non-existent entry
C*
C     'inc invalid' dsply
C*
C* Error on vfyvle, get errno and display it
C*
C                   eval      errno_ptr = errno
C     errno_val     dsply
C                   end
C                   else
C*
C* Error on addvle, get errno and display it
C*
C                   eval      errno_ptr = errno
C     errno_val     dsply
C                   end
C*
C* Unconditionally clean up added entry
C*
C* Reset entry id to Bruce
C*
C                   eval      qsyeid = 'Bruce'
C                   eval      result=rmvvle(vldl
C                                     :qsyeidi)
C*
C* Return to caller
C*
C                   eval      *inlr = '1'
C                   return

Question: I am looking for any randomize function in AS/400, has anyone done that before? Please advise.

Answer(s): The Basic Random Number Generation (CEERAN0) API generates a sequence of uniform pseudorandom numbers between 0 and 1 using the multiplicative congruential method with a user-specified seed.
Required Parameter Group:

  1   seed         I/O      INT4
  2   random_no    Output   FLOAT8

Omissible Parameter:

  3   fc           Output   FEEDBACK
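For illustration, a minimal multiplicative congruential generator of the kind CEERAN0 describes can be sketched in C. The multiplier and modulus below are the classic Lewis-Goodman-Miller constants (16807 and 2^31 - 1), which are a common choice for this method but not necessarily the ones CEERAN0 uses internally:

```c
/* Hypothetical stand-in for CEERAN0: next = (seed * 16807) mod (2^31 - 1),
   scaled into (0, 1). Not IBM code. */
static unsigned long long ran0_state = 1;

/* set the seed; 0 is invalid for a multiplicative generator */
void my_ran0_seed(unsigned long long s)
{
    ran0_state = s ? s : 1;
}

/* return the next pseudorandom number in (0, 1) */
double my_ran0(void)
{
    ran0_state = (ran0_state * 16807ULL) % 2147483647ULL;
    return (double)ran0_state / 2147483647.0;
}
```

Like CEERAN0, the sequence is fully determined by the seed, so reseeding with the same value reproduces the same stream.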

Question: Is there any way to pass the data library into a DB2/400 stored procedure? I am trying to write select statements using the same table name, but want to use that table from different libraries. I am somewhat familiar with the method of building the SQL statement in a character field and then doing a Prepare and Execute. I'm hoping there is an easier way to do this. Thanks. Answer(s): How about not coding the library name and then utilizing library lists to accomplish the same effect? You could also use SQL ALIASES starting with V4R3, where you could create a single alias name and then recreate the alias as needed. You'd get the best performance by having a different stored procedure for each library. Switching between libraries will cause more open overhead. How about a routing Stored Procedure where you pass an input parameter and then call different versions of the same stored procedure (but processing the table in a different library) based on that input parm? Kent Milligan, DB2 & Business Intelligence team AS/400 Partners In Development Question:

SQL in a CL program

Answer(s): SQL in CL is no problem. Add this line to your CL program:

RUNSQLSTM SRCFILE(WWLIB/QCLSRC) SRCMBR(MYSQL) COMMIT(*NONE)

The source member MYSQL contains:
INSERT INTO ELABACK/TAGESVORG
  SELECT DISTINCT(A7AENB) FROM ELABACK/ASA7SIC;

INSERT INTO ELABACK/ASALSIC
  SELECT B.ALAENB, B.ALACDA, B.ALABTM, B.ALAUCD
  FROM ELABACK/TAGESVORG A, ELADTA/ASALCPP B
  WHERE A.A7AENB = B.ALAENB;

DELETE FROM WEISS/ASALCPP
  WHERE ALAENB IN (SELECT A7AENB FROM ELABACK/TAGESVORG);

INSERT INTO WEISS/ASALCPP
  SELECT * FROM ELABACK/ASALSIC;

Question: I want to make sure that the two character Unit of Measure code that my user has entered is valid. At the same time, for


each code I want to have a corresponding description and equivalent X12 code. My first thought would be to have a 'Unit of Measure Code Master File' with the following fields: UUNMSR, UNDESC, UUOMCD. With records like:

UUNMSR  UNDESC       UUOMCD
EA      Each         EA
CS      Case         CA
BX      Box          CA
SP      Shelf Pack   PK

But my concern is the performance hit I would take chaining to this file so often; for example, when creating a PO to be sent via X12 EDI I would have to chain to it for every detail record. I definitely do not want to hard code this information into the programs that I want to use it on. I suppose I could load a multiple occurrence data structure with this info. But how do I perform the lookup? Are there other, better(?) ways to do this? Answer(s): Instead of loading the file into a multiple occurrence data structure, use an array. Then you could use a lookup function on the array to find the particular element you need. Thanks for the reply. Maybe I am misunderstanding something here, but I thought that an array was made up of single elements, i.e. an array of 10 two-character codes. This would be fine for validation, but would not allow me to store the other two fields. Am I missing something here? Yes. You missed the possibility of having other arrays contain the info you need! For instance, you have the array with your unit of measure and find your user's input, e.g., in element 2. Then you can access another array with your descriptions with index 2, and so on. You can load the arrays at program initialization time from a file to avoid hardcoding (manually or by means of a prerun-time array [you'll find this in the RPG reference]), or, which is slightly more complicated but also a common method, first look up your array, and if you do not find the code there, chain to a file and put it into the array(s). But I do not think that much performance is gained.
The AS/400 doesn't really access the hard disks every time to get records; OS/400 will move the stuff to main storage, but you do not have to care about that. A more difficult method, but one that performs very well, is the use of a user index, but that's not quite a point to start for a beginner. Thanks for the reply. I hadn't thought of breaking the data up like that... I was thinking about working with the entire record as a whole from the file. Yes. Alternate arrays are supported. This lets you define 2 arrays that are related. The first array would be your unit of measure. The second array would be a composite field of the description and X12 unit of measure. Use a data structure to split the second array into its subfields. Normally I would just load the arrays in ordered sequence at initialization time and do a lookup for each record when the new lookup is different from the prior lookup. This approach saves 1 disk I/O request per output record and minimizes the number of times lookup is actually executed. I/O is much slower than lookup. Lookup is slower than reusing the prior values. You don't need to use MO arrays. You can define arrays in data structures, and even sort with them. For example:

D                 DS
D UOMArray                      34    DIM(100)
D  UOMKey                        2    OVERLAY(UOMArray)
D  UOMDesc                      30    OVERLAY(UOMArray:3)
D  UOMCode                       2    OVERLAY(UOMArray:33)

Now, you load the fields by saying UOMKey(index) = xxx, UOMDesc(index) = xxx, and UOMCode(index) = xxx. This way you have all your fields together. If you wish to sort the arrays and keep the indices intact, simply sort by the subfield of your choice. For example:

 * sort by Description
C                   SORTA     UOMDesc

 * sort by Code
C                   SORTA     UOMCode

Alternating arrays, who needs 'em! This is the best array technique I have learned in years. Thanks for the reply. You make some very valid points, especially about not doing the lookup if the last one is for the same code. Since 95% of my items are coded with the same code this should eliminate any performance problems. I had already thought of this after my original post, but in the back of my mind I thought I remembered reading about something that was ideal for this scenario. For a table of this size/complexity (not!), maybe you could just use SETOBJACC to load it into a small memory pool. Then, ignore thoughts of performance degradation. Other than that, if loading into an array doesn't suit your taste, I wouldn't even think twice about the performance aspect unless you're already hitting a performance curve -- in which case you've got other, bigger problems. If you only need to see if the entry made exists in your file you can do a SETLL using an indicator in the = position. This technique uses less overhead because no data is brought into the buffer at any time. Also, if you are using a CHAIN and the key value does not change from the previous CHAIN, the values still remain in the buffer. This is also less overhead than if the value changes and a read (disk)/write (buffer) actually occurs. It may be the same overhead or less than checking for a changed value in your program before chaining. Just to test the performance hit, I wrote a small program to read the entire article file (88000 records) sequentially. On our model 620, this took 1.5 CPU secs. After adding a chain to our Unit of Measure file the program took 7.1 CPU secs. This shows that a chain takes a significant amount of time (relatively). If you have a large AS/400 and/or a small article file, you probably won't bother if the job takes 5-10 secs. longer to execute.
If you do care, the methods suggested by others (array lookup or chain with array caching) will work fine. If you're using ILE you may write a function that checks the UOM code using SETLL (which requires less CPU than a chain), and functions to retrieve each of the corresponding UOM attributes. These routines may well use arrays, caching, or simply hardcoded data (since they're only specified in one source member, you can move them to a file later if you want to). You make a wrong assumption here. If we talk I/O operations, CPU time is not a concern. The question is runtime. For each sync I/O, the CPU timeslice is ended and the job (after completing the I/O) will have to compete for a new timeslice again. Database I/O is what the AS/400 is designed for. I can't imagine that a simple read to a code table can impact performance in a perceptible manner. Certainly, the coding simplicity of a simple chain is much more desirable than the complexity of other approaches (arrays or data structures). In many cases simplicity of code is much more desirable than the marginal improvement in response time a caching algorithm would give. However, one thing you should do in whatever design you choose: don't do the lookup if the last lookup was for the same code. This technique alone can save a good percentage of necessary I/O.
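The parallel-array lookup with "reuse the prior value" caching that the replies above describe can be sketched in C. The table contents are the sample records from the question; the function and variable names are invented for illustration:

```c
#include <string.h>
#include <stddef.h>

#define NUOM 4

/* parallel arrays, like the alternating-array technique above */
static const char *uom_code[NUOM] = {"EA", "CS", "BX", "SP"};
static const char *uom_desc[NUOM] = {"Each", "Case", "Box", "Shelf Pack"};

/* one-entry cache: skip the search when the same code repeats,
   which is the "don't look up if the last lookup was the same" advice */
static int last_idx = -1;

/* return the description for a code, or NULL if the code is invalid */
const char *uom_lookup(const char *code)
{
    if (last_idx >= 0 && strcmp(uom_code[last_idx], code) == 0)
        return uom_desc[last_idx];          /* cache hit, no search */
    for (int i = 0; i < NUOM; i++) {
        if (strcmp(uom_code[i], code) == 0) {
            last_idx = i;                   /* remember for next time */
            return uom_desc[i];
        }
    }
    return NULL;                            /* invalid unit of measure */
}
```

If, as the poster says, 95% of detail records carry the same code, most calls never reach the search at all.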
Question: I need a way to detect if an IFS file exists from an RPG program. Anyone familiar with a way to do this? It needs to be detected by file name. Answer(s):

I created a command that functions similarly to the CHKOBJ command, except for an IFS object. The "heart" of the command is the following C code, which could also be created in ILE RPG by correctly prototyping the "access" API being used by the C function.

/*
** CHKIFSOBJC
*
* PARAMETERS:   Path to file
*
* DESCRIPTION:  Check for IFS object
*
* RETURNS:      Y if object exists
*               N if object does not exist
*/
char CHKIFSOBJC(const char* reffile)
{
    if (access(reffile, F_OK) != 0)
        return 'N';
    else
        return 'Y';
}

The only problem with using any IFS API is that adopted authority does not work, which means the user executing the program must be authorized to the entire path, and to the file itself, or else the function will look like the object does not exist. I have solved this problem by front-ending the above code with other code that temporarily changes the job user to a profile with sufficient authority. This uses the QSYGETPH and QSYSETP APIs. Hope this helps,

You do not specify what version of RPG. With ILE you can do the following: * FileExists * Nick Roux * 1997/10/02 * * NOTE: Compile with DFTACTGRP(*NO) * * IFS API prototypes * * Access * Daccess PR 10I 0 extproc('access') Dpathptr1 * value Dmode1 10I 0 value * * IFS API Constants * DF_OK S 10I 0 inz(0) * * Some working environment for us * DFile_exists S 10I 0 Dpathptr S * Dpathname S 21 DExists C 'File Exists' DNotExists C 'File does not exist' * * Main{} * C *entry plist C parm filename 20 * Set a character pointer to the file name string C eval pathname = %trim(filename)+x'00' C eval pathptr = %addr(pathname) * Call the IFS API C eval File_Exists = access(pathptr:F_OK) * Did we find it? C File_exists ifeq 0 C Exists dsply C else C NotExists dsply C endif * Thats all folks C move *on *inlr The filename should be supplied as //dir/dir/file, i.e. CALL FILEEXISTS ('//etc/pmap') is a valid call.

Question: I'm trying to create an open query file that sorts on a location field in my item file. Here's the problem: my location is a six-digit number: aisle (XX) - rack (XX) - shelf (X) - position (X). I need to sort the file such that all the even racks within an aisle are together, as are all of the odd racks within that aisle. I.e., I need:

01-01-1-1

01-03-1-1
01-05-1-1
...
01-02-1-1
01-04-1-1
01-06-1-1
....

Now here is the OPNQRYF I'm working with.

OPNQRYF FILE((IORINVMS)) FORMAT(IORINVCT) +
        KEYFLD((IDIVSN) (IWHALS) (IWHREO) (IWHRAK) +
        (IWHSHF) (IWHPOS) (IITNIM)) +
        MAPFLD((IWHALS '%SST(IWHLOC 1 2)') +
        (IWHRAK '%SST(IWHLOC 3 2)') +
        (IWHSHF '%SST(IWHLOC 5 1)') +
        (IWHPOS '%SST(IWHLOC 6 1)') +
        (IWHREO '*MAPFLD/IWHRAK - (2 * ( *MAPFLD/IWHRAK / 2))'))

What I'm trying to do here is create a field (IWHREO) that is 1 when the rack field (IWHRAK) is odd and 0 when it is even. For this to work I need IWHRAK / 2 to round down so that, for example, rack 5 / 2 = 2.5 becomes 2. Lastly, IWHLOC is the six-character location field that is in the file. IWHALS and IWHRAK are defined in the format IORINVCT as zoned, 2 digits, no decimals. IWHSHF, IWHPOS and IWHREO are defined as zoned, 1 digit, no decimals. I'm thinking that I may have to have some sort of work field with a couple of decimal places, or perhaps I need to subtract .25 after the division so that the result will round down when the rack is odd (5 / 2 - .25 = 2.25) but up when the rack is even (6 / 2 - .25 = 2.75). Any thoughts??? Or can someone tell me or point me to a manual that explains the precision being used here?

Answer(s): Ok, this seems to work but I'm open to suggestions on how to improve it.

OPNQRYF FILE((IORINVMS)) FORMAT(IORINVCT) +
        KEYFLD((IDIVSN) (IWHALS) (IWHREO) (IWHRAK) +
        (IWHSHF) (IWHPOS) (IITNIM)) +
        MAPFLD((IWHALS '%SST(IWHLOC 1 2)') +
        (IWHRAK '%SST(IWHLOC 3 2)') +
        (IWHSHF '%SST(IWHLOC 5 1)') +
        (IWHPOS '%SST(IWHLOC 6 1)') +
        (XRAKD2 '*MAPFLD/IWHRAK / 2' *ZONED 4 2) +
        (XRKD2R '*MAPFLD/XRAKD2' *ZONED 2 0) +
        (IWHREO '*MAPFLD/IWHRAK - (2 * *MAPFLD/XRKD2R)') +
        )

All you need to do is have the *MAPFLD become the modulus, or remainder, after dividing by two. In OPNQRYF, you can get the remainder by using // instead of / for divide:

(IWHREO '*MAPFLD/IWHRAK // 2')

Note that this will cause even numbers to return 0, and odd numbers to return 1, as you requested.
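The remainder arithmetic is easy to sanity-check in any language. In this C sketch, rack % 2 plays the role of OPNQRYF's IWHRAK // 2, and abs(key - 1) flips the 0/1 result so that odd racks get the lower key and therefore sort first (the function name is invented):

```c
#include <stdlib.h>

/* 0 for odd racks, 1 for even racks, so an ascending sort puts all
   odd racks ahead of all even racks; plain rack % 2 would do the
   opposite (even racks first). */
int rack_group(int rack)
{
    return abs(rack % 2 - 1);
}
```

For example, racks 1, 3, 5 all map to group 0 and racks 2, 4, 6 to group 1, which is exactly the (group, rack) sort order the question asks for.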
However, this sorts the even numbers ahead of the odd numbers, and your message also made it sound like you wanted racks 1,3,5,... prior to 2,4,6,... If this is the case, you can reverse the order by something like:

(IWHREO '%ABSVAL((*MAPFLD/IWHRAK // 2) - 1)')

This is untested, but should get the remainder (0 or 1), subtract 1 (giving -1 or 0), then take the absolute value (giving 1 or 0). I think this will give you odd racks, then even racks.

Question: Hello all, Has anyone used the named indicators feature in V4R2? Specifically I'm trying to find out how to use them to name a general indicator, say 50; suppose I have it conditioning an Output spec. I'd like to be able to say, for example, EVAL PrtSeq# = Yes instead of EVAL *IN50 = *ON. I've used them with display file indicator areas and they are great!! Using named indicators to signify a keying error, for example EVAL CusNumErr = Yes, is WAY better than EVAL *IN50 = *ON. Any help is greatly appreciated. TIA

Answer(s): Here is how to accomplish what you want.

D IndPtr          S               *   Inz( %Addr( *IN ) )
D IndAra          S              1    Dim( 99 ) Based( IndPtr )
D PrtSeq#         C                   50

Question: Hi, I would like to make a DSPDTAARA for every data area named DSP* to an outfile. Is that possible? (DSPDTAARA doesn't do it.) (My problem is that we have a data area for every session (terminal or PC), and those data areas store which printer is to be used from the session - and I would like to check this data.) Answer(s): With some quick and dirty programming this shouldn't be a problem. I would suggest the following steps:

1. DSPOBJD the necessary *DTAARA objects to an outfile.
2. Write a CL that reads this outfile and does a RTVDTAARA for each of them.
3. Call an RPG program for each of them with name and contents to write to a file.

Question:

CVTDAT command to convert an *MDY to a *LONGJUL

Answer(s):
*************** Beginning of data ***************************
Pgm
    Dcl        &FromDate      *Char 06    '010198'
    Dcl        &ToDate        *Char 08
    CvtDat     Date( &FromDate )   +
               ToVar( &ToDate )    +
               FromFmt( *MDY )     +
               ToFmt( *LongJul )

EndPgm
*************** End of data *********************************

Question: Currently I get a list of jobs, by user, and place that into a user space. Unfortunately, when I push that list into a UIM interface for a user to scroll and select from, the list is not sorted. I would like to sort the data in the user space. Does anyone know of a resource that I can use? I did not find a sort API anywhere...... My other choices are to sort the list in the UIM (can this be done?) or put the list into a physical file and do the sort there. I don't want to do either of those :-(

Answer(s): You can use the QLGSORT API, or a user index. I would suggest reading the user space into an overlaying array. Then you can sort by any field in the array very easily... something like this...

D                 DS
D USpaceArr                    100    DIM(9999)
D  UserName                     10    overlay(USpaceArr:1)
D  JobName                      10    overlay(USpaceArr:11)

etc.... (hope this is right... not at work....) The size of USpaceArr should be the total of the bytes from all the fields defined using overlay. This way, you can sort by any subfield using SORTA, keeping the data intact and sequenced. Refer to subfields as UserName(i) or JobName(i) as you would any other array element. Hope this helps!

Question: What are the attributes of a JOB?

Answer(s):
Status of job . . . . . . . . . . . . . . . :   ACTIVE
Current user profile  . . . . . . . . . . . :   TRAIN11
Job user identity . . . . . . . . . . . . . :   TRAIN11
  Set by  . . . . . . . . . . . . . . . . . :   *DEFAULT
Entered system:
  Date  . . . . . . . . . . . . . . . . . . :   06/06/05
  Time  . . . . . . . . . . . . . . . . . . :   17:12:53
Started:
  Date  . . . . . . . . . . . . . . . . . . :   06/06/05
  Time  . . . . . . . . . . . . . . . . . . :   17:12:53
Subsystem . . . . . . . . . . . . . . . . . :   QINTER
Subsystem pool ID . . . . . . . . . . . . . :   2
Type of job . . . . . . . . . . . . . . . . :   INTER
Special environment . . . . . . . . . . . . :   *NONE
Program return code . . . . . . . . . . . . :   0
Controlled end requested  . . . . . . . . . :   NO
System  . . . . . . . . . . . . . . . . . . :   S103DCHM
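Going back to the user-space sorting answer above: the overlay-array SORTA technique amounts to sorting fixed-width records by one subfield. A hedged C analogue using qsort (the record layout and names are invented to mirror the D-specs sketched in the answer):

```c
#include <stdlib.h>
#include <string.h>

#define ENTRY_LEN 21   /* user(10) + job(10) + NUL terminator */

/* compare two fixed-width entries on the first 10 bytes only,
   i.e. sort by the "UserName" subfield like SORTA UserName */
static int by_user(const void *a, const void *b)
{
    return memcmp(a, b, 10);
}

/* sort an array of fixed-width entries in place; the rest of each
   entry (the job name) travels with its key, just as the overlay
   subfields stay together under SORTA */
void sort_by_user(char entries[][ENTRY_LEN], size_t n)
{
    qsort(entries, n, ENTRY_LEN, by_user);
}
```

The point is the same as in the RPG version: because the key is embedded in the record, sorting by one subfield keeps every record's fields intact and aligned.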

Question: Does anybody know how I can highlight a line of code or comment in my RPG source when I am editing it with SEU? Answer(s): Are you interested in causing specific lines of source to appear highlighted when you view the member in SEU? Or do you want the current line to be automatically highlighted whenever you edit it? For the first, you can imbed the hex value for the display attribute (highlight, blink, underline, color, etc.) directly in the source statement. I do that by copying it in from another member. I have a source member named COLORS that has one line for each attribute that I want. I copy the line that I want into the member I'm editing and type over it. (I originally created this member by doing STRCPYSCN to an outfile while viewing various panels that had different display attributes on different fields -- STRCPYSCN includes attribute bytes in the output. You can then get that file into a source member and edit it to arrange things as you like. Not very high-tech, but it was simple.) For the second, you'll either need to install something other than SEU to do your editing or rely on the facility that SEU provides. SEU will highlight the line number if you place the cursor on a line and press . Essentially, SEU shows you the line that you just changed, not the one that you're changing 'now' (which I thought was what you asked for). As far as I know, highlighting the line number is as far as it goes for SEU. There was a program in the November 1993 issue of NEWS/400 magazine called "SEU in Colors". This is parm driven and can be modified. Alternately, you can use DBU in Hex mode (F9, multiple record display) to enter hex codes for source color. You must write hexadecimal value '22' on the 5th position of the RPG line. You cannot do it from within SEU; you have to write a program that reads a source file and inserts hex code 22 before the text that you wish to highlight.
Use the BITON/BITOFF operation codes to set up the hex field and insert it at the beginning of the text that you want to highlight. Once you have one highlighted line you can copy it from within SEU. Looks quite pretty for comment lines (but it's not to everyone's taste). The attribute byte that does highlighting can't be entered on the keyboard, so you'll need another way. One option is to use a program that does it for you (for example on all comment lines), or copy a line with the attribute byte from another source (that way you can use Copy-Overlay in SEU). The best way to do this is to insert the hexadecimal code for highlight, underline and so on. A byte contains 8 bits, with these values:

Bit 0 =   1
Bit 1 =   2
Bit 2 =   4
Bit 3 =   8
Bit 4 =  16
Bit 5 =  32
Bit 6 =  64
Bit 7 = 128
-----------
        255  (the maximum value of one byte)

The following table shows which bits you must set for each attribute:

Display     : bit 5                           = hex 20
Highlighted : bit 5 + bit 1                   = hex 22
Underlined  : bit 5 + bit 2                   = hex 24
Reversed    : bit 5 + bit 0                   = hex 21
Blinking    : bit 5 + bit 3                   = hex 28
Separator   : bit 5 + bit 4                   = hex 30
Nondisplay  : bit 5 + bit 0 + bit 1 + bit 2   = hex 27


You can combine these bits. For example, if you want to show a string underlined and highlighted, you combine the following bits: bit 5 + bit 1 + bit 2 = 32 + 2 + 4 = 38 decimal = hex 26. So you move X'26' into the position just before the field, and at the end of the string you reset to normal display by setting only bit 5 (X'20').
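Putting the table above into practice, here is a minimal RPG IV sketch (field names are invented for illustration) that builds a line with an embedded attribute byte. One caution if you use BITON instead: RPG's BITON numbers bits 0 (leftmost) through 7 (rightmost), the reverse of the table above.

```rpgle
      * Named constants for "highlight + underline" and "normal display"
     D HiUl            C                   X'26'
     D Norm            C                   X'20'
     D Line            S             79A
      * Attribute byte first, then the text, then reset to normal
     C                   Eval      %Subst(Line:1:1)  = HiUl
     C                   Eval      %Subst(Line:2:26) = 'Highlighted and underlined'
     C                   Eval      %Subst(Line:28:1) = Norm
```

The attribute byte occupies the screen position immediately before the text it affects, which is why the text itself starts one position later.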

RPG record locking Question: Hi everyone. We have a problem with several users trying to access the same record at the same time. The timeout parm on the logical is set to 60 sec. and may not be changed, because of another problem, but that's another story. :-) Is there any way to tell there is a lock on a record without having to wait until the request times out? I use RPG/400. Please help. Answer(s): Maybe you can create another logical file with the same key, using the same DDS but giving the object a different name. That way you don't have to use OVRDBF if at creation time you set the waiting time to just a few seconds. On the other hand, you have to determine if that is a good option; maybe you already have a lot of LFs and adding a new one would slow down your system's performance. I hope you find what you need among all the answers. :-) Great. OSITim has mentioned OVRDBF in the first answer to the question. If you prolong the time, the user doesn't know why she/he cannot proceed. Most users will think that the AS/400 program is looping or crashed, just like their PC programs do from time to time. Or they think that the AS/400 is extremely slow and they ask whether to buy a Pentium processor for the AS/400. They might even use Attn-2 to cancel or turn off the terminal (don't laugh! This shit happens!) to get rid of the Input Inhibited syndrome. This is the worst-case scenario, as the application program is most likely in a dangerous situation, from the point of view of the data. --- So if the techniques described so far aren't good enough, combine them with the old-fashioned way: some fields in the database that say "This record is locked by interactive jobnbr/user/job since timestamp". The applications are not allowed to lock records while they wait for the user's input. They fill the fields mentioned above instead. They have to use error indicators on every update/write, however. If something goes wrong, the program should do what the MIS personnel would do.
(An AS/400 is said to be operator-less. Programmers should pay attention to this point, especially when they have an OS that enables them to!) Batch programs do not change the locked-fields, unless they change the same fields that the user is enabled to change. But this should be avoided by means of mutexes or a self-written mechanism. If this is not possible, the batch program has to exclude soft-locked records. (10 of 11 records processed - 1 not processed. Does this sound familiar?) Applications are not allowed to crash; they have to monitor everything (as mentioned above). Otherwise the soft-lock fields still soft-lock the record, although the application isn't active any more. (Extremely uncomfortable with TCP/IP and changing device names!) That's why I propose storing the job number, user and job name. If the application runs into a "soft lock", it can check whether the other job is still running. So, put as many thought-pieces together as you want and build a solution. OS/400 offers enough power to do it. You can override the record wait time in a CL program before calling the RPG program, or in a QCMDEXC call before opening the file in the RPG program itself. I have this problem and the solution I came up with is: 1. Use the error indicator on your CHAIN command. 2. Immediately after your CHAIN, check the error indicator. 3. At this point you can send the user a message telling them that the record is in use, or, as I did, call a CL to send the operator/MIS person a message to the effect that ???? has a record lock. 4. They can then find the person who probably went on break with the record left on their screen and have them get out of it. 5. You can then let the user enter an "R" to retry and have the program loop back to the CHAIN command, or have the MIS person handle the retry and the program automatically loop back to the CHAIN. That's right... UserB can be batch just as easily as interactive.
In one application, with five users updating a database of 25,000 applicants and 40,000 certificate records, the record-lock condition happened around twice a year over a period of six years. No batch was involved, so take care in taking the risk. Another scenario: 1. UserA reads record 1 (no lock) and sits on it. 2. Another (batch) process changes information in the record that is not on UserA's screen. 3. UserA wants to update the record, but is informed that the record has changed. How annoying! We generally take the risk that 2 users might interfere and don't worry about it. In the more than 10 years that my company has been in business we never had a complaint from any of our roughly 200 customers (which doesn't mean that it did not happen, of course). We once developed a program template where only the fields that were modified on the screen were output to the file, but it was a lot of coding, so we dropped it. Since the original poster is new to the technique, it should be pointed out that the record contents should be rechecked before the update occurs. 1. UserA reads record 1 (no lock) and sits on it. 2. UserB reads record 1 (no lock) also. 3. UserB quickly re-reads the record (with lock) and updates it. 4. UserA finally re-reads the record (with lock) and updates it also. If the program doesn't check whether the record contents changed between steps 1 and 4, UserA will overwrite UserB's changes. You might try using the error indicator on the CHAIN or READ operation you are using. I believe it's the LO column, but I'm not sure. On the other hand, why is everyone locking a record for such a long time? Is this a file update program that someone likes to sit on for a while, not realizing the problem they are causing? Or could it be solved by using a no-lock on the input operation and then re-inputting (i.e., CHAIN, READ) when the actual update occurs? Then again, you could use a data queue attached to the display file to kick someone off if they sit on the same record for more than, say, 45 seconds or so. This could be a design problem. I would look into using the no-lock option on your input operation until right before you do the update as a first solution. How is a SETLL and a READ different from a CHAIN on an input-only file? This is what you're saying, right? If you are not doing updates, then use SETLL then READ to retrieve the record. If you are using CHAIN, then use the 'N' to not lock the record. Hope this helps. When you chain to the file, also use a LO indicator. If it comes on, the record is unavailable - so give your user a message to try later. Also, you might want to explore techniques that limit record locking. If you can't *change* the WAITRCD parameter, then why don't you try using OVRDBF with WAITRCD(0) in those programs that you don't want to have wait?
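The error-indicator approach described above can also be written with the (e) operation extender and %ERROR instead of a numbered indicator; this RPG IV sketch uses invented file and field names:

```rpgle
     FCustMast  UF   E           K Disk
     D Msg             S             50A
      * Try to read the record for update; (e) traps the lock timeout
     C     CustKey       Chain(e)  CustRec
     C                   If        %Error
      * Record is locked by another job: tell the user instead of
      * leaving the keyboard input-inhibited
     C                   Eval      Msg = 'Record in use - try later'
     C     Msg           Dsply
     C                   Else
      * ... apply the user's changes, then release the lock ...
     C                   Update    CustRec
     C                   EndIf
```

Pairing this with a short wait time (e.g. OVRDBF ... WAITRCD(1)) makes the lock test return quickly rather than hanging for the full 60 seconds.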

OVRDBF and SECURE() keyword in an ILE environment Question: Does anyone know what the SECURE() keyword is supposed to do on the OVRDBF? It seems to have no effect at all. Also, is it true that OVRs are not scoped to the call level in a native ILE application but rather to an activation group name? IOW, if I have programs A, B, and C all compiled to run in activation group FUNNY, and program A calls B, B calls C, and C does an override to file PF, and then returns to B, which in turn returns to A, which performs an OPEN on file PF -- will program A use this OVR? Did you get that? Answer(s): The ILE program must be running in an ILE activation group for activation-group-level scoping to take effect. If it is running in the default activation group, call-level scoping will be in effect. Thanks for the tip! I'll go back and re-read your News/400 article, and perhaps we will change our OVR commands to use *CALLLVL. Actually, I'm not even sure this will work. You see, we are creating a procedure called CRTHUBDDM() that the user will call from their programs. This procedure will create a DDM file pointing to the "hub" machine's database, and override the "F" spec to use this DDM file. The problem is trying to determine if there are any outstanding overrides already in effect against this file that the tool would override unbeknownst to the programmer. If you do the following: PGMA issues OVRDBF FILE(FUNNY) TOFILE(*LIBL/FUNNY) SECURE(*YES). This then calls PGMB, which calls PGMC. PGMC issues OVRDBF FILE(FUNNY) TOFILE(QGPL/FUNNY) SECURE(*NO) <-- the default value. PGMC then tries to OPEN file FUNNY. The original override will be the one in effect. If the second OVRDBF had also specified SECURE(*YES), then the second override would be in effect. If an ILE program issues an OVRDBF command and the OVRSCOPE parameter is left at *ACTGRPDFN, then the scenario you describe will be true. If you specify OVRSCOPE(*CALLLVL), then it will work the "old" way.
We were severely burned by this when we converted everything to ILE. I had a tech tip published in News/400 about this a year ago or so. We just went through all of our CL and specified OVRSCOPE(*CALLLVL) to make the application work as before. Back to the SECURE issue: one interesting situation I bet comes up is that after PGMC goes out of scope, the OVRDBF command that it issued would still be in effect, negating the original OVRDBF. If *CALLLVL were used, then the original OVRDBF would come back into the picture.
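If you cannot change the surrounding CL, the same override can be issued from inside the RPG program through the QCMDEXC API. This sketch (file names follow the FUNNY example above) forces call-level scoping so the ILE program behaves the "old" way:

```rpgle
     D QCmdExc         PR                  ExtPgm('QCMDEXC')
     D  CmdStr                      256A   Const Options(*VarSize)
     D  CmdLen                       15P 5 Const
     D Cmd             S            256A
      * Issue the override with explicit call-level scoping
     C                   Eval      Cmd = 'OVRDBF FILE(FUNNY) ' +
     C                             'TOFILE(QGPL/FUNNY) ' +
     C                             'OVRSCOPE(*CALLLVL) SECURE(*YES)'
     C                   CallP     QCmdExc(Cmd: %Len(%TrimR(Cmd)))
```

QCMDEXC takes the command string and its length (15,5 packed); the override must be issued before the file is opened.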

Override files without using the OVRDBF command Sometimes an ILE RPG program needs to operate on several different files serially (e.g., a print utility that processes all the data members of a particular file object), opening and closing each file in succession. In these cases, you often want to avoid the overhead of program initiation and instead just open each file in turn within the program. But to do so, you must override the database file using the OVRDBF command, which normally requires a CL program. You can, however, execute the OVRDBF command within the RPGLE program using the QCMDEXC API. In either case we are taking the help of another program to override the file.
Can't we do this without using any other program, or even without using the OVRDBF command at all? The answer is yes; in V5R1 we can achieve this using the EXTFILE and EXTMBR keywords. Code: Consider a file FILE1 that has three members, say MBR1, MBR2 and MBR3. Now, how can we process all the members' data in a single program without using OVRDBF? See the following sample code.

     FFile1     IF   E             Disk    ExtFile(FilNam)
     F                                     ExtMbr(MbrNam)
     F                                     UsrOpn
      *
     D FilNam          S             10A   Inz
     D MbrNam          S             10A   Inz
      *
      * Process first member data.
     C                   Eval      FilNam = 'FILE1'
     C                   Eval      MbrNam = 'MBR1'
     C                   Open      File1
     C                   Read      File1
     C                   DoW       Not %Eof(File1)
      * ... process the record ...
     C                   Read      File1
     C                   EndDo
     C                   Close     File1
      *
      * Process second member data.
     C                   Eval      FilNam = 'FILE1'
     C                   Eval      MbrNam = 'MBR2'
     C                   Open      File1
     C                   Read      File1
     C                   DoW       Not %Eof(File1)
      * ... process the record ...
     C                   Read      File1
     C                   EndDo
     C                   Close     File1
      *
      * Process third member data.
     C                   Eval      FilNam = 'FILE1'
     C                   Eval      MbrNam = 'MBR3'
     C                   Open      File1
     C                   Read      File1
     C                   DoW       Not %Eof(File1)
      * ... process the record ...
     C                   Read      File1
     C                   EndDo
     C                   Close     File1

With this code we can override the files without using the OVRDBF command or the QCMDEXC API.

Question: How do I redefine the length of a numeric field in RPG IV? Example: I have a file with a 15,4 numeric field and I want the program to see it as 13,2. Apart from the various methods of moving it into a suitably sized field, etc., I'm asking whether a specific instruction exists. Answer (translated from Italian): You could use a DS:

     D                 DS
     D Numero                        15S 4
     D Numerino                      13S 2   Overlay(Numero)


This also works with an externally described DS, but you must pay attention to the position of the decimal point: if the number of integer digits stays the same, there is no problem; otherwise you must apply this rule: x = IntegerDigits(Numero) - IntegerDigits(Numerino) + 1

     D                 DS                  Inz
     D Numero                        15S 4
     D Numerino                      13S 2   Overlay(Numero:x)

Example:

     D                 DS                  Inz
     D Numero                        15S 4
     D Numerino                      11S 2   Overlay(Numero:3)

How do I right-justify a character field in RPG IV? You can use EVALR:

     D Input           S             24A   Inz('1234567890')
     D Output          S             50A   Inz(*All'X')
     C                   EvalR     Output = %TrimR(Input)

BINARY(4) means a 4-byte binary number. In RPG III, this means a subfield of a data structure that is defined with 4 bytes and has the 'B' type. In RPG IV, there are two kinds of 4-byte binary number: the 10-digit integer or the 9-digit binary. The 10-digit integer is better when dealing with APIs. If you define an integer or binary number using length notation (no from-position), you give the number of digits: 10i 0 or 9B 0. A very common error is to define a BINARY(4) field or parameter using length notation as 4B 0. This always causes problems when calling the API. Why does RPG's 'binary 4' drop the high-order digit? Well, RPG's 'binary' data type does not support the full range of numbers possible in a binary number. Maybe I need to back up. A binary number (in the sense the manual uses the word binary) is one in which each binary digit represents a numeric value in increasing powers of two. So a 2-digit binary number can hold 2^2 = 4 distinct values. An 8-digit binary number can hold 2^8 = 256 values, i.e., numbers as high as 255. A two-byte binary number has 16 bits (digits), so it can hold 2^16 = 65536 values, as high as 65535. (I'm ignoring the sign for this discussion.) An RPG 'binary' data type can be two bytes or four. Sneaking a peek at the RPG Reference, Chapter 10 (Data types), we can see that the 2-byte 'binary' gets defined with a length of 4. Theoretically a two-byte field should be able to hold a value up to 65535 as described above. But with length 4, we can only store up to 9999! That high-order digit simply can't be stored in a 4-digit field. An RPG 'integer' data type is a Real, True, Honest implementation of the binary data format (as laid out in the RPG Reference). 'Integer' includes interpreting the highest-order bit as the sign, so a two-byte integer can actually hold values between -32768 and +32767 (2^15 = 32768). The 'unsigned' data type is what I was describing above.
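To make the trap concrete, here is how the three definitions compare in length-notation D-specs (field names invented); only the first two actually occupy the 4 bytes that a BINARY(4) API field expects:

```rpgle
      * 4 bytes of storage, full -2,147,483,648..2,147,483,647 range:
      * the best match for a BINARY(4) API parameter
     D ApiInt          S             10I 0
      * Also 4 bytes of storage, but RPG only lets you use 9 digits
     D ApiBin          S              9B 0
      * The classic mistake: 4 DIGITS, which is only 2 bytes of
      * storage - the API will read/write past the field
     D Wrong           S              4B 0
```

In length notation the number is digits, not bytes, which is exactly why 4B 0 is not a 4-byte binary.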
Can I pass a variable to an RPG program and have it open the referenced file? Can I use another file to hold a list of fields I want to read? Both of these questions are asking if RPG can use indirection to get at files and fields instead of declaring them directly in the program. The answer is a qualified yes. Using SQL, you can construct a dynamic SELECT statement to fetch data into your program. What you actually do with the buffer is up to you. The following sample code has no error handling and assumes that the values you are looking for are character, not decimal. If you try it with a decimal, you'll get a -303 SQLCOD, telling you that the host variable is not compatible. It's also missing the WHERE clause on the prepared SELECT statement - that's where you'd select "that particular customer." Also, you should probably use SQLSTT instead of SQLCOD. Here is the source code:

     A          R RFLDNAME
     A            FLDNAME       10A
     A          K FLDNAME

     H Debug
     c* Example of indirect reference to fields
     c* CRTSQLRPGI *CURLIB/FLDNAMES Dbgview(*Source) Objtype(*Module)
     c* CRTPGM PGM(FLDNAMES) ACTGRP(QILE) DETAIL(*BASIC)

     d TRUE            s             10i 0 inz(-1)
     d FALSE           s                   like(TRUE) inz(0)
     d DBField         s             10a
     d DBFieldState    s                   like(TRUE)
     d FldEOF          s                   like(TRUE)
     d FldMsg          s             50

     d                 sds
     d JobName               244    253

     d CheckMand       pr                  like(DBFieldState)
     d FldName                             like(DBField) const

     c* Spin through the database file looking for field names
     c/exec sql
     c+ declare Field cursor for
     c+   Select FLDNAME from FLDNAMES
     c+   order by FLDNAME
     c/end-exec
     c/exec sql
     c+ Open Field
     c/end-exec
     C                   Eval      FldEOF = FALSE
     C                   DoW       FldEOF = FALSE
     c/exec sql
     c+ Fetch next from Field into :DBField
     c/end-exec
     C                   If        SQLCOD <> 0
     C                   Eval      FldEOF = TRUE
     C                   Else
     C                   Eval      DBFieldState = CheckMand(DBField)
     C                   Eval      FldMsg = 'Field ' + %trim(DBField)
     C                                    + ' = ' + %editc(DBFieldState:'X')
     C     FldMsg        Dsply                   JobName
     C                   EndIF
     C                   EndDo
     c/exec sql
     c+ Close Field
     c/end-exec
     C                   Eval      *InLR = *On

      * Check to see that the passed-in field name contains
      * data. Since this is a boilerplate, error handling is minimal.
     p CheckMand       b
     d CheckMand       pi                  like(DBFieldState)
     d FldName                             like(DBField) const

     d FieldState      s                   like(TRUE)
     d SqlStm          s            512a
     d FldData         s            512a

     c                   Eval      FieldState = FALSE
      * We'll prepare a dynamic SQL statement to see if the field
      * contains data or not
     c/exec sql
     c+ Declare FldTest cursor for DynFldTest
     c/end-exec
     c                   eval      SqlStm = 'Select ' + %trim(FldName)
     c                                    + ' from Master'
     c/exec sql
     c+ Prepare DynFldTest from :SqlStm
     c/end-exec
     c/exec sql
     c+ Open FldTest using :SqlStm
     c/end-exec
     c/exec sql
     c+ Fetch next from FldTest into :FldData
     c/end-exec
     C                   If        SQLCOD = 0
     C                   if        FldData <> *Blanks
     C                   Eval      FieldState = TRUE
     C                   endIf
     C                   EndIF
     c/exec sql
     c+ Close FldTest
     c/end-exec
     c                   Return    FieldState
     p CheckMand       e


How do I replace *ENTRY with a prototype?

     h dftactgrp(*no)

     d main            pr                  extpgm('RPGIVPIPGM')
     d numberIn                      15p 5

     d*ENTRY
     d main            pi
     d numberIn                      15p 5

     c                   if        %parms > 0
     c     numberIn      dsply
     c                   else
     c     'Need number!' dsply
     c                   endif
     c                   eval      *inlr = *on
In a 'real' program, the "pr" section would go in a /copy member to be used here and in all the calling programs. This helps ensure that everybody is using the same parameter definitions. Rob Berendt had the excellent suggestion that '*ENTRY' be placed next to the "pi" section so that people scanning for the *ENTRY parameter list would be directed to the right place.

3.47 DSPATR (Display Attribute) Keyword Use this field-level keyword to specify one or more display attributes for the field you are defining. You can specify the DSPATR keyword more than once for the same field, and you can specify more than one attribute for the same keyword. However, each attribute (for example, UL) can be specified only once per field. Note: The effects of attributes may not appear on the display, depending on the hardware or software emulator you are using. The format for the keyword is one of the following: DSPATR(attribute-1 [attribute-2 [attribute-3 [...]]]) or DSPATR(&program-to-system-field). If you specify more than one attribute for the same field, whether in one keyword or in separate keywords, each attribute that is specified (and in effect when the field is displayed) affects the field. For example, if you want a field to be displayed with its image reversed and with high intensity, specify either DSPATR(RI HI), or DSPATR(RI) and DSPATR(HI). The program-to-system-field parameter is required and specifies a field that must be defined in the record format as alphanumeric (A in position 35), with a length of one, and with usage P (P in position 38). The program uses this P-field to set the display attribute for the field this DSPATR keyword applies to. The named P-field can be used for more than one field within the record being defined. One DSPATR P-field is allowed per field. The P-field contains the display attribute and identifies whether the field should be protected. See "Valid P-field Values" in topic 3.47.3. The following are valid attributes for the first format of the DSPATR keyword:

For All Fields
Display Attribute   Meaning
BL                  Blinking field
CS                  Column separator
HI                  High intensity
ND                  Nondisplay
PC                  Position cursor

RI                  Reverse image
UL                  Underline

For Input-Capable Fields Only
Display Attribute   Meaning
MDT                 Set changed data tag when displayed
OID                 Operator identification
PR                  Protect contents of field from input keying
SP                  Select by light pen

Notes: 1. If you specify the UL, HI, and RI attributes on the 5250 display station for the same field, the result is the same as if you had specified ND. 2. If OID is specified, then SP should not be specified. Neither OID nor SP can be optioned unless specified with another display attribute. 3. Display attributes BL, CS, HI, RI, and UL can also be specified at the file, record, or field level as parameter values on the CHGINPDFT keyword. 4. Display attributes CS, HI, and BL can cause fields on the 5292, 3477 Model FC, 3487 Model HC, 3179, 3197 Model C1 and C2, and 3488 (5) color display stations to appear as color fields. See "COLOR (Color) Keyword" in topic 3.36 for more information. 5. If you are using an IBM Personal System/2 (PS/2) computer that is emulating a 5250 display station and you are directly changing the EBCDIC screen buffer, you need to set the MDT attribute. See the IBM Personal Computer Enhanced 5250 Emulation Program Technical Reference manual for additional information. 6. If you are using a PS/2 computer and VGA monitor, the UL attribute does not work due to hardware-specific limitations in the way buffers are used. Option indicators are valid for this keyword, except when the attributes OID or SP are the only display attributes specified. Detailed descriptions of each of the attributes follow the coding example and sample display provided in Figure 152.
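As a worked example of both keyword formats, this DDS sketch (record and field names invented; column positions approximate) shows fixed attributes on one field and a program-to-system P-field controlling another:

```dds
     A          R SCREEN1
      * Fixed attributes: high intensity plus underline
     A            CUSTNAME      20A  O  3  2DSPATR(HI UL)
      * Attribute chosen at run time by the program via the P-field
     A            STATUS        10A  O  4  2DSPATR(&ATTR)
      * The P-field itself: alphanumeric, length 1, usage P
     A            ATTR           1A  P
```

At run time the program moves one of the valid P-field values into ATTR before writing the record, so the same screen can show STATUS highlighted, reversed, or hidden as needed.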


How do I use activation groups? This answer is intended to be a starting point for ILE beginners. More than most FAQ answers, it is NOT intended to be the absolute, correct and only way to approach activation groups. This FAQ is a work in progress, based mostly on the RPG400-L archive thread at http://archive.midrange.com/rpg400-l/200203/threads.html#00130 Everyone reading this should have already read the ILE Concepts manual (V5R2): http://publib.boulder.ibm.com/iseries/v5r2/ic2924/books/c4156066.pdf and the RPG IV Redbook: http://www.redbooks.ibm.com/abstracts/sg245402.html The activation group concept is intended to be a way to subdivide a job into smaller portions, especially in the areas of overrides and static memory. An activation group does NOT span jobs! The corollary is that programs which run in the same AG are intended to be developed as a single cooperative application. Activation group strategies revolve around the various parameters on the CRTRPGMOD, CRTBNDRPG, CRTPGM and CRTSRVPGM commands: NAG - Named activation group. Includes *NEW - create a system-named group when the program is activated *CALLER - inherit the AG from the program that called this one DAG - Default activation group. Usually OPM, but can include *CALLER - inherit the AG from the program that called this one Remembering always that an AG is a subdivision of a job, why would we want to do such a thing? To answer that, we need to think about application design for a bit. In most OPM designs, a typical job might consist of a single CL program driver that calls several other CL programs. They, in turn might issue an override and call an RPG program, like this: CLMAIN CL01 OVRPRTF FORMTYPE() RPG01 /* Print selected records */ RPG02 /* Mark records for deletion */ CL02 OVRPRTF FORMTYPE() RPG03 /* Print totals */ RPG04 /* Clear totals for next month */ Let's modify RPG03 so that it uses a subprocedure. 
Because of that subprocedure, we can no longer use DFTACTGRP(*YES). [As Jon Paris often comments, we should really think of this parameter as OPMCOMPATIBILITYMODE.] What do we do? If we use *NEW, then RPG03 will create a new AG, run there, and then the system will delete the AG when RPG03 completes. That's a fair amount of overhead considering that we don't _need_ the job subdivided. If we give the activation group a custom name like BUCK, then we have two problems: 1) We have to know when to destroy AG(BUCK), because the system won't clean
it up for us. 2) We need to worry about name collisions. What other programs in this job might use AG(BUCK) and accidentally share overrides? What other programs NEED to share overrides? Do they have the same AG name? So while choosing a single named AG like your company name might seem like a good idea at first, you should think about using *CALLER. *CALLER allows you to compile and use subprocedures, but you don't have to worry about what activation group to run in. Everything runs in the default AG, just as it always did! Programs running in the same AG are designed to share resources. This strategy implies that you will never try to end the activation group (for instance with RCLACTGRP) and that you will never need to re-activate a program.

Service programs. Service programs are like "procedure libraries." They provide the ability to semi-dynamically load code at runtime. "Semi" because once a service program is activated, it stays activated until the AG or job ends. The implication is that you can't re-compile a program/service program that runs in the DAG unless you get everybody out of it. That is, they won't see the change until their job (and DAG) ends. Once you've set up a few service programs, you'll eventually run into the scenario where you want the service program to be shared between G/L and A/R, but you want different overrides (or static memory) for each app. Now you want to subdivide your job into different activation groups. You're an ILE programmer!

Enter *NEW/*CALLER. By compiling the first A/R program as AG(*NEW), a new AG will be created every time ARMENU runs in the job. All the subsequent programs will inherit that AG. YOUR overhead is reduced because you don't need to keep track of who is using what private AG name, and the system will clean up the AG once all the programs are done with it.

MAINMENU        DAG
  ARMENU        *NEW      5F4A716B (system generated)
    ARINQ       *CALLER   5F4A716B (inherited)
    CUSTSRVPGM  *CALLER   5F4A716B (inherited)
  GLMENU        *NEW      2A9F7E14 (system generated)
    GLINQ       *CALLER   2A9F7E14 (inherited)
    CUSTSRVPGM  *CALLER   2A9F7E14 (inherited)

You can see that the system created TWO separate AGs and that the service program CUSTSRVPGM keeps the A/R and G/L apps separated from each other. So you can keep a counter rolling of the number of accounts viewed today, and the counter for A/R will be different from the one for G/L. This is impossible to do in OPM. This *NEW/*CALLER strategy is the consensus of the group at RPG400-L.
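The compile commands behind that picture might look like the following (library, module, and program names are invented; other parameters left at their defaults):

```cl
CRTBNDRPG PGM(MYLIB/ARMENU)  DFTACTGRP(*NO) ACTGRP(*NEW)
CRTBNDRPG PGM(MYLIB/ARINQ)   DFTACTGRP(*NO) ACTGRP(*CALLER)
CRTBNDRPG PGM(MYLIB/GLMENU)  DFTACTGRP(*NO) ACTGRP(*NEW)
CRTBNDRPG PGM(MYLIB/GLINQ)   DFTACTGRP(*NO) ACTGRP(*CALLER)
CRTSRVPGM SRVPGM(MYLIB/CUSTSRVPGM) MODULE(MYLIB/CUSTMOD) +
          EXPORT(*ALL) ACTGRP(*CALLER)
```

Only the two menu programs name *NEW; everything they call, including the service program, rides along with *CALLER.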


a technique describing how to declare arrays in RPG IV with a variable number of array elements. Arrays of this nature are referred to as dynamic arrays. This week, I am presenting a new technique for declaring dynamic arrays. This one does not have the complexities of the previous technique. A major shortcoming of the previous technique I illustrated was the need to allocate and reallocate memory dynamically based on a mathematical equation (the number of desired elements multiplied by the length of a single element). In addition, the requirement that the number of array elements currently allocated had to be tracked by the program is undesirable. The technique was usable but not fun. This time, none of those shortcomings occur. About the only oddity is the use of a pointer, and the use of that pointer isn't complex at all. Here's the outline of this new technique:

- Declare the array with the BASED keyword.
- Get a pointer to a user space.
- Assign that pointer to the pointer in the BASED keyword.

Other than that, you can use the array as if it were dynamic, because it is now automatically growing as you access elements in the array. So, if one time you access five elements and another time you access 5,000 elements, your program will work, and none of the allocate/deallocate issues exist. First things first. To create a dynamic array, you need to declare the array with the BASED keyword. Within the BASED keyword, specify the name of a field. The field name does not need to exist, and probably should not exist. If it does not exist, the RPG IV compiler automatically generates the correct declaration for it. If it does exist, it must be declared with a data type of pointer (*). The following Definition statement declares an array named DYNARR and specifies the BASED keyword. The BASED(pArr) keyword identifies the based-on pointer field. Since there is no explicit declaration for that field, RPG IV automatically declares one for you.
.....DName+++++++++++EUDS.......Length+TDc.Functions+++++++++++++++
     D dynArr          S            200A   Dim(32766) BASED(pArr)

The following two Definition statements have the same effect as the previous one; however, the pArr variable is explicitly declared on the first line. Therefore, the compiler does not need to declare one for you. This style is useful for more advanced programming in which, perhaps, you would leverage the pArr variable for more than one use, or you might use this style simply for completeness.
.....DName+++++++++++EUDS.......Length+TDc.Functions+++++++++++++++
     D pArr            S               *
     D dynArr          S            200A   Dim(32766) BASED(pArr)

Whenever a variable is declared and that declaration contains the BASED keyword (as in the examples above), the compiler does not allocate storage for the variable. That means that if you try to move something into DYNARR, you'll get a runtime error, because no storage has been allocated for the variable. When the BASED keyword is involved, you are telling the compiler that you will allocate the storage for the variable yourself. This could mean using the ALLOC/REALLOC opcodes or simply assigning the address of another variable to the pointer. See the example below.
.....DName+++++++++++EUDS.......Length+TDc.Functions+++++++++++++++
0001 D pData           S               *
0002 D Data            S             32A   BASED(pData)
0003 D Real            S            128A
.....C..n01..............OpCode(ex)Extended-factor2++++++++++++++++
0004 C                   eval      pData = %addr(Real)


In this example, the field DATA is declared as a 32-byte character field with the BASED keyword. The pointer field pDATA is explicitly declared on the prior statement. Initially, no storage is assigned to the pDATA pointer; therefore, DATA has no storage associated with it. To assign a value to the pDATA pointer, and consequently to provide storage for the DATA variable, an assignment statement is used (line 4). The %ADDR built-in function returns the memory location (i.e., the address) of the field identified by its first parameter. An address is the only type of data that may be stored in pointers. Once this assignment is made, the storage that has been allocated (automatically by the compiler) for the REAL variable is now also being used for the DATA variable. Overlapping fields? Yes. Notice the variance in the field lengths. The DATA field is 32 bytes long, whereas the REAL field is 128 bytes long. This is perfectly fine as long as the REAL field is at least as long as the DATA field. If the situation were reversed, however, you'd run into a problem if you attempted to access byte 33 of the 32-byte field. User Spaces as Dynamic Arrays The safest way that I've found to dynamically allocate storage for a dynamic array is to not do it at all. That is, come up with a way to make the system safely and automatically allocate the storage for you. After all, isn't that the way a true dynamic array scheme would work if IBM did it for us? The big question is, however: what is there that would do such a thing? It occurred to me that a user space object (*USRSPC) could be just the right solution to this question. User space objects are what data areas are based on. Space objects have been on this system for over 25 years, and user spaces have been around for as long as the AS/400 has been around, and then some. So they are a pretty reliable object to use. By default, user space objects are fixed-size objects, just like a data area.
However, two interesting aspects of user space objects help solve the dynamic memory problem:

1) User space objects have an attribute that controls whether the user space is fixed-length or variable-length. Changing that attribute to '1' causes the user space to become auto-extending. This means that if you create the user space with a length of 12 bytes and you attempt to read or write byte 750, the underlying interface automatically extends the user space to at least 750 bytes. You do nothing special; it just happens!

2) Using the QUSPTRUS API, you can retrieve a pointer to a user space object that works and acts just like a pointer from the %ADDR built-in function.

Given these two facts, it occurred to me that I could just do something like this:
.....DName+++++++++++EUDS.......Length+TDc.Functions+++++++++++++++
0001 D dynArr          S            200A   Dim(32766) BASED(pArray)
.....C..n01..............OpCode(ex)Extended-factor2++++++++++++++++
0002 C                   CallP     QusPtrUs(szUS:pArray:apiError)

The field named szUS contains the name of the user space. The field named pArray is the return value that receives the pointer to the user space object's data, and apiError is the standard IBM-supplied API error data structure. With just two lines of code, you can declare and assign the storage for a dynamically sized array. The best part is that you don't have to worry about deallocating or freeing up the storage for the dynamic array when you finish. Create the user space in QTEMP and forget about it! The bad news is that if you now use something like the SORTA opcode, the entire array will be sorted and hence extend the user space up to the full size of the array. That may be OK if you're expecting that to happen, but you may get unwanted results if you expected it to sort only the elements with data in

them. Obviously, a full IBM-provided solution is needed, such as the rumored %SUBARR built-in function that may allow you to segment an array and work with dynamically specified from and to elements.

Odds and Ends

The QUSPTRUS API is used to retrieve a pointer to the user space object's data. The API can be easily called with the traditional CALL/PARM opcodes. But after all, this is 2003, not 1983, so why not call it using a prototype? The source for the prototype follows:
.....DName+++++++++++EUDS.......Length+TDc.Functions+++++++++++++++
D QusPTRUS        PR                  ExtPgm('QUSPTRUS')
D  szUserspace                  20A   Const
D  pRtnPtr                        *
D  apierror                     16A   OPTIONS(*VARSIZE)

Remember, the parameter names on a prototype are just placeholders or comments. They are not field declarations. So it doesn't matter what you call them, but you should take advantage of the fact that they are not declarations and use them in lieu of comments. For example, "szUserSpace" helps to signify that the field is character and is supposed to contain the name of a user space. The apiError parameter is the standard API exception/error data structure. Unfortunately, the APIs lack consistency with respect to this data structure. Some of them require it to be passed as a parameter; on some, it is optional; and on others, there is an alternate format. For our purposes, the apiError data structure's format is declared as follows:
.....DName+++++++++++EUDS.......Length+TDc.Functions+++++++++++++++
D apiError        DS                  Inz
D  apiLen                       10I 0 Inz(%size(apiError))
D  apiRLen                      10I 0
D  apiMsgID                      7A
D  apiResv1                      1A   Inz(X'00')
D  apiErrData                   10A

A Full Example

The example that follows can be compiled on your system and should show you how this technique works. But first, you must create a user space with a relatively small size (a larger size is OK, but we're only testing at this point). To create a user space, call the CrtUsrSpace procedure (included in the RPG ToolKit) from within your RPG IV program and specify a size of something like 32 bytes, as follows:
.....C..n01..............OpCode(ex)Extended-factor2++++++++++++++++
C                   Callp     CrtUsrSpace(szUS : 32)

If you don't have the RPG ToolKit, you can key in and run the CL command and the RPG IV CPP listed in Figures 2 and 3 at the end of this article. That command performs the same function as the CrtUsrSpace procedure, but does it from within CL. Alternatively, you can call the QUSCRTUS API followed by the QUSCUSAT API to accomplish the same thing. Once the user space has been created, and the source member listed in Figure 1 is compiled, use the STRDBG command to set a break point on the last RETURN opcode (last line of code). Once the breakpoint is set, exit the debugger using F12 and then call the program. When the line containing the breakpoint is about to be run, the debugger will stop and display the source on the screen. At this point, place the cursor on the "dynArr(1700)" variable displayed in the source window and press F11. You should see "Hello World" in that element of the array. If you do the math, you'll see that the size of an array element (200 in the example in Figure 1) multiplied by the element number (1,700) comes to 340,000. This means that 340,000 bytes of storage would have been required in order for the array to successfully provide 1,700 elements. But since we are using a user space, we avoided any hand-coded allocation schemes and are indifferent about the number of elements we use.


When I created a 32-byte user space and ran this program, the size of the user space automatically grew to 344,064 on my machine.
H DftActGrp(*NO)
 * The following three lines are only used if the RPG Toolkit
 * (www.rpgiv.com/toolkit) is installed. They are not needed to
 * make this example work.
 /IF DEFINED(RTK_TOOLKIT)
 /COPY TOOLKIT/QCPYSRC,space
 /ENDIF
.....DName+++++++++++EUDS.......Length+TDc.Functions+++++++++++++++
D QusPTRUS        PR                  ExtPgm('QUSPTRUS')
D  szUserspace                  20A   Const
D  pRtnPtr                        *
D  apierror                           Like(apiError)

D apiError        DS                  Inz
D  apiLen                       10I 0 Inz(%size(apiError))
D  apiRLen                      10I 0
D  apiMsgID                      7A
D  apiResv1                      1A   Inz(X'00')
D  apiErrData                   10A

 * USER SPACE NAME
D szUS            S             20A   Inz('DYNORAMA  QTEMP')
 * DYNAMIC ARRAY (Note the "based" keyword)
D dynArr          S            200A   Dim(32766) BASED(pArr)

.....C..n01..............OpCode(ex)Extended-factor2++++++++++++++++
C                   eval      *INLR = *ON
 * If you have the RPG ToolKit installed, use it to create the
 * user space object. If you don't, you need to create the user
 * space before calling this example program.
 /IF DEFINED(RTK_TOOLKIT)
C                   Callp     CrtUsrSpace(szUS : 32)
 /ENDIF
 * Get a pointer to the user space
C                   CallP     QusPtrUs(szUS:pArr:apiError)
C                   if        apiRLen > 0
 * Something happened??? Maybe the user space does not exist.
C     apiMsgID      DSPLY
C                   return
C                   endif
 * At this point the array is mapped to a user space, so we can
 * use it just like any other array.
C                   eval      dynArr(1700) = 'Hello World!'
C                   return

Figure 1: DYNOARR is a test program to prove the dynamic array size theory.

Create User Space Made Easy

In order to create an extendable user space, two APIs must be called: QUSCRTUS (Create User Space) and QUSCUSAT (Change User Space Attributes). The QUSCRTUS API creates a fixed-length user space at the size specified and allows things like the object attribute and text to be applied. The QUSCUSAT API allows you to change some of the attributes of the user space, including the current size, the initial value (a single character repeated in each byte of the user space), and the extendability option. For some reason QUSCRTUS does not include a parameter that allows the extendability option to be specified, so QUSCUSAT must also be called.

In the RPG ToolKit for OS/400, there are procedures that allow you to easily create, change, and delete user spaces from within RPG. In addition, there are extra commands included, such as CRTUSRSPC, DLTUSRSPC, and CHGUSRSPCA. To provide this capability, I have reproduced the CRTUSRSPC command here, along with the CPP. Essentially, I have expanded the code by removing the calls to the ToolKit procedures and replacing them with calls to the OS/400 APIs mentioned above. So the ToolKit is not required to create user spaces on your system.

Listed in Figure 2 is the command definition source for the CRTUSRSPC CL command. The only required parameter is the first one, USRSPC (user space name). To test the dynamic array size theory, however,

you want to make sure you specify the size at something like 32 bytes, rather than the 32K default value. For example, the following CRTUSRSPC command creates a user space named DYNORAMA in QTEMP with a size of 32 bytes and makes it auto-extendable:

CRTUSRSPC USRSPC(QTEMP/DYNORAMA) SIZE(32) AUTOEXT(*YES)

             CMD        PROMPT('Create User Space')
             /* Command processing program is RTKCRTUS */
             PARM       KWD(USRSPC) TYPE(QUAL) MIN(1) +
                          PROMPT('User Space')
 QUAL:       QUAL       TYPE(*NAME) MIN(1) EXPR(*YES)
             QUAL       TYPE(*NAME) DFT(*CURLIB) SPCVAL((*LIBL) +
                          (*CURLIB)) EXPR(*YES) PROMPT('Library')
             PARM       KWD(SIZE) TYPE(*INT4) DFT(32766) REL(*GT 0) +
                          PROMPT('Size')
             PARM       KWD(OBJATR) TYPE(*CHAR) LEN(10) EXPR(*YES) +
                          PROMPT('Object attribute')
             PARM       KWD(AUTOEXT) TYPE(*LGL) RSTD(*YES) +
                          DFT(*YES) SPCVAL((*YES '1') (*NO '0')) +
                          EXPR(*YES) PROMPT('Auto extend')
             PARM       KWD(INZ) TYPE(*CHAR) LEN(1) RSTD(*NO) +
                          DFT(*NULL) SPCVAL((*NULL X'00') +
                          (*BLANK ' ')) EXPR(*YES) +
                          PROMPT('Initialization character')
             PARM       KWD(AUT) TYPE(*CHAR) LEN(10) RSTD(*YES) +
                          DFT(*LIBCRTAUT) SPCVAL((*LIBCRTAUT) +
                          (*CHANGE) (*EXCLUDE) (*USE) (*ALL)) +
                          EXPR(*YES) PROMPT('Authority')
             PARM       KWD(REPLACE) TYPE(*CHAR) RSTD(*YES) DFT(*NO) +
                          SPCVAL((*NO) (*YES)) EXPR(*YES) +
                          PROMPT('Replace')
             PARM       KWD(TEXT) TYPE(*CHAR) LEN(50) DFT(*BLANK) +
                          SPCVAL((*BLANK ' ')) EXPR(*YES) +
                          PROMPT('Text ''description''')
             PARM       KWD(DOMAIN) TYPE(*CHAR) RSTD(*YES) +
                          DFT(*DEFAULT) SPCVAL((*DEFAULT) (*USER) +
                          (*SYSTEM)) EXPR(*YES) PROMPT('Domain')

Figure 2: This is the command definition source for the CRTUSRSPC command.

To compile the command definition source listed in Figure 2, specify the following CRTCMD command:

CRTCMD CMD(CRTUSRSPC) PGM(MYLIB/RTKCRTUS)

Be sure to replace MYLIB with the name of the library where you've compiled the RTKCRTUS program. The source code listed in Figure 3 is the CPP for the CRTUSRSPC command. The first few dozen lines are declarations, prototypes for the APIs that are called, and the procedure interface for the program itself. Note that I avoid using the outdated *ENTRY/PLIST opcodes and instead use a procedure interface. The RTKCRTUS program is fairly straightforward; it calls just two APIs: QUSCRTUS to create the user space and then QUSCUSAT to set the auto-extendability attribute for the user space. Before running the DYNARR program from Figure 1, be sure to compile and run the CRTUSRSPC command to create the user space.
H DFTACTGRP(*NO)

D rtkcrtus        PR
D  szUserSpace                  20A
D  nUSSize                      10I 0
D  szExtAttr                    10A
D  bAutoExtend                   1N
D  InitValue                     1A
D  szPubAut                     10A
D  szReplace                    10A
D  szText                       50A
D  szDomain                     10A

D QusCRTUS        PR                  ExtPgm('QUSCRTUS')
D  UsrSpace                     20A   Const
D  ExtAttr                      10A   Const
D  nSize                        10I 0 Const
D  InitValue                     1A   Const
D  PubAuth                      10A   Const
D  szTextDesc                   50A   Const
D  Replace                      10A   Const
D  api_error                          Like(apiError) OPTIONS(*NOPASS)
D  szDomain                     10A   Const OPTIONS(*NOPASS)

D QusCUSAT        PR                  ExtPgm('QUSCUSAT')
D  RtnLibName                   10A
D  UsrSpace                     20A
D  USAttr                       64A   Const OPTIONS(*VARSIZE)
D  api_error                          Like(apiError)

D rtkcrtus        PI
D  szUserSpace                  20A
D  nUSSize                      10I 0
D  szExtAttr                    10A
D  bAutoExtend                   1N
D  InitValue                     1A
D  szPubAut                     10A
D  szReplace                    10A
D  szText                       50A
D  szDomain                     10A

D apiError        DS                  Inz
D  apiLen                       10I 0 Inz(0)
D  apiRLen                      10I 0
D  apiMsgID                      7A
D  apiResv1                      1A   Inz(X'00')
D  apiErrText                   24A

D rtnLib          S             10A

 * The QUSCUSAT data structure. This one is set up only to change
 * the auto-extendibility option to '1'.
D UserSpaceAttr   DS                  ALIGN
D  nRecdCount                   10I 0 Inz(1)
D  nAttrKey                     10I 0 Inz(3)
D  nAttrLen                     10I 0 Inz(%Size(bExtend))
D  bExtend                       1A   Inz('1')

C                   eval      *INLR = *ON
C                   Callp     QusCRTUS(szUserSpace : szExtAttr :
C                               nUSSize : InitValue : szPubAut :
C                               szText : szReplace : apiError :
C                               szDomain)
C                   if        apiRLen = 0 and bAutoExtend
 * Change the user space to AutoExtend
C                   CallP     QusCUSAT(rtnLib : szUserspace :
C                               UserSpaceAttr : apiError)
C                   endif
C                   return

Figure 3: Here's the RPG IV source for the RTKCRTUS program of the CRTUSRSPC command.

Batch Job and LDA

1. Can you be a little more specific about where/when/how you want to retrieve this LDA data? Basically, you can use the LDA in a batch job exactly the same way as you would in an interactive job. You can use the Change Variable (CHGVAR) command in CL and move the value from the LDA into a CL variable, or you can define the LDA field positions in an RPG(LE) program using a UDS and use them in the program.

2. We stored some values in the LDA of our current job interactively, then submitted a batch job. Now that batch job should retrieve the values from the LDA of the interactive job. Is it possible? If yes, how? Can anyone provide me a solution? Thanks for your information. I understand that. But my question is: as an LDA is created for every job, how can an interactive job's LDA information be retrieved by the batch job? For example, when I have an interactive job, assume the LDA


is LDA1. When I submit a batch job, LDA2 will be created for the batch job. How can I retrieve the values from LDA1 in the batch job? I hope I am clear with my question this time. If my concept is wrong, please don't hesitate to correct me.

Passing parameters from a Local Data Area (LDA)
* post #5541
* CLKelly on 09/14/2004
"Since the LDA in its current state is passed with the submit job command, all you need is to retrieve the portions you need:
CHGVAR VAR(&parm1) VALUE(%SST(*LDA 1 10))
(Where 1 is the starting position and 10 is the length.)
CALL PGM(program)..."

Author: ed  Return to Forum  2005-06-07 09.03.31
You don't need to "pass" a data area to a batch job, unless you are using the *LDA, about which Jean-Marc is correct: that is old S/36 mentality. The *LDA will follow the job. A batch job can access a data area anywhere, at any time. Use the RTVDTAARA command in your CLLE and put it into a variable defined

Author: Jean-Marc  Return to Forum  2005-06-07 01.38.50
*LDA is a temporary data area attached to the job. It comes from the S/36 environment history. The other data areas are normal objects, located in libraries. *LDA is transmitted when you submit a job. But why do you need a data area to transmit information to the batch job?

Author: Doldrums  Return to Forum  2005-06-07 01.12.45
When do we use the *LDA data area and when do we use the other data areas? Also, how do I pass a data area from an interactive job to a batch job?

If you run the DO loop 20001 times instead of 50, this program will fail because the array dimension is fixed at 20000. The program allocates more memory for the array, but its dimension is still fixed at 20000.

Q: I want to increase its dimension at run time beyond 20000.
A: Define the array as based. Here is an example:

 * array definitions
D array           S             10    DIM(20000) BASED(PTR)
D index           S              7  0
 * memory allocation data items
D ptr             S               *
D nbr_of_elems    S              5  0 INZ(10)
D mem_size        S              7  0 INZ
D x               S             10i 0
 * allocate the initial memory heap
 * (initial # of elements * the size of the array)
C                   EVAL      mem_size = %size(array) * nbr_of_elems
C                   ALLOC     mem_size      ptr
C                   EVAL      x = %elem(array)
 * loop to test
C     1             DO        50            index
 * does the index exceed the current # of array elements?
C                   IF        index > nbr_of_elems
 * recalculate the memory heap size by adding 10 to the number of
 * elements and multiplying the size of the array by the new
 * number of elements.
C                   EVAL      nbr_of_elems = nbr_of_elems + 10
C                   EVAL      mem_size = %size(array) * nbr_of_elems
 * reallocate the memory heap and increase the size
C                   REALLOC   mem_size      ptr
C                   ENDIF
 * move data for test
C                   MOVE      index         array(index)
C                   ENDDO
 * deallocate the memory utilized
C                   DEALLOC                 ptr
C                   EVAL      *inlr = *on

How do I access another job's QTEMP?

From an email by Larry Ducie, available in the rpg400 archives at:
http://archive.midrange.com/rpg400l/200505/msg00297.html

Chaps,

I've just reviewed the code and it's simpler than I remembered - 4 CLLE programs and three commands. They're all very small so I'll post all 7 at the foot of this mail. (I hope you don't mind David - and apologies to all, but there's nothing worse than looking for something in the archives and all you find are references to source passed off-list.)

First - an overview of how it works:

To get started quickly, all objects must be compiled into library JOBCMDLIB. This is because it's hard-coded. But as you now have the source, you can do what you want. :-)

Basically, it works by starting a job trace on the job you are investigating. When starting a trace you can specify an exit program. As this exit program is invoked WITHIN the environment of the job being traced, it can be used as a hook within the job. Every time a traceable event occurs within the job, it will call this program. By default this program marks the trace as processed and discards it - so an actual trace is not generated. You can optionally specify whether you wish to service the job - good for debugging.

When you issue command STRJOBCMD it starts a trace and registers program EXITTRC. It also creates a message queue in library JOBCMDLIB that uniquely references that job. Every time the exit program is called, it looks for entries on that message queue and passes the command on it directly to QCMDEXC. This allows you to run commands within that job - using the job's user profile!

When you issue command SNDJOBCMD it lets you create a command string and then passes it to the message queue created for the job you're tracing. The next time the job generates a traceable event, it executes your command. It's as simple as that!

When you issue command ENDJOBCMD it ends the trace.
You can specify whether you wish to end the service of the job too - good if you're servicing and debugging the job and wish to keep debugging. That's it. To create the commands:

- Create a library called JOBCMDLIB.
- Compile the four CLLE programs into library JOBCMDLIB.
- Compile the three commands into library JOBCMDLIB, specifying the CLLE of the same name as the processing program.

You will also need to create a message queue called CMDLOG in library JOBCMDLIB. This will log all commands sent to jobs via this route. This is important, as there is practically no way of knowing somebody is doing this. You could piggyback one job after another, and then it would be almost impossible to trace the fact that actions invoked within one job were actually caused by another user running another job.

Finally, here's a good test:

Start two sessions. Leave session 1 on a command line. Add JOBCMDLIB to the library list of session 2 and issue command STRJOBCMD - enter the job details of session 1, and opt to service the job.


In session 2, issue command SNDJOBCMD and enter the job details of session 1 again. This time, type a call command to your favourite interactive screen program. Simply press Enter in session 1. The program should be called, and the screen should appear.

Have fun! (I've copy 'n' pasted the source into the mail so I hope it formats OK.) I'll try and set up a savf to download on my website.

Cheers

Larry Ducie

Source:
1) STRJOBCMD - CLLE
2) SNDJOBCMD - CLLE
3) ENDJOBCMD - CLLE
4) EXITTRC - CLLE
5) STRJOBCMD - CMD
6) SNDJOBCMD - CMD
7) ENDJOBCMD - CMD
--------------------------------------------------------------------------------
1) STRJOBCMD - CLLE

/* **************************************************************** */
             PGM        PARM(&JOB &SRVJOB)

             DCL        VAR(&JOB)    TYPE(*CHAR) LEN(26)
             DCL        VAR(&NAME)   TYPE(*CHAR) LEN(10)
             DCL        VAR(&USER)   TYPE(*CHAR) LEN(10)
             DCL        VAR(&NUMBER) TYPE(*CHAR) LEN(6)
             DCL        VAR(&SRVJOB) TYPE(*CHAR) LEN(1)
             DCL        VAR(&MSGQ)   TYPE(*CHAR) LEN(10)

/* Extract job details... */
             CHGVAR     VAR(&NAME)   VALUE(%SST(&JOB 1 10))
             CHGVAR     VAR(&USER)   VALUE(%SST(&JOB 11 10))
             CHGVAR     VAR(&NUMBER) VALUE(%SST(&JOB 21 6))

/* Service job..? */
             IF         COND(&SRVJOB *EQ 'Y') THEN(DO)
             STRSRVJOB  JOB(&NUMBER/&USER/&NAME)

/* Job does not exist... */
             MONMSG     MSGID(CPF3520) EXEC(DO)
             SNDPGMMSG  MSG('The job you are trying to send a +
                          command to does not exist') TOPGMQ(*PRV)
             GOTO       CMDLBL(END)
             ENDDO

/* Job already being serviced... */
             MONMSG     MSGID(CPF3501) EXEC(DO)
             SNDPGMMSG  MSG('Job is already being serviced, traced +
                          or debugged') TOPGMQ(*PRV)
             GOTO       CMDLBL(END)
             ENDDO

/* Already servicing another job... */
             MONMSG     MSGID(CPF3938) EXEC(DO)
             SNDPGMMSG  MSG('You are already servicing another job') +
                          TOPGMQ(*PRV)
             GOTO       CMDLBL(END)
             ENDDO

/* General errors... */
             MONMSG     MSGID(CPF3500) EXEC(DO)
             SNDPGMMSG  MSG('An error occurred when trying to +
                          service the job. See joblog for +
                          details.') TOPGMQ(*PRV)
             GOTO       CMDLBL(END)
             ENDDO
             ENDDO

/* Create message queue... */
             CHGVAR     VAR(&MSGQ) VALUE('SRVJ' *CAT &NUMBER)
             CRTMSGQ    MSGQ(JOBCMDLIB/&MSGQ)
             MONMSG     CPF9999

/* Set trace... */
             TRCJOB     EXITPGM(JOBCMDLIB/EXITTRC)
             MONMSG     MSGID(CPF9999) EXEC(DO)
             SNDPGMMSG  MSG('An error occurred when trying to +
                          trace the job. See joblog for +
                          details.') TOPGMQ(*PRV)
             IF         COND(&SRVJOB *EQ 'Y') THEN(DO)
             ENDSRVJOB
             MONMSG     CPF9999
             ENDDO
             DLTMSGQ    MSGQ(JOBCMDLIB/&MSGQ)
             MONMSG     CPF9999
             GOTO       CMDLBL(END)
             ENDDO

 END:        ENDPGM

--------------------------------------------------------------------------------
2) SNDJOBCMD - CLLE

/* **************************************************************** */
             PGM        PARM(&JOB &CMD)

             DCL        VAR(&JOB)    TYPE(*CHAR) LEN(26)
             DCL        VAR(&NAME)   TYPE(*CHAR) LEN(10)
             DCL        VAR(&USER)   TYPE(*CHAR) LEN(10)
             DCL        VAR(&NUMBER) TYPE(*CHAR) LEN(6)
             DCL        VAR(&CMD)    TYPE(*CHAR) LEN(512)
             DCL        VAR(&MSGQ)   TYPE(*CHAR) LEN(10)

/* Extract job details... */
             CHGVAR     VAR(&NAME)   VALUE(%SST(&JOB 1 10))
             CHGVAR     VAR(&USER)   VALUE(%SST(&JOB 11 10))
             CHGVAR     VAR(&NUMBER) VALUE(%SST(&JOB 21 6))

/* Send command to message queue... */
             CHGVAR     VAR(&MSGQ) VALUE('SRVJ' *CAT &NUMBER)
             SNDPGMMSG  MSG(&CMD) TOMSGQ(JOBCMDLIB/&MSGQ)
             MONMSG     MSGID(CPF2469) EXEC(DO)
             SNDPGMMSG  MSG('An error occurred while sending the +
                          command. Please re-enter the details.')
             GOTO       CMDLBL(END)
             ENDDO

 END:        ENDPGM

--------------------------------------------------------------------------------
3) ENDJOBCMD - CLLE

/* **************************************************************** */
             PGM        PARM(&JOB &SRVJOB)

             DCL        VAR(&JOB)    TYPE(*CHAR) LEN(26)
             DCL        VAR(&NAME)   TYPE(*CHAR) LEN(10)
             DCL        VAR(&USER)   TYPE(*CHAR) LEN(10)
             DCL        VAR(&NUMBER) TYPE(*CHAR) LEN(6)
             DCL        VAR(&SRVJOB) TYPE(*CHAR) LEN(1)
             DCL        VAR(&MSGQ)   TYPE(*CHAR) LEN(10)

/* Extract job details... */
             CHGVAR     VAR(&NAME)   VALUE(%SST(&JOB 1 10))
             CHGVAR     VAR(&USER)   VALUE(%SST(&JOB 11 10))
             CHGVAR     VAR(&NUMBER) VALUE(%SST(&JOB 21 6))

/* End trace... */
             TRCJOB     SET(*END)
             MONMSG     CPF9999

/* End service job... */
             IF         COND(&SRVJOB *EQ 'Y') THEN(DO)
             ENDSRVJOB
             MONMSG     CPF9999
             ENDDO

/* Delete message queue... */
             CHGVAR     VAR(&MSGQ) VALUE('SRVJ' *CAT &NUMBER)
             DLTMSGQ    MSGQ(JOBCMDLIB/&MSGQ)
             MONMSG     CPF9999

             ENDPGM

--------------------------------------------------------------------------------
4) EXITTRC - CLLE

             PGM        PARM(&TRCDTA)

             DCL        VAR(&TRCDTA) TYPE(*CHAR) LEN(1024)
             DCL        VAR(&CMD)    TYPE(*CHAR) LEN(1024)
             DCL        VAR(&LEN)    TYPE(*DEC)  LEN(15 5) VALUE(512)
             DCL        VAR(&MSGTXT) TYPE(*CHAR) LEN(512)
             DCL        VAR(&NAME)   TYPE(*CHAR) LEN(10)
             DCL        VAR(&USER)   TYPE(*CHAR) LEN(10)
             DCL        VAR(&NUMBER) TYPE(*CHAR) LEN(6)
             DCL        VAR(&MSGQ)   TYPE(*CHAR) LEN(10)
             DCL        VAR(&SENDER) TYPE(*CHAR) LEN(80)
             DCL        VAR(&LOG)    TYPE(*CHAR) LEN(512)

/* Set trace record as processed... */
             CHGVAR     VAR(&TRCDTA) VALUE(' ')

/* Retrieve job attributes... */
             RTVJOBA    JOB(&NAME) USER(&USER) NBR(&NUMBER)

/* Receive message from queue... */
             CHGVAR     VAR(&MSGQ) VALUE('SRVJ' *CAT &NUMBER)
             RCVMSG     MSGQ(JOBCMDLIB/&MSGQ) MSG(&MSGTXT) +
                          SENDER(&SENDER)
             MONMSG     MSGID(CPF9999) EXEC(GOTO CMDLBL(END))

/* Set command... */
             CHGVAR     VAR(%SST(&CMD 1 1024)) VALUE(%SST(&MSGTXT 1 &LEN))

/* Send log messages, if message received... */
             IF         COND(&MSGTXT *NE ' ') THEN(DO)
             CHGVAR     VAR(&LOG) VALUE(&SENDER *CAT &NAME *CAT +
                          &USER *CAT &NUMBER)
             SNDPGMMSG  MSG(&LOG) TOMSGQ(JOBCMDLIB/CMDLOG)
             CHGVAR     VAR(&LOG) VALUE(&CMD)
             SNDPGMMSG  MSG(&LOG) TOMSGQ(JOBCMDLIB/CMDLOG)
             MONMSG     CPF0000

/* Call QCMDEXC to process command... */
             CALL       PGM(QCMDEXC) PARM(&CMD &LEN)
             MONMSG     CPF0000
             ENDDO

 END:        ENDPGM

--------------------------------------------------------------------------------
5) STRJOBCMD - CMD

             CMD        PROMPT('Start Job Command Processing')

             PARM       KWD(JOB) TYPE(Q1) MIN(1) PROMPT('Job name . +
                          . . . . . . . . . .')
             PARM       KWD(SRVJOB) TYPE(*CHAR) LEN(1) RSTD(*YES) +
                          DFT(Y) VALUES(Y N) CHOICE('(Y/N)') +
                          PROMPT('Service Job . . . . . . . . .')

/*********************************************************************/
 Q1:         QUAL       TYPE(*NAME) LEN(10) MIN(1) CHOICE('Name')
             QUAL       TYPE(*NAME) LEN(10) MIN(1) CHOICE('User') +
                          PROMPT('User . . . . . . . . . . . . .')
             QUAL       TYPE(*CHAR) LEN(6) RANGE(000000 999999) +
                          MIN(1) CHOICE('000000-999999') +
                          PROMPT('Number . . . . . . . . . . . .')

--------------------------------------------------------------------------------
6) SNDJOBCMD - CMD

             CMD        PROMPT('Send Job Command')

             PARM       KWD(JOB) TYPE(Q1) MIN(1) PROMPT('Job name . +
                          . . . . . . . . . .')
             PARM       KWD(CMD) TYPE(*CMDSTR) LEN(512) MIN(1) +
                          CHOICE('Command') +
                          PROMPT('Command . . . . . . . . . . .')

/*********************************************************************/
 Q1:         QUAL       TYPE(*NAME) LEN(10) MIN(1) CHOICE('Name')
             QUAL       TYPE(*NAME) LEN(10) MIN(1) CHOICE('Name') +
                          PROMPT('User . . . . . . . . . . . . .')
             QUAL       TYPE(*CHAR) LEN(6) RANGE(000000 999999) +
                          MIN(1) CHOICE('000000-999999') +
                          PROMPT('Number . . . . . . . . . . . .')

--------------------------------------------------------------------------------
7) ENDJOBCMD - CMD

             CMD        PROMPT('End Job Command Processing')

             PARM       KWD(JOB) TYPE(Q1) MIN(1) PROMPT('Job name . +
                          . . . . . . . . . .')
             PARM       KWD(SRVJOB) TYPE(*CHAR) LEN(1) RSTD(*YES) +
                          DFT(Y) VALUES(Y N) CHOICE('(Y/N)') +
                          PROMPT('End service job . . . . . . .')

/*********************************************************************/
 Q1:         QUAL       TYPE(*NAME) LEN(10) MIN(1) CHOICE('Name')
             QUAL       TYPE(*NAME) LEN(10) MIN(1) CHOICE('User') +
                          PROMPT('User . . . . . . . . . . . . .')
             QUAL       TYPE(*CHAR) LEN(6) RANGE(000000 999999) +
                          MIN(1) CHOICE('000000-999999') +
                          PROMPT('Number . . . . . . . . . . . .')

Understanding Object Authorities

Introduction

To maintain the security of data and program objects, the AS/400 offers a variety of options to limit access to objects. These authorities must be set to secure objects to the level of security required. Likewise, if objects are to be shared between users, the object authorities must be relaxed correctly while maintaining object integrity. This section is designed to help users maintain correct authorities and understand the authorities on the objects that they own.

Authorities and their meanings

Object Authorities

Object authority is used to control access to an object, including the ability to see an object description, control read and write access to an object, or control an object's existence.

*OBJMGT provides the authority to specify the security of the object (grant/revoke object authority), move or rename the object, and add members to a database file.

*OBJEXIST provides the authority to control the object's existence and ownership. A user with this authority can delete, save, and transfer ownership of the object.

*OBJOPR provides the authority to look at the description of an object and to use the object as determined by the data authorities that the user has to the object.

Data Authorities

Data authority is the authority to access the data contained in an object, for example records in a database file. This includes the ability to view, update, add, or delete records.

*READ provides the authority to get the contents of an entry in an object or to run a program.

*ADD provides the authority to add entries to an object.

*UPD provides the authority to change the entries in an object.

*DLT provides the authority to remove entries from an object.

Combinations of Object and Data Authorities

These are keywords, each representing a predefined combination of object and data authorities. They reduce the time required to assign specific authorities to users.

*ALL allows the user to perform all authorized operations (object and data) on the object.

*CHANGE provides *OBJOPR authority and all data authorities.

*USE provides *OBJOPR authority and *READ data authority.

*EXCLUDE prevents the user from accessing the object even if *PUBLIC is authorized.

In addition to these, users can create customized combinations of object and data authorities.

Changing authorities with EDTOBJAUT

We use an example here to illustrate the use of some of the types of authorities discussed above. In this example, we want to allow a certain user to copy a member from the file "SRCFILE", which is stored in the library "YOURLIB". First of all, we need to allow the user to have access to the library "YOURLIB". To do that, we use the Edit Object Authority (EDTOBJAUT) command to edit the authority on "YOURLIB". (Note that your default library, i.e. the library that has the same name as your user profile, is normally owned by your security officer, so you cannot change its authorities.) Type EDTOBJAUT on a command line and press <F4>. Fill in the blanks for object, library, and object type (*LIB) and press <Enter>.

                      Edit Object Authority (EDTOBJAUT)

Type choices, press Enter.

Object . . . . . . . . . . . . . > YOURLIB      Name
  Library  . . . . . . . . . . .     *LIBL      Name, *LIBL, *CURLIB
Object type  . . . . . . . . . . > *LIB         *ALRTBL, *AUTL, *CFGL...

To see the detail screen as shown below, press <F11>. Note that the owner of "YOURLIB" has *ALL authority on the object.

                           Edit Object Authority

Object . . . . . . . :   YOURLIB        Owner  . . . . . . . :   JOHNDOE
  Library  . . . . . :     QSYS         Object type  . . . . :   *LIB

Type changes to current authorities, press Enter.
Object secured by authorization list . . . . . . . . . . . . :   *NONE

                       Object   ----Object----   ----------Data----------
User        Authority           Opr  Mgt  Exist  Read  Add  Update  Delete
JOHNDOE     *ALL                 X    X     X      X    X      X       X
*PUBLIC     *EXCLUDE             _    _     _      _    _      _       _

F3=Exit   F5=Refresh   F6=Add new users   F10=Grant with reference object
F11=Nondisplay detail   F12=Cancel   F17=Top   F18=Bottom

Press <F6> to add a user to the list of users authorized to this object. Type in the name of the user and *USE for the object authority. Press <Enter> to return to the previous screen. Notice that *USE gives the user *OBJOPR and *READ authorities on "YOURLIB". (Note: If you want to edit a specific authority, type "X" in the position relating to that authority to grant it, or a space to remove it.)

Next, we need to allow the user access to the file "SRCFILE". Use EDTOBJAUT to edit the authority on the file "SRCFILE". Type EDTOBJAUT OBJ(YOURLIB/SRCFILE) OBJTYPE(*FILE) or use the prompt to fill in the parameters. Press <F6> to add the user to the authorization list with *USE authority. This will allow them to perform various operations on "SRCFILE", including copying members from the file. To allow them to copy the entire file (i.e. "SRCFILE"), *OBJMGT must be granted. To do that, type "X" under "Mgt" in the detail screen for that user. Note that the object authority changes from *USE to USER DEF (meaning a customized authority).


Changing Authorities with GRTOBJAUT and RVKOBJAUT

To use GRTOBJAUT and RVKOBJAUT, type the command and prompt with <F4>. Fill in the library name, object name, and object type, along with the user you are granting authorities to and the respective authority being granted. At any time, press <F1> for more help.

Sending and Receiving Network Files

Users can send and receive network files to and from each other. The "Send Network File" (SNDNETF) command can be used to send a member of a physical database file (PF-DTA or PF-SRC) to another user. In the example shown below, the member "SNDMBR" of the physical database file "SNDFILE" (which is contained in the library "SNDLIB") is to be sent to the user "RCV". "MKTAS400" is the address of the AS/400 at Minnesota State University, Mankato. When the network file arrives at its destination, a message is sent to both the sender and receiver.

                          Send Network File (SNDNETF)

 Type choices, press Enter.

 File . . . . . . . . . . . . . . > SNDFILE___    Name
   Library  . . . . . . . . . . . > SNDLIB____    Name, *LIBL, *CURLIB
 User ID:
   User ID  . . . . . . . . . . . > RCV_______    Character value
   Address  . . . . . . . . . . . > MKTAS400__    Character value
                + for more values  _
 Member . . . . . . . . . . . . . > SNDMBR____    Name, *FIRST

                           Additional Parameters

 To file type . . . . . . . . . .   *FROMFILE_    *FROMFILE, *DATA
 VM/MVS class . . . . . . . . . .   A             A, B, C, D, E, F, G, H, I
 Send priority  . . . . . . . . .   *NORMAL__     *NORMAL, *HIGH

 F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel
 F13=How to use this display   F24=More keys
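The same request can also be entered in one step on a command line instead of through the prompt; a sketch of the equivalent command, using the values from the screen above:

    SNDNETF FILE(SNDLIB/SNDFILE) TOUSRID((RCV MKTAS400)) MBR(SNDMBR)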

The receiver will have to run the "Work with Network Files" (WRKNETF) command to inspect their network files.

                        Work with Network Files (WRKNETF)

 User . . . . . . . . . . . . :   RCV_______
 User ID/Address  . . . . . . :   RCV_______   MKTAS400

 Type options, press Enter.
   1=Receive network file   3=Submit job   4=Delete network file
   5=Display physical file member

                       File     --------From---------   ----Arrival----
 Opt  File      Member  Number  User ID   Address       Date      Time
 __   SNDFILE   SNDMBR  1       SENDER    MKTAS400      08/26/92  16:37

 F3=Exit   F4=Prompt   F5=Refresh   F9=Retrieve
 F11=Display type/records   F12=Cancel

Type 1 in the "Opt" blank in front of the network file to receive and press <F4> to prompt. The following screen will show up.

                         Receive Network File (RCVNETF)

 Type choices, press Enter.

 From file  . . . . . . . . . . . > 'SNDFILE'__    Character value
 To data base file  . . . . . . .   *FROMFILE__    Name, *FROMFILE
   Library  . . . . . . . . . . .   *LIBL____      Name, *LIBL, *CURLIB
 Member to be received  . . . . . > 'SNDMBR'__     Character value, *ONLY
 To member  . . . . . . . . . . .   *FROMMBR____   Name, *FROMMBR, *FIRST

 F3=Exit   F4=Prompt   F5=Refresh   F10=Additional parameters   F12=Cancel
 F13=How to use this display   F24=More keys

Fill in the "To data base file", "Library", and "To member" blanks with the appropriate receiving file, library and member names and press <Enter>. Note that the receiving file must already exist before trying to receive members.
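The receive can likewise be done in a single command-line step; a sketch, assuming the member is to be received into an existing file RCVLIB/RCVFILE (placeholder names):

    RCVNETF FROMFILE(SNDFILE) TOFILE(RCVLIB/RCVFILE) FROMMBR(SNDMBR) TOMBR(*FROMMBR)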

Question: Can anyone tell me how to edit / change the signon screen on the as400. All I need to do is to add a sentence to the bottom, company warnings about mis-use etc. Cheers Answer(s):

Usually, you can find member QDSIGNON in file QDDSSRC in library QGPL. Copy this member to a new one and edit it with SEU or SDA. After creating the new object, you have to change the subsystem description (usually QINTER) with CHGSBSD, keyword SGNDSPF.

I just did this for our AS/400 and added a "security" message and the company logo. What you need to do is get the QDSIGNON source and edit it to show/say what you want. Then, for any subsystem you want this to show up on (don't use QCTL, so you can at least get into your console), you will have to do an ENDSBS on the subsystem. Compile the DDS source into a library other than QSYS (I put mine in QGPL), then do a CHGSBSD SBSD(QINTER) SGNDSPF(QGPL/QDSIGNON) and STRSBS on the subsystems you want the new sign-on display used in. (Change SGNDSPF on whatever subsystem(s) you want to use the new sign-on screen.) If you have any problems, let me know.

The source file member is QDSIGNON in QGPL/QDDSSRC. Do not change the order of input-capable fields or remove them. If you only want users to be able to enter User ID and Password, you can protect and hide the other fields in the changed DDS. Compile the source into one of your libraries and then change the subsystem description for the subsystem (such as QINTER) by using WRKSBSD. I would advise that you do not change your controlling subsystem, just in case! You can add many lines of output text, and you can move the positions of input-capable fields providing you do not alter their sequence in the DDS.

Go to http://as400bks.rochester.ibm.com/bookmgr/home.htm and look up the book OS/400 Work Management; there you will find some information on changing QDSIGNON. The source of QDSIGNON is shipped in QGPL/QDDSSRC. I copied the source and added the following lines:

    A            MSG001        79   O 11  2MSGID(S000001 SIGNON)
    A            MSG002        79   O 12  2MSGID(S000002 SIGNON)
    A            MSG003        79   O 13  2MSGID(S000003 SIGNON)
    A            MSG004        79   O 14  2MSGID(S000004 SIGNON)
    A            MSG005        79   O 15  2MSGID(S000005 SIGNON)
    A            MSG006        79   O 16  2MSGID(S000006 SIGNON)
    A            MSG007        79   O 17  2MSGID(S000007 SIGNON)
    A            MSG008        79   O 18  2MSGID(S000008 SIGNON)

Create a message file SIGNON and add the MSGIDs with your text. When creating the sign-on display file with the Create Display File (CRTDSPF) command, specify 256 on the MAXDEV parameter. I created QDSIGNON in QGPL, and changed the SBSD QINTER to look at QGPL/QDSIGNON. I would recommend not changing the controlling subsystem. HTH
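Pulling the answers above together, the whole change can be sketched as a short command sequence (library QGPL and subsystem QINTER as in the answers; MYLIB is a placeholder, and the copy/edit step is done in SEU or SDA):

    CPYSRCF FROMFILE(QGPL/QDDSSRC) TOFILE(MYLIB/QDDSSRC) FROMMBR(QDSIGNON)
    /* ...edit the copied DDS member with SEU or SDA... */
    CRTDSPF FILE(QGPL/QDSIGNON) SRCFILE(MYLIB/QDDSSRC) SRCMBR(QDSIGNON) MAXDEV(256)
    CHGSBSD SBSD(QINTER) SGNDSPF(QGPL/QDSIGNON)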

How you make use of this powerful ILE feature can greatly impact the performance and flow of your ILE applications.

Published June 2005


An ILE activation group is a substructure of a job. It is used to allocate and handle resources used by the programs running within the activation group. Activation groups are a vital component of ILE programming. In this article, we'll explore what activation groups are and how they affect the way your ILE programs run.

Program Activation

To understand the activation group concept, you have to first understand what ILE program activation is. Any ILE program or service program must be activated before the program is run. Activation initializes resources used by the program, including static variables, open files, and SQL cursors. The activation process also handles binding of programs to associated service programs. The process of activating a program needs to occur only once within a given activation group. When a program is called, if it is not activated, program activation will occur. If, on the other hand, that program has already been activated, the existing activation is used. When a program is activated, any static variables are initialized. Once a program has been activated, these variables remain available for access within the given activation group. It's important to remember that each job running a program has its own copy of each of these static variables. This means that if two users execute the same program, the static variables within each will be unique.

Activation Group Options

When determining what activation group a program will belong to, you have several options. The default activation group is automatically created when any job starts and destroyed when that job ends. While you can create ILE programs using the default activation group, you lose much of the functionality that ILE provides. For example, an ILE RPG program compiled to use the default activation group cannot contain any subprocedures. The default activation group is used by all non-ILE (OPM) programs.
When compiling an ILE RPG program, this option is specified on the DFTACTGRP parameter. Valid values are *YES to use the default activation group and *NO to define the activation group to be used. When *NO is specified, additional parameters for the activation group and binding directory to be used are displayed. You have several options when specifying the activation group: You can specify a named activation group that you've defined for the program, you can specify the special value *NEW to create a new activation group, or you can use *CALLER to identify that the program being compiled should always run in whatever activation group the calling program is running in. With this last option, it is possible to have a true ILE application exist in the default activation group. The default value for this parameter is the QILE named activation group. Each of these has its own merits and purpose. Here's a breakdown of the life cycle of each type of activation group:

- Named Activation Group: When a program with a named activation group is called, if the activation group does not exist, it is created. It remains in existence until any and all programs using that activation group are no longer active.
- *NEW Activation Group: A program that was compiled with an activation group of *NEW creates a new activation group each time the program is called. This newly created activation group exists until the program that created it is no longer active.
- *CALLER Activation Group: When *CALLER is specified, the activation group is already in existence when the program is called and will continue to exist until the activation group is deleted based on one of the two scenarios described above.

It's important to mention that the *NEW option is not available when creating a service program using the CRTSRVPGM command. The other two options, however, are both valid on that command.
This is because the general idea behind a service program is that it will be used by many other programs. Creating a new activation group each time the service program is accessed wouldn't make much sense. It's also important to note that a program within a given activation group can remain active even after the program has ceased execution. This can be accomplished in RPG, for example, by executing a RETURN statement without first turning on *INLR. In this circumstance, the activation group containing the program will remain in existence until the job under which the activation group was created ends.
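As a sketch, the three choices map to compile commands like these (MYLIB, MYPGM, MYAPP, MYSRV, and MYMOD are placeholder names, not from the article):

    CRTBNDRPG PGM(MYLIB/MYPGM) DFTACTGRP(*NO) ACTGRP(MYAPP)    /* named group  */
    CRTBNDRPG PGM(MYLIB/MYPGM) DFTACTGRP(*NO) ACTGRP(*NEW)     /* new per call */
    CRTSRVPGM SRVPGM(MYLIB/MYSRV) MODULE(MYLIB/MYMOD) +
              EXPORT(*ALL) ACTGRP(*CALLER)                     /* caller's group */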

Activation Group Resources

As I mentioned, static variables keep their values as long as an activation group exists. In addition, open files remain open in their current state until their activation group is deleted. Static (or global) variables are either those defined within the main procedure of a program or those defined in subprocedures with the STATIC keyword. As you've already learned, these variables hold their value on subsequent calls to the same program. The source shown in Figure 1 is a simple ILE RPG program that can be used to illustrate static variables.

     *------------------------------------------------------------
     * Program:  AGR001RG
     * Description:  Sample of Global Variables
     * Compile Command:
     *   CRTBNDRPG PGM(xxx/AGR001RG) SRCFILE(xxx/QRPGLESRC)
     *     SRCMBR(AGR001RG) DFTACTGRP(*NO) ACTGRP(TEST)
     *------------------------------------------------------------
    D AGR001RG        PR
    D  Action                        1
    D AGR001RG        PI
    D  Action                        1
    D Variable1       S              5  0
     /FREE
       Select;
         When Action = 'A';
           Variable1 = Variable1 + 1;
         When Action = 'S';
           Variable1 = Variable1 - 1;
         When Action = 'X';
           *INLR = *ON;
           Return;
       EndSl;
       Dsply Variable1;
       Return;
     /END-FREE

Figure 1: This program helps illustrate the use of global variables.

You'll notice that the compile command shown creates a named activation group called TEST. When this program is called, if the activation group TEST doesn't exist, it will be created. Since this program contains only a single procedure, all of this program's variables are global. The ACTION parameter defines the action to be performed by the program. A value of 'A' tells the program to add 1 to Variable1. An action of 'S' indicates that the program should subtract 1 from Variable1. When 'X' is specified for the action, it instructs the program to turn on *INLR, which causes the global variables to be reset. If the program is called again with one of the other action values after it is called with 'X', the value of Variable1 will be re-initialized.
Our TEST activation group, however, will remain in existence until it is reclaimed using the RCLACTGRP command. This can be seen by looking at the Display Activation Group screen, option 18 from the WRKJOB menu, as shown in Figure 2.


Figure 2: Any activation groups currently in existence are displayed on this screen.

Similarly, opened files are kept open as long as the program is active and as long as the activation group remains in existence. The program shown in Figure 3 is an example of a program that accesses a database resource.

     *------------------------------------------------------------
     * Program:  AGR002RG
     * Description:  Sample for Activation Groups
     * Compile Command:
     *   CRTBNDRPG PGM(xxx/AGR002RG) SRCFILE(xxx/QRPGLESRC)
     *     SRCMBR(AGR002RG) DFTACTGRP(*NO) ACTGRP(TEST)
     *------------------------------------------------------------
    FCUSTOMERS IF   E             DISK    USROPN
     /FREE
       If Not %Open(CUSTOMERS);
         Open CUSTOMERS;
       EndIf;
       Read CUSTOMERS;
       If %EOF;
         *INLR = *ON;
         Return;
       EndIf;
       Dsply CUSNAME;
       Return;
     /END-FREE

Figure 3: This program illustrates file use within an activation group.

Once again, this application is compiled to use the TEST named activation group. Each time the program is called, the program reads a record from the file CUSTOMERS and displays the value of the field CUSNAME. Once an end-of-file condition has been reached, the program turns on *INLR. In either of these examples, if the Reclaim Activation Group (RCLACTGRP) command is issued, any opened file pointers and static/global variables will be reset. If, for example, we called AGR002RG and then RCLACTGRP was executed, the CUSTOMERS table would be closed. At this point, the activation group no longer appears in the Display Activation Groups screen. If AGR002RG is called again, the

program will again create the activation group and will start with the first record in the CUSTOMERS table. The same would be true of the global variable used in AGR001RG. When the activation group is reclaimed, the global variable will be reset, and a subsequent call to the program will recreate the activation group with newly initialized global variable values. Similarly, the Reclaim Resources (RCLRSC) command can be used for programs running in the default activation group. These can be either OPM programs or ILE programs compiled with the option DFTACTGRP(*YES). The two parameters on the RCLRSC command are used to define the call level at which the cleanup should occur and to indicate whether an abnormal close notification should be sent to open communication files. Below is the syntax for the RCLRSC command.
RCLRSC LVL(*/*CALLER) OPTION(*NORMAL/*ABNORMAL)

The call level (LVL) parameter has an asterisk (*) option to identify that open resources at the current level or greater should be reclaimed. *CALLER can be specified to reclaim all resources at the level of the program that called the program issuing the RCLRSC command. Similarly, the RCLACTGRP command accepts two parameters; however, the first parameter on this command is used to identify the activation group to be reclaimed. Below is the syntax for the RCLACTGRP command.
RCLACTGRP ACTGRP(*ELIGIBLE/Act Grp Name) OPTION(*NORMAL/*ABNORMAL)

The ACTGRP parameter is used to specify the name of the activation group to be reclaimed. The optional special value *ELIGIBLE can be specified to reclaim all eligible activation groups (that is, activation groups that are no longer in use). The OPTION parameter on this command not only handles sending an abnormal close notification to open communication files, but also determines whether to commit or roll back pending changes for an activation group level commitment definition.
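For the TEST activation group used in the examples above, the reclaim would look like this:

    RCLACTGRP ACTGRP(TEST) OPTION(*NORMAL)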
Calling a C program from an RPG program

Jim, Here's something I put in TechTalk a while back. This is RPG calling C's log function (natural logarithm). The BNDDIR in the compilation instructions makes the linker find the function. If you're writing your own functions in C, you won't have to use this binding directory. I have more examples, but I will have to dig them up; I'm fighting some deadlines. If anybody else has an example, feel free to jump in. HTH.

     *===============================================================
     * To compile:
     *
     *   CRTBNDRPG PGM(XXX/YYY) SRCFILE(ZZZ/QRPGLESRC) +
     *             DFTACTGRP(*NO) BNDDIR(QC2LE)
     *
     *===============================================================
    D Log             pr             8f   extproc('log')
    D  Arg                           8f   value
    D
    D X               s              8f
    C                   eval      x = log(1000.0)
    C                   eval      *inlr = *on

Secrets of IPLs Exposed


BRIAN OGARA - 01:01am May 1, 1999 PST

Performing an IPL on the AS/400 is one of those necessary tasks we all must tackle. Find out what goes on during an IPL and pick up some clues for getting the most out of it for your machine. As you sit around waiting for your AS/400 to finish an IPL, have you ever wondered what really happens inside your computer? Most of us have seen the system reference codes (SRC) on the front panel of the AS/400 change as the IPL is taking place, but exactly what do those codes tell us? Wouldn't it be nice to

know whether the IPL is almost complete? That way you'd know if you could leave and do something more interesting or whether you should settle in with your bag of chips and Jolt cola. This article gives you some insight into what happens inside your AS/400 during an IPL and introduces you to the Change IPL Attributes (CHGIPLA) command, which helps customize the IPL to meet your shop's needs.

What Does an IPL Do?


The easiest way to explain the IPL process is to break it into groups of related tasks. In brief, an IPL does the following tasks:

- Executes power-on self-tests and basic assurance tests of the input/output processors (IOPs)
- Runs diagnostics on the service processor and initializes the licensed internal code (LIC)
- Initializes the system with LIC
- Displays the Attended IPL menu or Install System menu on the system console
- Executes storage management recovery, journal synchronization, and IPL cleanup
- Loads OS/400

Now, let's look at each main process in detail and list the SRC codes displayed for each. I gleaned the information in this article from my AS/400 Model 50S; the SRC codes could be different depending on which AS/400 model you use. (Any Xs in the codes indicate that multiple SRCs appear during that particular task. The preliminary procedure in an IPL merely verifies that the system unit and control panel power supplies are operational. The IPL performs these tests before any SRCs are displayed.)

Service Processor Reference Codes


The first main function performed after the power supplies are tested is service processor testing, represented by SRC codes C1XX BXXX. The service processor card contains a set of instructions that constitute the logic required to start the system processor and handle the error messages that may occur during initialization. Here are the SRC codes that fall into this category:

C100 B1D2: Basic assurance read-only storage (ROS) testing on the control panel interface. The system first tests the control panel, and if the control panel is not functioning correctly, the IPL cannot continue and terminates. Because this testing requires little time, you may not even see this SRC.

C10X B111: Basic assurance ROS testing on Multifunction Input/Output Processor (MFIOP) control storage. As its name suggests, the MFIOP is a multifunction card in the system. Devices that can be attached to the MFIOP vary slightly, depending on the model of AS/400 used, but generally, the MFIOP supports internal tape drives, internal disk units, and the primary workstation controller. During this step, only the control storage portion of the MFIOP is tested.

C100 B1E9: Basic assurance ROS testing on service processor registers. Registers are storage areas in which data and addresses are held temporarily while being used by a processor.

C1XX B18X: Basic assurance testing on the MFIOP. Those functions on the MFIOP card other than control storage are now tested. Depending on the size and type of AS/400 you have, these tests require 1 to 5 minutes.

After the service processor has been tested, the LIC must be loaded onto it. As the LIC loads, SRCs C1XX XXXX are displayed.

C1XX 1030: Loading of the service processor LIC from the load source device. A partial IPL is performed on the system bus, and the load source IOP is initialized.
Basic assurance tests are performed for a second time on the I/O devices, and the LIC is loaded onto the service processor using the load source device, an internal disk drive that contains all of the LIC and operating system.


System Processor Reference Codes


With the service processor tests completed and the service processor loaded, diagnostics are now run on the system processor. These diagnostics are represented by SRCs C3XX 41XX:

C320 4135 through C32A 4135, C320 4136: Array Built-in Self-test (ABIST) on the system processor. These tests may differ based on the type of processor installed. Tests performed on single processors differ from those performed on multiple processors.

C320 4190 through C32A 4190: Main storage diagnostics (MSD). (Where information is lean, I was unable to determine exactly what type of diagnostics are performed. IBM does not share that information with the general public. In addition, for any unidentified acronym, I simply listed whatever information was in the IBM manual.)

This stage of the IPL varies, depending on the processor type and type of IPL performed. On average, this stage requires 2 to 10 minutes. After the system processor diagnostics have been completed, you may see C100 2060 (a tape-read command issued to the alternate IPL tape device) and C100 2090 (acknowledgement from the alternate IPL tape device).

System Initialization
The hardware has been tested, and C100 2034 is displayed. At this point, IPL control is passed to the system processor, which continues the IPL process. The next stage of the process is the testing and initialization of the system configuration, represented by SRCs C6XX 4XXX:

C600 4001: Start static paging.
C600 4002: Start limited paging/call LID manager.
C600 4003: Initialize IPL termination data area/set up node address communication area (NACA) pointer.
C600 4004: Check and update MSD subject identifier (SID). The SID is a string that identifies a user or set of users in the distributed computing environment (DCE), a set of services that support the development, use, and maintenance of distributed applications.
C600 4005: Initialize event manager.
C600 4006: IPL all buses. The AS/400 supports different bus structures, two of which are Peripheral Component Interconnect (PCI) and System Products Division (SPD). PCI is growing more popular because PCI cards are less costly than SPD. However, because not all devices can be attached with PCI cards, SPD cards still exist in high-end RISC models. During this step, all buses are initialized for all I/O devices.
C600 4007: Start error log ID. An error log ID is created to log hardware and software errors that may occur.
C600 4008: Initialize I/O service, and C600 4009: Initialize I/O machine. These two processes prepare the I/O devices to be used.
C600 4010: Initialize interactive device exerciser (IDE).
C600 4011: Initialize remote services.
C600 4012: Initialize RMAC data values.
C600 4013: Initialize context management.
C600 4014: Initialize RM seize lock.
C600 4015: Initialize MISR.
C600 4016: Set time of day.
C600 4017: Initialize RM process management.
C600 4018: Initialize error log. The error log is prepared to receive log entries.
C600 4019: Reinitialize the service processor. The service processor is used to start the system processor. This step resets the service processor.

C600 4020: Initialize machine services.
C600 4021: Initialize performance data collector. The performance data collector is prepared to gather information about the system regarding response times and throughputs. An example of such a job is job name QPFRCOL running in the QCTL subsystem.
C600 4022: Initialize event manager.
C600 4023: Create Machine Interface (MI) boundary manager tasks. The Technology Independent Machine Interface (TIMI) is a logical rather than physical interface to the system hardware. The MI architecture provides a complete set of APIs for OS/400 and all application programs. The boundary manager provides the method of communication between the hardware and system software. Frank Soltis' Inside the AS/400 contains a detailed explanation of the MI. (See the References section at the end of this article.)
C600 4024: Disable Continuously Powered Main Storage (CPM). This step is a little confusing. CPM is available on certain AS/400 models to supply main storage power for a short time to allow an orderly system shutdown in the event of power failure. CPM is disabled during this step and is made available at each IPL. CPM is enabled only when utility power is interrupted. It may be necessary to disable CPM to make specific repairs to the system.
C600 4025: Initialize battery test. If the system has an internal battery, it is tested at this point. If the test fails, the system remains operational, but the system attention light may be lit and an SRC code may be displayed while the system is running.
C600 4026: Hardware card checkout.
C600 4028: Start dedicated service tools (DST). During an attended IPL, the DST menu is displayed at this point, allowing DST options to be used. Some of the options that might be used at this time are to start or suspend mirroring, add or remove disk units from the auxiliary storage pool (ASP), start or stop device parity protection or RAID, and other similar tasks where the system must be in a dedicated state.
C600 4030: Free static storage.
C600 4031: Destroy IPL task. The system performs a cleanup, removing unnecessary IPL job steps from the system.
C600 4205: Synchronization of mirrored data. The system checks the integrity of data on mirrored pairs of disk units. If the last power-down was normal, this operation can take just a minute or so per each set of drives. However, if the last power-down was abnormal or you opted to start or stop mirroring from the DST menu, this step can take several hours, depending on the storage size of the drives.
C600 4056: Journal recovery. If the system ends abnormally, database files in the journal are automatically recovered during this procedure. The database files are updated to reflect all activity recorded in the journal receivers. If the system ends abnormally, this may be a lengthy procedure.
C600 4065: Start operating system. This function starts the operating system, which is loaded onto the AS/400. OS/400 is the operating system of choice, but, for the Advanced 36, SSP is also part of the operating system.

Loading the Operating System


At this point, LIC initialization is complete, and the operating system has started. All of the hardware has been tested and verified. C9XX 2XXX are the tasks required to start the operating system:

C900 2830: Resolve system objects. The first step in starting the operating system is to locate all of the system objects needed to start the operating system. In the system exists a resolve instruction that uses the name, type, and authority being requested from the unresolved pointer. The libraries on the library list are then searched until the object is found. Once located, the object is said to be resolved.
C900 28C5: Initialize system objects. After all objects required to load the operating system have been located, they can then be used or initialized.

C900 2910: Start system log. The system starts logging messages to the log file. If you display the QHST log after the IPL is complete, you can view messages logged from this point forward.
C900 2920: Library and object information repository (OIR) cleanup. In SystemView System Manager/400, OIR consists of information about each object that identifies its associated product, such as release level, option, and load identifier.
C900 2925: Verify POSIX root directories. POSIX is a collection of international standards for UNIX-style operating system interfaces. An example of where POSIX standards are used is the AS/400 Integrated File System (IFS) announced for V3R1.
C900 2930: Database cross-reference.
C900 2960: Sign-on processing. The system prepares for user access.
C900 2965: Software Management Services (SMS) initialization. SMS provides the user with a consistent distribution, installation, and service strategy. It allows you to save and install user-written application software as though it were licensed.
C900 2A85: Load POSIX SAG.
C900 2967: Applying PTFs. When PTFs are loaded onto the system, some of them are applied immediately, while others affect hardware and system software and require an IPL to be applied.
C900 2968: IPL options.
C900 2970: Database Recovery, Part 1: Journal commit. If the last power-down was normal, this step should be fairly quick. If the last power-down was abnormal, the system recovers what it can from the journal receivers and automatically performs a rollback if a commit was not processed for files that were under commitment control. This option also rebuilds access paths if the system determines that logical files were open when the abnormal power-down occurred. This step can be time-consuming.
C900 29B0: Spool initialization.
C900 29C0: Write control block table. A control block is a storage area used by a program to hold control information. In this instance, the system sets up a table for system jobs to use.
C900 2A90: Start system jobs. Some of the jobs that the system starts at this time are in the QSYSWRK and QALERT subsystems.
C900 2AA0: Damage notification. Every system object contains header information pertaining to the object. The first header is called the segment header, and the second header is the Encapsulated Program Architecture (EPA) header. The EPA header contains an attribute byte that defines the object as permanent or temporary and determines whether or not the object is suspended or damaged. There are two types of object damage: hard or soft. An object with hard damage is not usable; it can only be removed. Soft damage indicates that some data can still be extracted from the object. One source of damage is bad sectors on a disk drive. If storage management cannot read these sectors, it uses the EPA header to flag the object as damaged.
C900 2AA5: IFS directory recovery. The same function performed on the DB2/400 database is performed for the IFS. If an abnormal power-down occurs, this step may be extended.
C900 2AC0: DLO recovery. The system recovers objects that may have been in use during an abnormal power-down or system crash. Folders are examples of DLOs.
C900 2B10: Establish event monitors. An event is an activity during a machine operation that may be of interest to a user. An example of an event is an I/O operation, such as reading a record from a disk initiated by a read operation from an application program. The mechanism used to report completion of the I/O process is an event because it is caused by an action outside the application program currently executing. The actual I/O processing takes place at the MI level. System arbiter jobs are an example of event monitors. The system arbiter, identified by job name QSYSARB and QSYSARB2 through QSYSARB5, is the central and highest-priority job within the operating system. Each system arbiter responds to


systemwide events that must be handled immediately and those that can be handled more efficiently by a single job rather than multiple jobs.
C900 2B30: Start QLUS job. The logical unit services job, identified by job name QLUS, supports communication devices. The system arbiter starts QLUS even if no communication devices are configured on the system. QLUS is the event handler for logical unit (communication) devices and also acts as their manager.
C900 2B40: Device configuration.
C900 2C40: Work control block table cleanup. At this point, the system performs a cleanup on the control block table written in step C900 29C0.

Why Is My System Slow?


That was a high-level look at just about every SRC code you're likely to see during an IPL. When you see 01 B N displayed on your AS/400, you may think the IPL is finished. Well, not quite. Although the operating system initialization is complete when the sign-on screen appears on the console, internal procedures that are part of the overall IPL process are still happening. If you log on during this stage, you may discover that your response time is slower than normal. This slowdown happens because the last IPL event, running the startup program identified by system value QSTRUPPGM, occurs at this point. The startup program determines which subsystems should be started as well as any other functions you wish to run. The runtime for this program depends on the number of subsystems started and the number of devices under each subsystem that must be activated.

Use CHGIPLA to Customize Your IPL


To make IPL operation faster, you can specify the level of diagnostic testing. Starting with V4R1, a change was made to the Power Down System (PWRDWNSYS) command. There are three restart types that may now be specified:

*IPLA: The value specified on CHGIPLA is used.
*SYS: The operating system is restarted, and the hardware is restarted only if a PTF that requires a hardware restart is to be applied. In other words, the I/O processors are not IPLed unless a patch has been made to the software running on these processors.
*FULL: All portions of the system are restarted, including the hardware.

CHGIPLA, shown in Figure 1, has several options that you can use to reduce IPL time even further:

Restart Type is the same as on PWRDWNSYS. You can specify *SYS or *FULL. The initial value of the command is *SYS.

Hardware diagnostics specifies whether certain hardware diagnostics should be performed during the IPL. The list of diagnostics is predetermined by the system and cannot be modified by the user. There are two options for these diagnostics: *MIN, whereby the system performs a minimum set of critical hardware diagnostics, and *ALL, whereby the system performs a complete set of hardware diagnostics (the shipped value for this attribute is *MIN).

Compress Job Tables specifies when job tables should be compressed to remove unused entries. Excessive unused entries can result in poor performance during IPL steps that process the table and during runtime functions that work with jobs.

Check Job Tables specifies when a damage check on job tables should be performed. The possible values are: *ABNORMAL, whereby job tables are checked during abnormal IPLs only (this is the recommended setting); *ALL, whereby job tables are checked during all IPLs; and *SYNC, whereby the job table checks are performed synchronously during all IPLs.

The system maintains a product directory of all installed licensed programs. Normally, it is not necessary to rebuild this directory
60

after initial installation of the system; it is rebuilt automatically when the operating system is installed. The possible values are: *NONE indicates the product directory is not fully rebuilt. *NORMALrebuilds the product directory during normal IPLs only. *ABNORMALrebuilds the product directory after an abnormal IPL. *ALLrebuilds the product directory after all IPLs.

Reducing Required IPL Time


Another method for reducing IPL time is to set the automatic performance adjustment system value to 0 (no adjustment) or 3 (automatic adjustment). A setting of 1 or 2 performs adjustments at IPL time. When you set your system to make adjustments at IPL time, performance settings are calculated based on the number of devices and network interfaces and the total amount of main storage. If your system is stable, these calculations have the same result each time and adjustments are not made. To reduce the amount of time required to rebuild access paths in the event of an abnormal power-down, logical files may be kept in a journal. Although this article may not make the rather dull process of an IPL seem interesting, I hope that I have provided some insight into the process and explained the new IPL options for Version 4.

References
AS/400 Basic System Operation, Administration, and Problem Handling (SC41-5206-01, CD-ROM QB3AGO00)
AS/400 Master Glossary (SC41-5006-01, CD-ROM QB3AIG00)
AS/400 Service Functions (SY44-5902-01)
Soltis, Frank G. Inside the AS/400, 2nd Edition. Loveland, Colorado: 29th Street Press, 1997

Figure 1: CHGIPLA offers you several options for reducing IPL time.


Question: Hi, I have a question on CPYTOIMPF converting to CSV. For the numeric fields, the leading zeros are removed (which is fine), but the trailing zeros are shown as blanks, and the comma separator is shown at the length of the numeric field. Is there a way to suppress the trailing zeros/blanks and show just the numeric data? Thanks in advance. Kris

Answer(s): There is a very simple solution. For example, look at this SQL, which produces a wide range of SQL column types:

select current date today
     , current time now
     , current timestamp a_timestamp
     , user me
     , decimal(days(current date), 7, 0) a_decimal_data
     , zoned(days(current date), 7, 0) a_zoned_data
     , double(days(current date) ** 12) float8_data
     , bigint(days(current date) ** 3) integer8_data
  from qsys2/qsqptabl

Notice: QSQPTABL is a one-row table provided with DB2 that (among other things) permits obtaining data from any SQL register within a one-row SELECT.

Now look at this SQL: the previous result (a table with many columns) is now a CSV flat file with a TAB separator:

select char(current date, iso)
       concat x'05' concat char(current time, iso)
       concat x'05' concat char(current timestamp)
       concat x'05' concat user
       concat x'05' concat char(decimal(days(current date), 7, 0))
       concat x'05' concat char(zoned(days(current date), 7, 0))
       concat x'05' concat char(double(days(current date) ** 12))
       concat x'05' concat char(bigint(days(current date) ** 3))
  from qsys2/qsqptabl

How to use it in a CLP? Run the SQL from a QMQRY with OUTPUT(*FILE), then copy the resulting file to the IFS.
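The effect of the char()/concat trick can be sketched outside SQL as well. The snippet below is an illustration only (plain Python, not IBM i code, and the helper name is invented): each numeric column is rendered as trimmed text, so the separator follows the last digit instead of a blank-padded field width.

```python
# Illustration only: render numeric values as trimmed text so the
# delimiter follows the data, not the declared field width.
from decimal import Decimal

def to_csv_row(values, sep=","):
    """Render each value as text with no fixed-width padding."""
    parts = []
    for v in values:
        parts.append(str(v).strip())
    return sep.join(parts)

row = to_csv_row([Decimal("123.40"), 7, "ABC"])  # "123.40,7,ABC"
```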

FTP ZIP Files to QSYS and unzip

If you cannot copy your IBM file to the IFS with Windows Explorer, you can use FTP (such as you have done), but change the name format:

FTP ...
...
namefmt 1
put thepcfile.zip /ifsdir/filename.zip

Here is the AS/400 help on NAMEFMT:

NAMEFMT (Select File Naming Format)
To select which file naming format to use on the local system, and the remote system if it is an AS/400 system, use the NAMEFMT subcommand as follows:

NAmefmt [ 0 | 1 ]

You can abbreviate subcommands to the most unique series of characters.

The current setting is displayed when no parameter is specified.

0  A naming format only for the library file system database files. This format was available prior to Version 3 Release 1. The general format is:
   libname/filename.mbrname

1  A naming format for all file systems supported by FTP, including the hierarchical file systems, the integrated file system, and the library file system. This naming format must be used to work with the hierarchical file systems and the integrated file system. This naming format is available at Version 3 Release 1. Library file system files in this naming format are:
   /QSYS.LIB/libname.LIB/filename.FILE/mbrname.MBR
   The document library services, an HFS file system, has the following format:
   /QDLS/libname/filename.ext
   For optical, the format is:
   /QOPT/volname/dirname/filename.ext

Note: The name format can only be changed to 0 when the working directory is a library file system library.

For related information, see the following:
o Using FTP
o Changing from one server file system to another
o Integrated File System Introduction information in the iSeries
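To make the two formats concrete, here is a small sketch (plain Python, purely illustrative; the helper name is invented and this is not part of any IBM library) that maps a NAMEFMT 0 style name to the NAMEFMT 1 library file system path shown in the help text above.

```python
# Hypothetical helper: libname/filename.mbrname  ->  NAMEFMT 1 path.
def namefmt0_to_namefmt1(name0: str) -> str:
    lib, file_part = name0.split("/")
    if "." in file_part:
        fname, mbr = file_part.split(".")
        return "/QSYS.LIB/{}.LIB/{}.FILE/{}.MBR".format(lib, fname, mbr)
    # No member given: path stops at the file object.
    return "/QSYS.LIB/{}.LIB/{}.FILE".format(lib, file_part)

path = namefmt0_to_namefmt1("QGPL/QCLSRC.MYPGM")
# "/QSYS.LIB/QGPL.LIB/QCLSRC.FILE/MYPGM.MBR"
```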

The Integrated File System


Recent e-mails from several faithful readers have reminded me that I should pay more attention to the fundamentals than I do. For that reason, I've delayed for one month the article that I was going to run this time and have instead written about the Integrated File System from the ground up. The IFS is too important to be ignored. Next month I will cover a more advanced topic that I hope many of you will find practical and of interest. One of these recent e-mails was from a reader who thought that IFS was just a fancy name for the folders system in which OfficeVision documents were stored. Another devoted reader sent me a Qshell script that he could not get to run properly. Since I was as busy as Wal-Mart is the day after Thanksgiving, I passed the script along to my Qshell authority, Fred Kulack of IBM, who found the problem. The faithful reader was running the script in the folders system. Fred changed the script to run in the root system of the IFS, and the script ran perfectly. I am not sure why the reader thought he had to store Qshell scripts in the folders system. Another reader was blunt: "[I'm] still waiting to see an article on the IFS." These are not the only e-mails I've received about the IFS, but they are representative.

In the Beginning

It used to be that you could store any type of data in a computer system, as long as it fit on 80-column paper cards. Tape files and disk files came later, but regardless of the media, in the world of data processing, everything had to fit into fields, within records, within files. When the good people at IBM added office functions, like word processing, to the midrange systems, they were well aware that documents (letters, books, contracts, and the like) did not fit into the data-processing mold. For this reason, IBM added a file system, based on folders and documents, to the midrange systems. You can still find folders on your i5-iSeries-AS/400 machine. Type GO CMDFLR at a command line and press Enter, and you'll find plenty of commands for working with folders and documents. But the addition of the folders and documents file system to the library-based object system wasn't enough for all storage needs. The folks at IBM decided to fix the problem once and for all. Or, at least, I assume that's what they decided, because that's what they did. In V3R1, IBM introduced a file system capable of holding any kind of data that might ever be thrown at it. Here we are at V5R3, and some folks still don't know the Integrated File System exists.

A Framework

Maybe the Integrated File System should be called the Integrated File Systems, because the IFS is not one file system, but a collection of file systems. The IFS includes the library-based system of strongly typed objects and the folders system, as well as other file systems. Its design allows for other file systems to be added in the future. In fact, you can even design your own file system for inclusion in the IFS. The primary file system, the one that predominates over all others, is known as the root file system. It is similar to the file system on Windows-based PCs. There is a main directory, similar to the PC hard drive's root directory. Within this directory, you may define other directories or files. Each of the other file systems is defined as a subdirectory within the root. This is where the IFS differs sharply from the PC's hard drive. On a PC, each subdirectory adheres to the same rules as the root directory. In the IFS, on the other hand, each file system under the root adheres to its own rules.

To access the root file system, run the following CL commands.


cd /
wrklnk

The system presents the Work with Object Links panel, which displays the contents of a directory. Look for QDLS. That's the folders file system, the same you access with the Work with Folders (WRKFLR) command. Next look for QSYS.LIB. That's the library system. If you type a 5 in the option blank to the left of QSYS.LIB and press Enter, you'll see the same objects you would see if you viewed the QSYS library from the Work with Objects (WRKOBJ) and Work with Objects using PDM (WRKOBJPDM) panels. I'm sorry to burst your bubble, but the system of objects organized into libraries is not the main file system on your AS/400, iSeries, i5, or whatever you call your machine. The root system is the main thing.

The File Systems

The IFS contains different file systems, depending on the release of OS/400 or i5/OS. The V5R3 Information Center lists 11 file systems:

- "root" (/)
- Open systems file system (QOpenSys)
- User-defined file system (UDFS)
- Library file system (QSYS.LIB)
- Independent ASP QSYS.LIB
- Document library services file system (QDLS)
- Optical file system (QOPT)
- NetWare file system (QNetWare)
- iSeries NetClient file system (QNTC)
- OS/400 File Server file system (QFileSvr.400)
- Network file system (NFS)

I've already told you that the folders system (QDLS) and the library file system (QSYS.LIB) are found under the root system. Notice that the CD drive (QOPT) is also part of the file system. As I said, each file system has its own characteristics. For example, do you see QOpenSys in the list? That's a Unix-like file system, and as such, it has case-sensitive file names. Yep, that's right. Cat, cat, and CAT are three different animals to Unix and QOpenSys.

Interfaces to the IFS

You may access the IFS through green-screen or client-based user interfaces. Let's look at green screens first, since you can always depend on them, even when the network's down. The main access point to the IFS is CL's Work with Object Links (WRKLNK) command, which I introduced above. If you're old enough to remember MS-DOS, you'll see WRKLNK is similar to the DIR command. WRKLNK shows you all the files and directories within a directory. Since WRKLNK is based on the work-with standard, you can key option numbers to interact with the directory contents. There are plenty of other CL commands designed to be used with the IFS. Rather than list them all here, please let me refer you to some menus: FILESYS, FSDIR, FSOBJ, FSSEC, CMDLNK, and CMDFILE. You can access these with the GO command.
GO FSDIR

You will find most, if not all, IFS-related commands on these menus. Qshell provides a second and, to my taste, preferable interface to the IFS. Qshell is a Unix-like command shell that runs on i5, iSeries, and AS/400 systems. Qshell is suited to life in a directory-based world and includes many commands for manipulating IFS files. Here are some of the ones I use most often:

- ls (list directory contents)
- mv (move or rename a file or directory)
- rm (remove directory entries)
- cd (change directory)

I have written several articles and numerous tips about Qshell for this newsletter; use this site's search link to find them. If you want to know even more, buy my book. You will not only find a lot of information about Qshell but also be feeding me, my wife, and five kids.

Client Interfaces


There are many ways to access the IFS from other systems. I will list some of them and wait for e-mail asking me why I didn't mention others. C'est la vie.

Windows Explorer. Create a file share and drag and drop to your heart's content.
FTP. Use the namefmt command with an argument of 1 to change to directory-based naming. Then use cd to access IFS directories.
WDSC. Working with the IFS is easy from WDSC. You can use LPEX and CODE/400 to edit files. You can even drag and drop between file systems.
iSeries Navigator. This is not my favorite, but it works.

Why IFS?

Here are a few reasons I can think of to use the IFS. There are plenty more, because people have many needs. Use the IFS for reading and writing stream files. Stream files consist of variable-length records delimited by an end-of-line character, and are common on PC and Unix systems. Use the IFS to store non-database objects from other systems. Graphics files, spreadsheets, and PDF files come to mind. Use the IFS for storing Java source code. As far as I know, the Java compiler won't read source code from a source physical file, but even if it will, the IFS is more to its tastes. Because of its case-sensitive nature, QOpenSys might be a good choice, but I have always used root. Use the IFS for Qshell scripts. The root system is a good choice here. Qshell scripts will run from source physical files, but they run faster from the root system. As I pointed out in the introduction, Qshell scripts don't run well at all from QDLS. Speaking of source code, did you know that RPG's /COPY and /INCLUDE compiler directives can read from the IFS? So far I have not found any reason to use this feature.

Let's Get Started

Here's a little exercise to help you get started using the IFS.

Step 1. Run the following commands from a CL command line.
cd /
wrklnk

Step 2. You are looking at the root directory. Look for a directory called /home. If there's no such directory, create one.
md /home

Step 3. Press F3 to leave the Work with Object Links panel. Now it's time to create your own directory under the home directory. If there is a standard in your shop for private directory names, follow it. Otherwise, I suggest you use your first initial and last name. Here's how Joe Smith would create his directory.
md '/home/jsmith'

Step 4. Create a CSV file in your directory from a database file of your choice.
CPYTOIMPF FROMFILE(QIWS/QCUSTCDT) TOSTMF('/home/jsmith/qcustcdt.csv') STMFCODPAG(*PCASCII) RCDDLM(*CRLF)

Step 5. Use your transfer method of choice to copy the file from the IFS to your PC. I suggest FTP. Here's part of a session from a PC-based FTP client.
quote site namefmt 1
cd /home/jsmith
lcd /temp
get qcustcdt.csv

Step 6. Open the PC file with a spreadsheet.

APIs BY EXAMPLE: A MACHINE INTERFACE (MI) COMPILER

In the last issue, I reported on a few sources for MI documentation. Several readers wrote to ask if I had an MI compiler that I could send them. I do indeed have an MI compiler, and I thought I'd share it with all of you. Three items make up the compiler. They are:

* Command CrtMIPgm (Create MI Program). This is the command used to create an MI program.
* CLLE program CrtMIPgmC. This is the command processing program for command CrtMIPgm. This program calls RPG program CrtMIPgmR to complete the process.

* RPG IV program CrtMIPgmR. This program is called by command processing program CrtMIPgmC and is used to set up and make the call to the Create Program (QPRCrtPg) API that creates the MI program.

Rather than spend pages fully documenting API QPRCrtPg, I'll hit the highlights. For the full documentation, visit http://publib.boulder.ibm.com/iseries/v5r2/ic2924/info/apis/qprcrtpg.htm. Let's begin with a look at the API's input parameter requirements.

* Intermediate representation of the program. This is a string (e.g., field, array) containing the MI source statements.
* Length of the intermediate representation of the program. This is the length of all the MI source statements combined.
* Qualified program name. This is the qualified name to use for the compiled program. Special value *CURLIB is valid for the library portion.
* Program text. This short descriptive text is used as the program object's text attribute.
* Qualified source file name. This field is for informational purposes. The API doesn't use the values in this field to determine where to find the source used in creating the program. Rather, the API places the values in this field in the program object's service attributes that identify the source file used in creating the program (i.e., the source file information you see when you use DspObjD to display the program's object description). You can specify anything you wish for this field, valid or not! You are responsible for ensuring the integrity of this information. (By the way, special values such as *LIBL for the library are not allowed.) Special value *NONE for the source file name is supported; when used, the API doesn't place source file information in the service attributes of the object description.
* Source file member. Like the qualified source file name, this field is for informational purposes, and you are responsible for its integrity. The API places this value in the program object's service attributes that identify the source member used in creating the program. This value must be blank if you specify *NONE for the qualified source file name.
* Source file last changed date and time. This is another field used for informational purposes. As with the previous two source information fields, you're responsible for the integrity of this field. This field is of the format CYYMMDDHHMMSS and is placed in the program object's service attributes that identify the source member's last changed date and time. This value must be blank if you specify *NONE for the qualified source file name.
* Qualified printer file name. This is the qualified name of the printer file used for listings generated by the API. Special values *CURLIB and *LIBL are valid for the library portion. If you specify *NOLIST in the option template parameter (discussed shortly), this parameter is ignored.
* Starting page number. This field determines the page number generated listings are to begin with. The default value is 1. If you specify *NOLIST in the option template parameter, this parameter is ignored.
* Public authority. This field defines the authority to give users that do not have specific private authorities, and where the user's group has no specific authority, to the object. Valid values include the special values *CHANGE, *ALL, *USE, and *EXCLUDE, or you can specify an authorization list by name.
* Option template. This is an array of options used in creating the program. These options are optional (see how that works!), and you can specify up to 16 of them. These options control things such as whether to generate an executable program, whether to generate listings, and whether to optimize the program. There is considerable information on these options, so I refer you to the previously mentioned online documentation for details.
* Number of option template entries. This is simply a count of the number of options specified in the option template parameter.

The API also has an optional input/output parameter: the standard API error structure.

Let's begin our look at the compiler with command CrtMIPgm, as follows. (The listing below is reconstructed from the original article's garbled extraction; spacing is approximate.)

/* =============================================================== */
/* = Command....... CrtMIPgm                                     = */
/* = Description... Create Machine Interface Program             = */
/* = CPP........... CrtMIPgmC                                    = */
/* = Source type... Cmd                                          = */
/* = Compile....... CrtCmd Cmd(YourLib/CrtMIPgm)                 = */
/* =                       Pgm(YourLib/CrtMIPgmC)                = */
/* =                       PrdLib(YourLib)                       = */
/* =============================================================== */

           Cmd      Prompt( 'Create MI Program' )

           Parm     Kwd( Pgm ) Type( QPgm ) Min( 1 )                +
                    Prompt( 'Program' )

           Parm     Kwd( SrcFile ) Type( QSrcFile )                 +
                    Prompt( 'Source file' )

           Parm     Kwd( SrcMbr ) Type( *Name ) Len( 10 )           +
                    Dft( *Pgm ) SpcVal( ( *Pgm ) ) Expr( *Yes )     +
                    Prompt( 'Source member' )

           Parm     Kwd( Text ) Type( *Char ) Len( 50 )             +
                    Dft( *SrcMbrTxt ) SpcVal( ( *SrcMbrTxt ) )      +
                    Expr( *Yes ) Prompt( 'Text ''description''' )

           Parm     Kwd( UsrPrf ) Type( *Name ) Len( 10 )           +
                    Rstd( *Yes ) Dft( *User )                       +
                    SpcVal( ( *User ) ( *Adopt ) ( *Owner ) )       +
                    Expr( *Yes ) PmtCtl( *PmtRqs )                  +
                    Prompt( 'User profile' )

           Parm     Kwd( Replace ) Type( *Char ) Len( 10 )          +
                    Rstd( *Yes ) Dft( *Yes )                        +
                    SpcVal( ( *Yes *Replace ) ( *No *NoReplace ) )  +
                    Expr( *Yes ) PmtCtl( *PmtRqs )                  +
                    Prompt( 'Replace program' )

           Parm     Kwd( Aut ) Type( *Name ) Len( 10 )              +
                    Dft( *LibCrtAut )                               +
                    SpcVal( ( *LibCrtAut ) ( *Change ) ( *All )     +
                            ( *Use ) ( *Exclude ) )                 +
                    Expr( *Yes ) PmtCtl( *PmtRqs )                  +
                    Prompt( 'Authority' )

           Parm     Kwd( GenOpt ) Type( *Char ) Len( 11 )           +
                    Dft( ) Rstd( *Yes )                             +
                    SpcVal( ( *Gen ) ( *NoGen )                     +
                            ( *NoList ) ( *List )                   +
                            ( *NoXRef ) ( *XRef )                   +
                            ( *NoAtr ) ( *Atr )                     +
                            ( *AdpAut ) ( *NoAdpAut )               +
                            ( *SubScr ) ( *NoSubScr )               +
                            ( *UnCon )                              +
                            ( *SubStr ) ( *NoSubStr )               +
                            ( *ClrPSSA ) ( *NoClrPSSA )             +
                            ( *ClrPASA ) ( *NoClrPASA )             +
                            ( *NoIgnDec ) ( *IgnDec )               +
                            ( *NoIgnBin ) ( *IgnBin )               +
                            ( *NoOverlap ) ( *Overlap )             +
                            ( *NoDup ) ( *Dup )                     +
                            ( *Opt ) ( *NoOpt ) )                   +
                    Max( 14 ) Expr( *Yes ) PmtCtl( *PmtRqs )        +
                    Prompt( 'Generation options' )

 QPgm:     Qual     Type( *Name ) Len( 10 ) Min( 1 ) Expr( *Yes )

           Qual     Type( *Name ) Len( 10 ) Dft( *CurLib )          +
                    SpcVal( ( *CurLib ) ) Expr( *Yes )              +
                    Prompt( 'Library' )

 QSrcFile: Qual     Type( *Name ) Len( 10 ) Dft( QMISrc )           +
                    Expr( *Yes )

           Qual     Type( *Name ) Len( 10 ) Dft( *LibL )            +
                    SpcVal( ( *LibL ) ( *CurLib ) ) Expr( *Yes )    +
                    Prompt( 'Library' )

Comparing the command's parameters to API QPRCrtPg's aforementioned input requirements shows that many of the API's parameters come directly from command CrtMIPgm. Command CrtMIPgm does not let you specify the following parameters required by QPRCrtPg:

* Intermediate representation of the program. The MI source statements you present to API QPRCrtPg are simply a string that can be derived any way you wish (e.g., field contents, array contents). The CrtMIPgm utility, however, requires that you place the MI statements in a source file.
* Length of the intermediate representation of the program. This is a calculated value.
* Source file last changed date and time. CrtMIPgm retrieves this information from the source file member used to create the program.
* Qualified printer file name. For simplicity's sake, the utility always uses print file QSysPrt.
* Starting page number. For simplicity's sake, the utility always starts generated listings on page 1.
* Number of option template entries. This is a calculated value.
* API error structure. This structure is internal to the programs used by CrtMIPgm.

You may also notice that I mentioned you could specify up to 16 generation options in the option template parameter, but that command CrtMIPgm allows a maximum of 14. The command specifies an individual parameter for two of these options, UsrPrf (user profile) and Replace (replace program). Program CrtMIPgmR adds the UsrPrf and Replace parameters to the option template before calling API QPRCrtPg.

Command processing program CrtMIPgmC is a CLLE program that runs in a new activation group. CrtMIPgmC follows.

/* =============================================================== */
/* = Program....... CrtMIPgmC                                    = */
/* = Description... Create Machine Interface Program             = */
/* =                Command processing program for CrtMIPgm      = */
/* = Source type... CLLE                                         = */
/* = Compile....... CrtBndCL Pgm(YourLib/CrtMIPgmC)              = */
/* =                         DftActGrp(*No)                      = */
/* =                         ActGrp(*New)                        = */
/* =============================================================== */

           Pgm      Parm( &Pgm &SrcFile &SrcMbr &Text            +
                          &UsrPrf &Replace &Aut &GenOpt )

/* =============================================================== */
/* = Declarations                                                = */
/* =============================================================== */

           Dcl      &Pgm        *Char ( 20 )
           Dcl      &SrcFile    *Char ( 20 )
           Dcl      &SrcMbr     *Char ( 10 )
           Dcl      &Text       *Char ( 50 )
           Dcl      &UsrPrf     *Char ( 10 )
           Dcl      &Replace    *Char ( 10 )
           Dcl      &Aut        *Char ( 10 )
           Dcl      &GenOpt     *Char ( 156 )
           Dcl      &PgmNm      *Char ( 10 )
           Dcl      &PgmLib     *Char ( 10 )
           Dcl      &SrcFileNm  *Char ( 10 )
           Dcl      &SrcFileLib *Char ( 10 )
           Dcl      &SrcChgDate *Char ( 13 )
           Dcl      &CurText    *Char ( 50 )
           Dcl      &NbrCurRcd  *Dec  ( 10 0 )
           Dcl      &ErrorFlag  *Lgl
           Dcl      &MsgID      *Char ( 7 )
           Dcl      &MsgDta     *Char ( 256 )
           Dcl      &MsgF       *Char ( 10 )
           Dcl      &MsgFLib    *Char ( 10 )

/* =============================================================== */
/* = Global error monitor                                        = */
/* =============================================================== */

           MonMsg   ( CPF0000 MCH0000 ) Exec( GoTo Error )

/* =============================================================== */
/* = Initialization                                              = */
/* =============================================================== */

           ChgVar   ( &PgmNm )      ( %Sst( &Pgm 1 10 ) )
           ChgVar   ( &PgmLib )     ( %Sst( &Pgm 11 10 ) )
           ChgVar   ( &SrcFileNm )  ( %Sst( &SrcFile 1 10 ) )
           ChgVar   ( &SrcFileLib ) ( %Sst( &SrcFile 11 10 ) )

           If       ( &SrcMbr *Eq '*PGM' )                       +
                      ChgVar ( &SrcMbr ) ( &PgmNm )

           RtvMbrD  File( &SrcFileLib/&SrcFileNm ) Mbr( &SrcMbr ) +
                    SrcChgDate( &SrcChgDate ) Text( &CurText )    +
                    NbrCurRcd( &NbrCurRcd )

           If       ( &Text *Eq '*SRCMBRTXT' )                   +
                      ChgVar ( &Text ) ( &CurText )

           If       ( &Aut *Eq '*LIBCRTAUT' ) Do
           RtvLibD  Lib( &PgmLib ) CrtAut( &Aut )
           If       ( &Aut *Eq '*SYSVAL' )                       +
                      RtvSysVal SysVal( QCrtAut ) RtnVar( &Aut )
           EndDo

/* =============================================================== */
/* = Call program to create MI program                           = */
/* =============================================================== */

           OvrPrtF  File( QSysPrt ) SplFName( &PgmNm )           +
                    OvrScope( *Job )

           Call     CrtMIPgmR ( &Pgm &SrcFile &SrcMbr &Text      +
                                &UsrPrf &Replace &Aut &GenOpt    +
                                &SrcChgDate &NbrCurRcd           +
                                &MsgID &MsgDta )

           If       ( &MsgID *NE ' ' )                           +
                      SndPgmMsg MsgID( &MsgID ) MsgF( QCPFMsg )  +
                                MsgDta( &MsgDta )                +
                                ToPgmQ( *Same )                  +
                                MsgType( *Escape )

/* =============================================================== */
/* = Exit program                                                = */
/* =============================================================== */

           DltOvr   File( QSysPrt ) Lvl( *Job )
           Return

/* =============================================================== */
/* = Error routine                                               = */
/* =============================================================== */

 Error:    If       ( &ErrorFlag ) Return
           ChgVar   ( &ErrorFlag ) ( '1' )
           DltOvr   File( QSysPrt ) Lvl( *Job )
           RcvMsg   MsgType( *Excp ) MsgDta( &MsgDta )           +
                    MsgID( &MsgID ) MsgF( &MsgF )                +
                    MsgFLib( &MsgFLib )
           MonMsg   ( CPF0000 MCH0000 )

 SndMsg:   SndPgmMsg MsgID( &MsgID ) MsgF( &MsgFLib/&MsgF )      +
                    MsgDta( &MsgDta ) MsgType( *Escape )
           MonMsg   ( CPF0000 MCH0000 )

           EndPgm

Program CrtMIPgmC begins by initializing a few pieces of information, including:

* Source member name
* Source change date and time
* Text
* Public authority

Next, the program overrides print file QSysPrt so that the spooled file name of any generated spooled file placed in an output queue will be the same as the program being created. This simplifies the act of locating any generated listings and is consistent with other program creation commands. The program then calls RPG program CrtMIPgmR, whose job it is to construct the parameters required by API QPRCrtPg and then invoke the API. Notice that the final two parameters on the call to CrtMIPgmR are MsgID (message identifier) and MsgDta (message data). If the call to API QPRCrtPg fails, program CrtMIPgmR extracts the message identifier and the message data reported in the standard API error structure and returns that information so that CrtMIPgmC can use it to report the error to the user.

The final program used in compiling an MI program, CrtMIPgmR, follows.

 // ================================================================
 // = Program....... CrtMIPgmR                                     =
 // = Description... Create Machine Interface Program              =
 // = Source type... RPGLE                                         =
 // = Compile....... CrtBndRPG Pgm(YourLib/CrtMIPgmR)              =
 // =                          DftActGrp(*No)                      =
 // =                          ActGrp(*Caller)                     =
 // ================================================================

FMISrc     IF   F   92        Disk    UsrOpn
F                                     ExtFile( SrcF )
F                                     ExtMbr( SrcMbr )

 // ================================================================
 // = Entry parameters                                             =
 // ================================================================

D EntryParms      Pr                  ExtPgm( 'CRTMIPGMR' )
D  ParameterIn                  20
D  ParameterIn                  20
D  ParameterIn                  10
D  ParameterIn                  50
D  ParameterIn                  10
D  ParameterIn                  10
D  ParameterIn                  10
D  ParameterIn                        LikeDS( GenOptInModel )
D  ParameterIn                  13
D  ParameterIn                  10P 0
D  ParameterOut                  7
D  ParameterOut                256

D EntryParms      PI
D  Pgm                          20
D  SrcFile                      20
D  SrcMbr                       10
D  Text                         50
D  UsrPrf                       10
D  Replace                      10
D  Aut                          10
D  GenOptIn                           LikeDS( GenOptInModel )
D  SrcChgDate                   13
D  NbrCurRcd                    10P 0
D  MsgID                         7
D  MsgDta                      256

 // ================================================================
 // = Procedure prototypes                                         =
 // ================================================================

D CreateProgram   Pr                  ExtPgm( 'QPRCRTPG' )
D  ParameterIn                  80    Dim( 32767 )
D  ParameterIn                  10I 0
D  ParameterIn                  20
D  ParameterIn                  50
D  ParameterIn                  20
D  ParameterIn                  10
D  ParameterIn                  13
D  ParameterIn                  20
D  ParameterIn                  10I 0
D  ParameterIn                  10
D  ParameterIn                 176
D  ParameterIn                  10I 0
D  ParameterIO                        LikeDS( StdErrorModel )

 // ================================================================
 // = Data definitions                                             =
 // ================================================================

D StdErrorModel   DS                  Qualified
D                               10I 0 Inz( %Size( StdErrorModel ))
D  BytesAvail                   10I 0 Inz( *Zero )
D  MsgID                         7    Inz( *Blank )
D                                1    Inz( X'00' )
D  MsgDta                      256    Inz( *Blank )

D GenOptInModel   DS                  Based( GenOptInModelPtr )
D                                     Qualified
D  NbrOpts                       5I 0
D  Opt                         154

D StdError        DS                  LikeDS( StdErrorModel )
D                                     Inz( *LikeDS )

D GenOpt          S            176
D NbrGenOpts      S             10I 0
D Src             S             80    Dim( 32767 )
D SrcF            S             21
D SrcLen          S             10I 0
D PrtF            S             20    Inz( 'QSYSPRT   *LIBL' )
D StrPage         S             10I 0 Inz( 1 )
D Pos             S              5I 0
D Index           S              5I 0

IMISrc    NS
I                                 13   92  SrcInfo

 /Free

  // ===============================================================
  // = Open source file                                            =
  // ===============================================================

  SrcF = %Trim( %Subst( SrcFile : 11 : 10 )) + '/' +
         %Trim( %Subst( SrcFile : 1 : 10 ));
  Open MISrc;

  // ===============================================================
  // = Set parameters for API to create program                    =
  // ===============================================================

  SrcLen = NbrCurRcd * 80;

  Pos = ( GenOptIn.NbrOpts * 11 );
  If GenOptIn.NbrOpts > *Zero;
     GenOpt = %Subst( GenOptIn.Opt : 1 : Pos );
  EndIf;
  Pos = Pos + 1;
  %Subst( GenOpt : Pos : 11 ) = Replace;
  Pos = Pos + 11;
  %Subst( GenOpt : Pos : 11 ) = UsrPrf;
  NbrGenOpts = GenOptIn.NbrOpts + 2;

  // ===============================================================
  // = Load source to instruction stream parameter                 =
  // ===============================================================

  Read MISrc;
  DoW Not( %EOF( MISrc ));
     Index = Index + 1;
     Src( Index ) = SrcInfo;
     Read MISrc;
  EndDo;

  // ===============================================================
  // = Close source file                                           =
  // ===============================================================

  Close MISrc;

  // ===============================================================
  // = Call API to create the MI program                           =
  // ===============================================================

  CreateProgram( Src : SrcLen : Pgm : Text : SrcFile : SrcMbr :
                 SrcChgDate : PrtF : StrPage : Aut : GenOpt :
                 NbrGenOpts : StdError );

  If StdError.BytesAvail <> *Zero;
     MsgID = StdError.MsgID;
     MsgDta = StdError.MsgDta;
  EndIf;

  *InLR = *On;

 /End-Free

The program begins with a file specification naming MISrc as an input file. This is a program-described file used in reading your MI source member. The program actually opens the source file and member specified by the values in fields SrcF and SrcMbr, which appear on the ExtFile (external file) and ExtMbr (external member) keywords, respectively. For simplicity's sake, I didn't allow for full flexibility in the size of the source record; it just didn't seem necessary. The source file record should be 92 bytes in size (the default). This gives you 80 bytes of source data per record.

After defining the entry parameters, the program defines the procedure prototype for API QPRCrtPg. Notice that the first parameter is defined as 80 bytes in length (remember, that's the source data length) and is defined as an array with a dimension of 32767. The elements in this array will contain the source statements, with each element being a record in the source file. If you need more than 32767 records for a single MI program, chances are you've missed the point in your design!

Next appear the data definitions. They're straightforward and need no discussion. Finally, we come to the processing. The basic steps are as follows:

1. Open the source file.
2. Initialize the source length, generation option template, and number of generation options parameters needed by API QPRCrtPg.
3. Read the source file and store source records in the array passed to API QPRCrtPg.
4. Close the source file.
5. Call API QPRCrtPg to create the program and, if the API returns an error condition, report that condition to the caller by setting the MsgID and MsgDta parameter values.

********************************************************************************

Question: Hi, can anyone suggest a way to update a physical file through a CLLE program? I am not interested in using RUNSQLSTM.
Regards, Bodhi

Answer(s): Hello,

1) Use STRQMQRY, or create your own command based on it: http://www.as400pro.com/servlet/sql.tipView?key=159&category=SQL

2) For a static statement, call QZDFMDB2:
   CALL QZDFMDB2 ('update yourlib.yourfile set field=1 where field=2')
or use QShell:
   QSH CMD('db2 ''update yourlib.yourfile set field=1 where field=2''')

Regards, Tom

mail4bodhi@yahoo.co.in, 2005-06-29, 16:52:


Create your own RUNSQL command for use in CLP.

1. Create a new RUNSQL member in source file QQMQRYSRC, enter a single line as follows, and save it:

&V1&V2&V3&V4&V5&V6&V7&V8&V9&V10

2. Create a *QMQRY object named RUNSQL from this source:

CRTQMQRY QMQRY(QGPL/RUNSQL)

3. Create a CLP program named RUNSQL as follows:

PGM        PARM(&STMT)
DCL        VAR(&STMT) TYPE(*CHAR) LEN(550)
DCL        VAR(&V1)   TYPE(*CHAR) LEN(55)
DCL        VAR(&V2)   TYPE(*CHAR) LEN(55)
DCL        VAR(&V3)   TYPE(*CHAR) LEN(55)
DCL        VAR(&V4)   TYPE(*CHAR) LEN(55)
DCL        VAR(&V5)   TYPE(*CHAR) LEN(55)
DCL        VAR(&V6)   TYPE(*CHAR) LEN(55)
DCL        VAR(&V7)   TYPE(*CHAR) LEN(55)
DCL        VAR(&V8)   TYPE(*CHAR) LEN(55)
DCL        VAR(&V9)   TYPE(*CHAR) LEN(55)
DCL        VAR(&V10)  TYPE(*CHAR) LEN(55)
CHGVAR     &V1  %SST(&STMT 001 55)
CHGVAR     &V2  %SST(&STMT 056 55)
CHGVAR     &V3  %SST(&STMT 111 55)
CHGVAR     &V4  %SST(&STMT 166 55)
CHGVAR     &V5  %SST(&STMT 221 55)
CHGVAR     &V6  %SST(&STMT 276 55)
CHGVAR     &V7  %SST(&STMT 331 55)
CHGVAR     &V8  %SST(&STMT 386 55)
CHGVAR     &V9  %SST(&STMT 441 55)
CHGVAR     &V10 %SST(&STMT 496 55)

STRQMQRY   QMQRY(RUNSQL) SETVAR((V1 &V1) (V2 &V2) +
             (V3 &V3) (V4 &V4) (V5 &V5) (V6 &V6) +
             (V7 &V7) (V8 &V8) (V9 &V9) (V10 &V10))

ENDPGM

4. Create a command named RUNSQL as follows:

CMD        PROMPT('Run SQL Statement')
PARM       KWD(STMT) TYPE(*CHAR) LEN(550)
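Once the QMQRY object, the CLP program, and the command are in place, any CL program can run an ad hoc SQL statement through the new command. A usage sketch, with file and field names invented purely for illustration:

```
RUNSQL     STMT('UPDATE MYLIB/MYFILE SET FLD1 = 1 WHERE FLD2 = 2')
```

The CLP simply splits the 550-byte statement into ten 55-byte substitution variables, and the QM query joins them back together, so any statement up to 550 characters can be passed.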

DDS Reference: Printer Files

CCSID (Coded Character Set Identifier) keyword

Use this file-, record-, or field-level keyword to specify that a G-type field supports UCS-2 level 1 data instead of DBCS-graphical data. Each UCS-2 character is two bytes long. The format of the keyword is:

CCSID(UCS2-CCSID | &UCS-2-CCSID-field | *REFC [*CONVERT | *NOCONVERT] [alternate-field-length])

The UCS-2-CCSID parameter is required. Use it to specify a CCSID that uses the UCS-2 Level 1 encoding scheme for this field. You can specify the UCS-2-CCSID parameter either as a number up to 5 digits long or as a program-to-system field. You must define the program-to-system field with a length of 5 and with the S data type.

You can specify a special value of *REFC instead of a UCS-2-CCSID value. It is only valid on reference fields, and you must code the referenced field with a CCSID keyword that specifies a UCS-2-CCSID value. Normally, the printer file CCSID keyword would override any CCSID keyword attributes taken from the referenced field. If you specify *REFC, the UCS-2-CCSID value comes from the referenced field.

The *CONVERT parameter is optional and specifies whether the UCS-2 data is converted to a target CCSID specified on the CHRID parameter of the CRTPRTF, CHGPRTF, or OVRPRTF commands. *CONVERT is the default. If you specify the CCSID keyword with *NOCONVERT, the UCS-2 data is not converted to the target CCSID.

If *NOCONVERT is active for a printer file whose DEVTYPE is *AFPDS, the application must also use either a TrueType font or one of the AFP Unicode migration fonts. If you do not specify either a TrueType font or one of the AFP Unicode migration fonts, the output will be interpreted as single-byte data and will probably be unprintable.

If *NOCONVERT is active for a printer file whose DEVTYPE is *LINE or *AFPDSLINE, the application must also use one of the AFP Unicode migration fonts. If you do not specify an AFP Unicode migration font, the output will be interpreted as single-byte data and will probably be unprintable.

If *NOCONVERT is active and the file DEVTYPE is *AFPDS, specify a TrueType font with the FONTNAME keyword, or specify an AFP Unicode migration font character set and code page with the FNTCHRSET keyword. If the file DEVTYPE is *LINE or *AFPDSLINE, specify the AFP Unicode migration font character set and code page in the page definition for the printer file.

If *NOCONVERT is specified for a printer file whose DEVTYPE is *SCS, a diagnostic message is issued when the printer file is used, and the UCS-2 data is converted to the target CCSID.

The alternate-field-length parameter is optional and is valid only when you specify the CCSID keyword at the field level and the *CONVERT parameter is active. Specify the alternate-field-length as the number of UCS-2 characters.

When UCS-2 data is involved in an output operation and the *CONVERT parameter is active, the data is converted from the associated UCS-2 CCSID to the target CCSID. Generally, the length of the data will change when this conversion occurs. Therefore, you can use the alternate-field-length value to specify a printed field length that is different from the default printed field length. The default printed field length of a 'G' data type field is twice the number of characters that are specified for the field length.
The alternate-field-length value can help avoid truncation of field data when the data length will be longer after conversion than the default printed field length. The alternate-field-length value can also help increase the available line space by limiting the printed field length when the data length will be shorter after conversion. The field length will still be used to define the field's output buffer length.

For example, a printer file contains the following line:

FLD1 10G 2 2 CCSID(X Y)

X is the UCS-2 CCSID associated with the field data. Y is the alternate-field-length of this field. If you did not specify Y, then the default printed field length of FLD1 is 20 printed positions (twice the number of UCS-2 characters specified on the field length).

If you know that the UCS-2 data is constructed from single-byte data, you could specify the alternate-field-length, Y, as 5 UCS-2 characters; FLD1 would have a printed field length of 10 printed positions (twice the number of UCS-2 characters specified on the alternate-field-length).

If you know that the UCS-2 data is constructed from double-byte data, you could specify the alternate-field-length, Y, as 11 UCS-2 characters; FLD1 would have a printed field length of 22 printed positions (twice the number of UCS-2 characters specified on the alternate-field-length). This allows space for the shift-out and shift-in characters.

If you specify the CCSID keyword at the field level and either the record or the file level, the field-level keyword takes precedence. If you specify the CCSID keyword at the file or record level and no G-type fields exist, then a compile error is signalled. On output, field data that is longer than the specified field length is truncated.

The CCSID keyword is not valid for files whose DEVTYPE is *IPDS. You can specify the CCSID keyword with all keywords that are currently allowed on a G-type field. Option indicators are not valid for this keyword.
Example: The following example shows how to specify the CCSID keyword.

|...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
00010A                                      CCSID(13488)
00010A          R RECORD1
00020A            FIELD1        30G
00030A            FIELD2        10G         CCSID(61952 *CONVERT 6)
00010A          R RECORD2                   CCSID(61952 *NOCONVERT)
00020A            FIELD3        20G
     A

FIELD1 is assigned a CCSID value of 13488. FIELD2 is assigned a CCSID value of 61952 and has an alternate field length of 6 UCS-2 characters (12 SBCS characters). FIELD3 is assigned a CCSID value of 61952, and the data is not converted during an output operation.


Presenting RPG Subfiles in HTML
Joe Pluta - 01:01am Jul 1, 2000 PST

In the April 2000 issue of Midrange Computing, I wrote an article entitled "HTML: The New 5250," in which I showed how HTTP mirrored the 5250 protocol and suggested that HTML could become the new 5250. In that article, I focused on the endpoint of that transition, a JavaServer Page (JSP) that emulates a subfile, but glossed over the details. Well, I'm now going to explain the steps required to get from green to graphical.

Architecture

Emulating a subfile in HTML is another example of the revitalization architecture that I introduced in previous articles. The idea is to replace I/O operations in the original, monolithic RPG program with calls to an API. The API then forwards the requests to an object that emulates a display file (a display file proxy), and, finally, a user interface retrieves the data from the proxy and presents it to the user.

Figure 1 shows the original 5250 protocol. The application program communicates with the device description, which, in turn, exchanges data with the 5250 device. In Figure 2, an intelligent client sits on the workstation and communicates with the display file proxy. This thick client has full GUI capabilities and, if written in Java, can be easily ported to any workstation. The primary disadvantage is that an application-specific piece of code must reside on the workstation.

Figure 3 (page 84) shows the thin-client option, which uses HTML as the communication vehicle. Any browser can be used on the client, and no application-specific code needs to be kept up to date. The downside is that the interface is limited to HTML, but an HTML-only interface is perfectly capable of supporting subfile emulation. In both graphical solutions, the client/server APIs communicate with the display file proxy object, so the application client is identical for the two approaches. I'm going to focus on the thin-client solution using the WebSphere Application Server, servlets, and JSPs.
The Logic of Subfiles

To do this, I first need to examine how to program subfiles. There are several different programming techniques, but this article deals with the simplest one, the fully loaded subfile. To load and display this type of subfile, follow these steps:

1. Clear the subfile with a WRITE to the subfile control record that has SFLCLR enabled.
2. Loop through your data, writing to the subfile one record at a time.
3. Display the data using an EXFMT to the subfile control record.

Figure 4 shows the data flow of the output cycle. Once the user finishes entering data and presses a command key, the application program reads the data from the subfile using either CHAIN or READC.

The HTML Table

Figures 5 and 6 show an example of the source and output of a simple two-row table with headings. HTML tables are dynamically created from tags. A table consists of a table definition, which consists of row definitions, which, in turn, contain either column headings or data elements. There are many other parameters to a table, and you should use one of the many HTML editing tools to actually create and format the appearance of your table. The tool creates the tags; all you have to do then is fill in the data between the tags.

You may have noticed that the table has only output fields. To make the first cell in the first row input-capable, you replace the data in the first cell with the following HTML input field definition:

<input type="text" name="T1" value="Row 1, Column 1 Data">
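The three-step load described under "The Logic of Subfiles" might be sketched in fixed-form RPG IV roughly as follows. The file, format, field, and indicator names (CUSTPF, SFL1, SFLCTL1, RRN1, *IN31) are hypothetical, not from the article; assume the display file F-spec carries SFILE(SFL1:RRN1) and the DDS conditions SFLCLR on indicator 31:

```
     C* Step 1: clear the subfile (SFLCLR conditioned on *IN31 in the DDS)
     C                   EVAL      *IN31 = *ON
     C                   WRITE     SFLCTL1
     C                   EVAL      *IN31 = *OFF
     C* Step 2: loop through the data, writing one subfile record at a time
     C                   READ      CUSTPF
     C                   DOW       NOT %EOF(CUSTPF)
     C                   EVAL      RRN1 = RRN1 + 1
     C                   WRITE     SFL1
     C                   READ      CUSTPF
     C                   ENDDO
     C* Step 3: display the loaded subfile via the control record
     C                   EXFMT     SFLCTL1
```

The same WRITE/EXFMT op codes are what the revitalization approach later replaces with API calls.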


Doing this gives you the table shown in Figure 7. Other issues arise when you start talking about input fields. For example, you have to create a form, which associates the input fields with buttons on a Web page. That is outside the scope of this article but is covered in detail in the many excellent HTML books available. You can also learn about creating forms by visiting the World Wide Web Consortium's Web site at www.w3c.org.

The JSP Implementation

Finally, you have to get the data from the proxy into the table. The cleanest way to do this is to use a JSP. A JSP is, in essence, a fill-in-the-blanks HTML document; the blanks are filled in by calls to a JavaBean. In the JSP/servlet architecture, the bean is populated by the servlet and passed as a parameter to the JSP. When emulating a subfile, the display file proxy is the JavaBean; it contains all the data that would normally be written to the subfile.

A bit of finesse is needed to define the table. You know the layout of a single row, but you don't know exactly how many rows are to be displayed. This is where the second feature of JSP, scripting, comes into play. Using scriptlets, you code a Java loop right into the HTML that will execute for each row. Your display file proxy bean then needs just a couple of methods: one that gets the next row and another that returns the contents of a field in the current row. Figure 8 shows code that would replace the two hardcoded rows of Figure 5. The getField method must be smart enough to return an HTML input field definition for an input-capable field. The servlet retrieves the data from the fields and updates the subfile when the user submits the page.

Execute Code Reformat

What you've just read is a step-by-step process for moving a green-screen subfile to an HTML display with very little change to the original application program.
The original program replaces I/O op codes with API calls, and the servlet and display file proxy handle the bulk of the conversation from that point on. Only when an EXFMT op code is emulated does HTML (or, more precisely, the JSP) come into play, and, as you've seen, the JSP coding is really not very difficult. Visit www.java400.net/MC/MC200007index.htm for a complete, working example of an emulated subfile.

Figure 1: The original 5250 protocol featured a character-mode dumb terminal connected via twinax cabling.

Figure 2: In a typical thick-client solution, business logic resides on a powerful PC, and relational data resides on a host system.


Figure 3: With a thin-client, or browser-based, solution, business logic remains on the host system.

Figure 4: The logic required to populate a fully loaded subfile can be done in three basic steps: 1. WRITE control (SFLCLR); 2. loop, WRITE subfile; 3. EXFMT control.

<table border='4'>
<tr><th>Column 1 Heading</th><th>Column 2 Heading</th></tr>
<tr><td>Row 1, Column 1 Data</td><td>Row 1, Column 2 Data</td></tr>
<tr><td>Row 2, Column 1 Data</td><td>Row 2, Column 2 Data</td></tr>
</table>

Figure 5: The new user interface is HTML, and HTML tables effectively replace subfiles.

Figure 6: Even the most basic HTML tables present rows and columns of data with a little more pizzazz than could be mustered with a 5250 subfile.

Figure 7: The cells of HTML tables can be input-capable.

<% while (jdspf.nextRow()) { %>
<tr>
<td><%= jdspf.getField("COL1") %></td>
<td><%= jdspf.getField("COL2") %></td>
</tr>
<% } %>

Figure 8: A JSP view bean, such as jdspf, can be used to pull values from rows of a subfile constructed from an RPG application server.

Customizing Your Development with Extensible RPG

by Joel Cochran

I introduced you to service programs and the binder source language in my last two articles. Now that you have bound your procedures into service programs, we need to refine our development method. These finishing touches revolve around reusability and make this approach truly complete and low-maintenance. The result is, effectively, your own "extended" version of RPG.

The Ties That Bind

When we combined all of our procedures into a service program, we ended up with a shorter version of the Create Program (CRTPGM) command, like so:

CRTPGM PGM(MYLIB/MYPGM) MODULE(MYLIB/ENTRYMOD) BNDSRVPGM(MYLIB/MYSRVPGM)

As I discussed in "Service Programs With a Smile," this is much cleaner than listing a bunch of modules on the CRTPGM command. If all of your procedures exist in a single *SRVPGM object, this method will suit you well. This is unlikely, however, and I would certainly not recommend that approach. One common method is to group procedures into service programs based on some commonality, such as function: all the string manipulation procedures go into one service program, all the higher-math operations go into another service program, and so on.

Assuming, then, that you will have multiple service programs, you need to list all the service programs that contain the procedures referenced in your program on the BNDSRVPGM portion of the CRTPGM command:

CRTPGM PGM(MYLIB/MYPGM) MODULE(MYLIB/ENTRYMOD) BNDSRVPGM(MYLIB/MYSRVPGM1 MYLIB/MYSRVPGM2 MYLIB/MYSRVPGM3 MYLIB/MYSRVPGM4)

Now we've graduated to a longer command in the name of maintainability, which strikes me as counter-productive. Fortunately, once again, IBM has provided a better way. Enter the binding directory.

The concept of the binding directory is simple. It maintains a list of *SRVPGM and *MODULE objects, complete with the name and library of the listed objects. The compiler then references that list from top to bottom, in search of any procedures referenced in your program.
If the procedure is found, the compiler then binds the appropriate object into your program automatically. The size and contents of a binding directory are irrelevant to the created program, because only required objects are bound to your program, not every object in the binding directory. The binding directory is just a reference tool for the compiler: the actual bindings will be handled just as if you had typed them into the commands yourself. In other words, if an object is listed in the binding directory, you do not have to include it on the CRTPGM command. Instead, you need to reference the binding directory, like so:

CRTPGM PGM(MYLIB/MYPGM) MODULE(MYLIB/ENTRYMOD) BNDDIR(MYLIB/MYBNDDIR)

First, you need to create a binding directory. Binding directories have the object type *BNDDIR and can be created in any library. In fact, your programs can reference as many binding directories as you want, so feel free to organize them in the best way for your application. In my shop, we have one global binding directory and a series of application- or library-specific binding directories. However you decide to organize your binding directories, creating them couldn't be any easier; just issue the following command:

CRTBNDDIR BNDDIR(MYLIB/MYDIR)

Now that you've created the directory, it's time to add some entries. There are several ways to accomplish this task. The most direct method is to use the Add Binding Directory Entry (ADDBNDDIRE) command. Prompting this command will show you the three attributes: BNDDIR, OBJ, and POSITION. These are pretty self-explanatory, so I'll only point out a couple of things: the default OBJ value includes a type of *SRVPGM, which you will need to change to *MODULE

if you are adding a module. I don't recommend using modules this way outside of service programs, but this is a handy way to test your modules first. Also, the POSITION attribute defaults to *LAST, which will only matter if you have multiple procedures with the same name in your directory. Like a library list, the compiler will start at the top and quit the first time it finds the procedure name. As such, it seems futile to have multiple procedures with the same name, not to mention confusing - a situation worth avoiding altogether.

The command to remove an entry from a binding directory is RMVBNDDIRE. Prompting this command will reveal its simple and self-explanatory options. Another useful command is Work with Binding Directory Entries (WRKBNDDIRE), which by habit is my preferred method. This command will bring up the entire list of objects within a binding directory and will allow you to add and remove entries from the directory via a simple interface. This is much simpler than the ADDBNDDIRE option and has the added benefit of displaying the creation date and time of the objects listed. Adding entries this way will use the ADDBNDDIRE defaults, but once you have entered option 1 and the object name, you can prompt the command with F4.

The last and most functional option is Work with Binding Directories (WRKBNDDIR). You can specify a directory name of *ALL to view a list of all the *BNDDIR objects in your library list. This is also a simple way to create a binding directory, and it provides additional options for displaying a binding directory's entries (DSPBNDDIRE), deleting a directory (DLTBNDDIR), and working with the entries for a directory (WRKBNDDIRE).

Embedding the Directory Reference in Your Source

From the first discussion of the CRTPGM command in "Service Programs With a Smile," we have worked hard to simplify the CRTPGM command, and one more thing can be done to make it even shorter.
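Pulling the commands above together, a complete setup might look like the following sketch; the library, directory, and object names are placeholders carried over from the earlier examples:

```
CRTBNDDIR  BNDDIR(MYLIB/MYBNDDIR)
ADDBNDDIRE BNDDIR(MYLIB/MYBNDDIR) OBJ((MYLIB/MYSRVPGM1 *SRVPGM) +
             (MYLIB/MYSRVPGM2 *SRVPGM))
CRTPGM     PGM(MYLIB/MYPGM) MODULE(MYLIB/ENTRYMOD) +
             BNDDIR(MYLIB/MYBNDDIR)
```

Only the service programs that actually export a referenced procedure end up bound to MYPGM; the rest of the directory is ignored.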
Now that you have your binding directory, the command looks like this:

CRTPGM PGM(MYLIB/MYPGM) MODULE(MYLIB/ENTRYMOD) BNDDIR(MYLIB/MYBNDDIR)

And as I mentioned, you can have as many binding directories as you want. So if you need several directories, the command starts to grow again:

CRTPGM PGM(MYLIB/MYPGM) MODULE(MYLIB/ENTRYMOD) BNDDIR(MYLIB/MYBNDDIR1 MYLIB/MYBNDDIR2 MYLIB/MYBNDDIR3)

Once again, you see that this could become a real maintenance headache. Fortunately, you can specify binding directories on the header specifications, or "H-specs," of RPG IV source members. Multiple binding directories can be specified by listing them individually:

h bnddir('SERVICELIB/SERVICEDIR')
h bnddir('CGILIB/CGIBNDDIR')
h bnddir('CGILIB2/CGI2BNDDIR')

Multiple binding directories can also be specified by separating their entries with a colon (:):

h bnddir('MYLIB/MYBNDDIR1':'MYLIB/MYBNDDIR2':'MYLIB/MYBNDDIR3')

Now the reference is handled by the individual modules, and once it's embedded in the source code you no longer have to reference the directories when you create the program. Now the command is finally whittled down to something like this:

CRTPGM PGM(MYLIB/MYPGM) MODULE(MYLIB/ENTRYMOD)

The Lowly /COPY Book

That is about as bare-bones as CRTPGM can get, so we have finally reached the end of our low-maintenance strategy, right? Well, not quite yet, but almost. There are still some things we can do to ease maintenance. The first is to embrace our past and take another look at /COPY. Regardless of its naysayers, good old /COPY cannot be ignored. In fact, if you've gotten this far, you should already be using it to copy in procedure prototypes. Working on that assumption, you can also put your H-specs in a /COPY.

Why would you want to do this? I'll give you an example from my early days of RPG IV adoption. One of my first tasks was to convert an entire application from RPG III to RPG IV.
After the initial conversion, I learned that debug could be a real pain without using OPTION(*NODEBUGIO). I wanted to be sure that every program had this option, so I put one of my programmers to the task of inserting this H-spec into all of the application's source members. In an instance of unfortunate timing, I was busy learning the glories of binding directories, and so just as that person was finishing up, I asked her to repeat the exercise and insert the BNDDIR statement for our new binding directory. The smoke that came from her ears prompted me to find another way.

I found that way in the lowly /COPY member. By using /COPY to set my H-specs, I now have a single point of entry and can guarantee some semblance of consistency and enforcement of compilation rules, as long as each member uses that /COPY. This method works great for any items that you want to include by default in your source.
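For example, a minimal H-spec /COPY member of the kind described here might contain nothing more than the two statements this story calls for; the binding directory name below reuses one of the earlier H-spec examples and is illustrative only:

```
     h option(*nodebugio)
     h bnddir('SERVICELIB/SERVICEDIR')
```

Any source member that pulls this in with /COPY then compiles with *NODEBUGIO and the shop binding directory automatically, and a future shop-wide change is a one-member edit.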

I mentioned before that I have a global binding directory and then a binding directory for each application. I follow the same model for /COPY books: I have a set of global /COPY members that I reference in every RPGLE source member I create, and then another one for each application. The template I use to create new RPGLE source members starts off like this:

/copy rpgnextlib/qrpglesrc,cp_hspecs
 * Prototypes
/copy rpgnextlib/qrpglesrc,cp_longprs
 * Constants
/copy rpgnextlib/qrpglesrc,cp_const

By referencing all my procedures through a binding directory, found in the cp_hspecs /COPY, and including all the prototypes in the cp_longprs member, I always have access to all of my procedures. I've even gone a step further: I found myself repeatedly creating the same constant variables over and over from program to program, so I started putting them in a single /COPY as well, cp_const. The result is that I have basically created my own personalized version of RPG, complete with my own built-in functions and constants. When used in this manner, RPG is essentially "extensible." And that's only for starters: I can continue extending the language for each application by adding application-specific BIFs and constants, again pulled into all the application source members via /COPY.

A quick caveat: I have frequently simplified this even further by nesting /COPY statements in other /COPY members. However, when I started writing SQLRPGLE programs, I quickly learned that the SQL precompiler will not allow nested /COPYs. Also, I know many programmers express a certain level of angst over /COPY, and frequently with good reason based on their experience. I would like to make it clear that I do not advocate using /COPY to clone program logic, subroutines, or subprocedures. Doing so defeats the purpose of modularized code, because you still end up with multiple copies of the same source (granted, in the program object), instead of one centralized object.
Since our goal is to decrease maintenance work, that would be a hindrance, because now, in the event of a source change, every module using that /COPY must be found and recompiled. There are other problems as well, such as variable definitions, that make this an unsound approach.

Putting It All Together

The best part of this whole approach is that, once established, it is very low maintenance. Here are some key points to remember:

- Put all of your subprocedures in service programs created with binder source.
- Reference all of your service programs via binding directories, and do so in H-specs.
- Standardize your H-specs, prototypes, and constants in /COPY members.

I am confident that adopting these techniques will make your maintenance simpler and your programming time more productive, not to mention that you, too, can have your own version of RPG.

Question: How do you initialize all the indicators in one shot in RPGLE?

Answer:

C                   MOVEA     *ALL'1'       *IN(25)

The above statement sets every indicator from *IN25 through *IN99 to '1'.

C                   MOVEA     *ALL'0'       *IN(01)

The above statement sets every indicator from *IN01 through *IN99 to '0'. This is how we can initialize all indicators in one go.

C                   MOVEA     '101'         *IN(25)

The above statement sets *IN25 = '1', *IN26 = '0', and *IN27 = '1'.

List of Interview Questions

1. Have you at any time interacted with the client directly and produced design specifications?
2. What is the process followed in the project you are currently working on? Who produces the design specifications?

3. How do you know a record exists without doing a READ or CHAIN?
4. What would be your approach in choosing between OPNQRYF and logical files? Which one would you go for?
5. How do you ensure that records don't get locked by a CHAIN or READ operation? (This was asked indirectly, using a subfile selection screen and a separate update screen.)
6. How do you call procedures in ILE?
7. What is the syntax for passing parameters to a procedure by value?
8. Have you used binder language, e.g. coding exports for procedures?
9. How do you code file field renames in ILE RPG?
10. How many months have you worked on the CHLY2K project?
11. In program B there is a submitted job, which is a call to program C. There is also a CALL to program D from B. How would you check in D that program C has been executed?

Started with display files:
- What are the keywords you must use when using a subfile?
- What happens when SFLSIZ > SFLPAG? What are the advantages and disadvantages?
- What happens when SFLSIZ = SFLPAG? What are the advantages and disadvantages?
- A few more questions related to the above.

RPG:
- What is the difference between DO WHILE and DO UNTIL?
- How do you find out whether a record is locked or not?
- What is the difference between RPG/400 and RPG IV?
- Is there any advantage to date variables in RPG IV?
- Have you used data areas?
- What is the difference between SETON LR and RETURN?
- What is the significance of the *INZSR routine? When and how many times will *INZSR execute? What type of code do you usually put in *INZSR? How is *INZSR affected when using either SETON LR or RETURN?
- How would you write a detail record without using the logic cycle?
- 5 or 6 questions related to the logic cycle and reports.

CL:
- How do you find out whether a job is a batch job or interactive?
- How many batch programs have you coded?

Others:
- Have you used message subfiles?
- What do you have to do in the display file when you are using a message subfile?
- What is the percentage of batch programs in the total number of programs you have coded so far?

- How to set on/off a group of indicators in a single statement.
- Difference between CHAIN and READE.
- Difference between CPYF and CRTDUPOBJ.
- Program cycle.

- Debugging in a batch job.
- What is the Local Data Area?
- How to access and define data areas in a program.
- What is the function of G in an exception message?
- RPG logic cycle. What is the driving file? How to distinguish it from other input files.
- What is the opcode used for reading a changed record from a subfile?
- What is *PSSR? What is its use?
- What are the conditions required for using the OPEN opcode on a file?
- What is a JOBD? What are its parameters?
- What is a subsystem and what are its parameters?
- Types of libraries in AS/400.

1. How can you determine the number of characters in a variable? Let's consider a variable X of length 20. Move the value 'ABC' to it. How do you determine how many characters X has?
2. Suppose you have 3 members in a database file. How do you read records from all the members without using CL (OVRDBF), i.e. handled exclusively in an RPG program?
3. In a CL program, how do you retrieve the system date in Y2K-compliant format?
4. You made some changes to a database but you don't want to save those changes now. How do you handle this?
5. How do you define a logical file which has a subset of columns from the physical file?
6. How do you define all the fields of a PF in an LF?
7. What is a composite key? Why is it necessary?
8. How do you do indexing in a physical file?
9. A few questions regarding join logical files.
10. I was asked 6 to 7 questions in SQL - related to SELECT, UPDATE, JOIN, etc.

1. A PF contains 50 fields; how can you update only 2 fields?
2. What is the purpose of the O and L specs?
3. What is a record address file, and how can you define it in RPG?
4. Can an indexed file be accessed in arrival sequence in RPG, and how?
5. When will DUMP and DEBUG be ignored?
6. What is the difference between the non-display attribute and hidden fields?
7. How can you distinguish arrays and tables?
8. What is the necessary command needed before OPNQRYF, and why?
9. Can you copy the records created by OPNQRYF to other files, and how?
10. What is meant by an input subfile? What are the keywords required? SFLINZ and SFLRNA.
11. How can you display a specific subfile page on the screen? - SFLRCDNBR
12. What is the use of SFLCSRRRN?
13. What is the necessary keyword needed to scroll subfile records?
14. How can you check a record's existence in a database file without causing I/O operations? - *RECORD, DSPFD CL command.
15. What is the function of the POST opcode?
16. What is the difference between a data area and a data queue?
17. How can you test a batch program using the interactive source debugger?
18. What is the default data type for numeric fields in a PF and in RPG?
19. What is the difference between packed and zoned fields?
20. Explain DDM and ICF.
21. About FTP?
22. Can you debug an ILE RPG program using ISDB?
23. About compiler directive statements.
24. Difference between SFLCLR and SFLINZ.
25. Different ways to pass data between programs. Which is the most efficient way?

26. use of lvlchk 27. dynslt 28. how can u dist. access path & dunamic select. 29. why would u prefer opnqryf than lf 30 when would u prefer lf 31. about field ref file 32. a fied ref file contains date fields and suggested not to expand but it requires 4 digits how can u achieve it 33. what is the use of RGZPF 34. what are the different types of accesspaths maintained on the file? 35. What are the necessary keywords required to code a message subfile? 36. What is the purpose of FRCDTA keyword? 37. What is the purpose of PUTOVR keyword? 38. Is it possible to create physical file without DDS? and how? 39. What is the purpose of indicators in RPG? 40. Explain about CUA and SAA? 41. What is class of service? 42. How many libraries can be there in a library list? (25) 43. What is folder? 44. What is Job Description? 45. What is Group job? 46. what are the two main attributes which govern the execution of a job? 47. What is a device file? 48. What are two types of read performed on Data Queues? 49. What is an authorization list? 50. How to change file attributes such as file size, file wait time, record wait time etc., permanently? 51. How do you detect unused spool storage? 52. What are the functions of remote job entry? 53. What is journalling and commitment control? 54. What is the purpose of Panel Groups? 55. How can a screen field that has changed since the last output operation be detected? 56. What would be the effect on the field where reverse image, underline, and high intensity? 57. Can more than one subfile record be displayed on one line? 58. What is the Arrary operation? 59. How do you use commitment control in RPG? 60. How do you achieve exceptional write in C-spec. 1. Have you ever maintained any of the system what you are having, In that process what are all the tasks you have done 2. Have you created any User profile on your machine, What are all the parameters you specify while creating user profile ? 
(Question is about user class, group profile, Job description, Initial program, Initial menu and Special authority)
3. Explain the security levels available on AS/400. What is the highest level of security and what does it do? What is your machine's security level (right now, at which security level are you working)?
4. How do we grant authority to an object? What types of authority can we give to an object?
5. There are three to four questions based on the authority checking mechanism. They are all based on situations, but I am not able to recollect those questions (they are mostly of type Yes or No).
6. Have you ever worked with Client Access? What did you do using Client Access?
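For question 48, the answer usually expected is read-with-wait versus read-without-wait: the QRCVDTAQ API takes a wait-time parameter, where zero returns immediately and a negative value waits indefinitely for an entry. The distinction can be sketched with Python's standard `queue` module as a rough analogy (the queue contents here are made up):

```python
import queue

dq = queue.Queue()            # stand-in for a FIFO data queue
dq.put("ORDER-0001")

# "Read without wait" (wait time 0): return immediately whether or not
# an entry exists; raises queue.Empty if nothing is queued.
msg = dq.get_nowait()

# "Read with wait": block until an entry arrives or the wait time expires.
# (On the AS/400, a negative wait time on QRCVDTAQ means wait forever.)
timed_out = False
try:
    dq.get(timeout=0.1)
except queue.Empty:
    timed_out = True

print(msg, timed_out)         # ORDER-0001 True
```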

? How do you carry out testing and what are the various methodologies we use in testing?
? Are we responsible for acceptance testing and how do we carry out the same?
? How do you analyze?
? How do you gather User Requirements?
? How do you break down the job of analysis into simpler sections so as to make the job easier?

? Who was the toughest user you came across and how did you convince him to co-operate with you?
? What do you think are the various reasons for which a user may not co-operate with you?
? Who is an easy user?
? There were certain other questions which were based on what I did at SAIL and the various projects I did at MGS.
? They were based on what the projects were, what all was involved in them and how I managed them.
? What are my roles and responsibilities?
? How do I track them and how do I report?
? What are the various milestones in the project and how are they reported?
? And suchlike other questions.

? Have you ever worked on an accounting package?
? Do you have an idea of accounts?
? Do you have any idea about insurance and insurance companies?
? Do you have an idea about the American insurance system and how it works?
? Have you ever worked on a system in which there is a continuous transfer of data from one machine to another machine (not FTP, nor necessarily from one AS/400 to another)?

1. What is a primary key?
2. What is a foreign key?
3. Give an example of a many-to-many relation. How do you resolve a many-to-many relation?
4. If you have a student file containing information about students and a class file containing information about classes, then what fields are identified as primary keys, and how many database files need to be designed to retrieve which student attends which class? Consider that any student can attend any class.
5. What are all the questions that can be asked of the User for designing a report based on his request?
6. If the User asks for an incorrect report, how do you react?
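Questions 3 and 4 go together: a many-to-many relation (any student can attend any class) is resolved by a third, junction file whose compound primary key is the pair of foreign keys, so three files are needed. A sketch using SQLite, with table and column names invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE student (student_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE class   (class_id   INTEGER PRIMARY KEY, title TEXT);
-- junction table resolves the many-to-many relation:
-- its primary key is the pair of foreign keys
CREATE TABLE enrollment (
    student_id INTEGER REFERENCES student(student_id),
    class_id   INTEGER REFERENCES class(class_id),
    PRIMARY KEY (student_id, class_id)
);
INSERT INTO student VALUES (1, 'Asha'), (2, 'Ben');
INSERT INTO class   VALUES (10, 'COBOL'), (20, 'RPG');
INSERT INTO enrollment VALUES (1, 10), (1, 20), (2, 20);
""")
rows = cur.execute("""
    SELECT s.name, c.title
    FROM enrollment e
    JOIN student s ON s.student_id = e.student_id
    JOIN class   c ON c.class_id   = e.class_id
    ORDER BY s.name, c.title
""").fetchall()
print(rows)  # [('Asha', 'COBOL'), ('Asha', 'RPG'), ('Ben', 'RPG')]
```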

1. How do you manipulate data in CL?
2. How do you track messages in a CL program?
3. What is OPNQRYF and what command do you specify before OPNQRYF?
4. What happens if you don't specify OVRDBF before OPNQRYF?
5. What is level check?
6. What is a subfile and how do you access a subfile in RPG?
7. How does RETURN vary between CL and RPG?
8. What do RETURN and SETON LR do?
9. Can you use the *INZSR subroutine more than once, and when does it get executed?
10. How do you access files in RPG?
11. What is a data structure?
12. What opcode do you use in RPG and RPGLE to convert a date?
13. What all can be defined in the D-spec in RPGLE?

1. What are the different kinds of testing?
2. What is normalization?
3. What is Visio?
4. What are the steps involved in the SDLC?
5. What are the things specified in the SRD?
6. What things are considered to prepare test forms, and what is specified in test forms?
7. What testing do you follow to test a program?
8. She gave me a requirement and told me to ask her questions to prepare the SRD.

You complete Requirement and Analysis at onsite and come back. If you still have some queries to resolve, how will you resolve them? Halfway down the development stage, the user changes the requirements and scope of the system. How will you react to it?
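For question 12, the usual answer is the MOVE opcode in fixed-form RPG, and the %DATE and %CHAR built-in functions (with a format code such as *ISO or *DMY) in RPGLE. The round trip being asked about is just parse-then-reformat, shown here in Python only because the idea is language-neutral:

```python
from datetime import datetime

# numeric/character *ISO-style value -> date
# (roughly what %DATE(value : *ISO) does in RPGLE)
d = datetime.strptime("20240131", "%Y%m%d").date()

# date -> character in another format
# (roughly what %CHAR(d : *DMY) with a separator does in RPGLE)
s = d.strftime("%d/%m/%Y")
print(s)  # 31/01/2024
```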

What is the difference between "Logical" and "Physical" diagrams? What are DFDs?
In all the stages of the Project Life Cycle, what methodology do you follow for system development (Waterfall or Spiral)?
How do you perform the testing of the system?
What are the different documents you prepare during the development of the system (i.e. during the Req. Analysis, Design etc.)? Tell them about the BFS, FDS and DDS (documents prepared if you follow the Metamor Quality Process).
How do you estimate for a task/project?
CASE STUDY: You have to develop a system which tells you how many barbers are needed in California. What questions will you ask during RA?
Just the name of the system to develop is known to you. How will you proceed?
With what frequency will you interact with the client during RA?
How do you do the Data Modeling/Process Modeling?
Delivery of the system is to be made in 4 days. A major issue comes up. Nobody is there to resolve it at onsite. What will you do?
During the task allocation, what is the involvement of the team members (in estimation etc.)?
How do you track the completion of tasks by the team members?
Ref file and its usage
Activation group and commitment control
SQLRPG compilation
Whether you have worked with a Client-AS/400 interface
FTP - default user password
Data Modeling
Should derived fields be kept in the physical files?
Why is knowledge of the functional specification necessary? Why will it affect your work if you are not given that knowledge?
Frequency of off-shore & on-site co-ordination
What will you do if there is some ambiguity in the specifications and you have a timeline to meet?
What is a primary key? What is a foreign key?
Give an example of a many-to-many relation. How do you resolve a many-to-many relation?
If you have a student file containing information about students and a class file containing information about classes, then what fields are identified as primary keys and how many database files can be designed to retrieve which student attends which class?
What are all the questions that can be asked of the User for designing a report based on his request? If the User asks for an incorrect report, how do you react?
