

If the expected result does not match the actual result, it is termed a BUG.

From the development side, the application is received in the form of a build. Test cases are prepared from the requirement document; each test case contains the expected result. The test cases are executed on the received build. If a test case passes, expected result = actual result. If a test case fails, expected result ≠ actual result. In case of a test case failure, it is considered a bug, and the bug is logged in a bug-tracking tool.


To avoid any confusion and to track all the bugs, they are logged into the bug-tracking tool. In a project, it is mandatory to log bugs in the bug-tracking tool, which acts as a central repository for all bugs. All the project members can use it. The project leads and managers can easily view the bugs and assign priorities as required. Since the bugs are logged, there is no possibility of missing them. The tool also documents the bugs identified and fixed; developers can use this record for defect-prevention analysis and reduce the rate of bugs. The bugs found on the testing side and the development side can be compared, which increases the effectiveness of unit/integration/system testing.


1. Bug ID - a unique number given to each logged bug.
2. Bug title - a one-line description of the bug.
3. Bug description - a detailed description of the bug, consisting of: module, screen name, function name, related CR numbers, step-by-step actions done, expected results, and actual results.
4. Assigned to - the person to whom the bug is assigned.
5. Tentative date - the date (subject to change) by which the bug has to be resolved.
6. Severity - the impact of the bug on the application: Show stopper (very high severity), High, Medium, Low, Cosmetic (very low severity).
7. Priority - how fast the bug has to be fixed: Low, Medium, High, Very high, Urgent.
8. CC list - the list of persons to whom the mail has to be sent.
9. Status - the current position of the bug. It takes one of the values: New, Open, Rejected, Deferred, Duplicate, Answer only, Fixed, Released, Closed, Re-open.
10. Attachments - any documents such as requirement docs, screenshots, etc.
11. Comments - any comments regarding bug fixes, modifications, etc.
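A minimal sketch of such a bug record, as it might appear in a bug-tracking tool. The field names and sample values here are illustrative, not those of any particular tool:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Bug:
    bug_id: int                      # 1. unique number for each logged bug
    title: str                       # 2. one-line description
    description: str                 # 3. module, screen, steps, expected/actual results
    assigned_to: str                 # 4. person the bug is assigned to
    tentative_date: str              # 5. target resolution date (subject to change)
    severity: str = "Medium"         # 6. Show stopper / High / Medium / Low / Cosmetic
    priority: str = "Medium"         # 7. Low / Medium / High / Very high / Urgent
    cc_list: List[str] = field(default_factory=list)      # 8. mail recipients
    status: str = "New"              # 9. New / Open / Rejected / Deferred / ...
    attachments: List[str] = field(default_factory=list)  # 10. docs, screenshots
    comments: List[str] = field(default_factory=list)     # 11. notes on fixes

# Hypothetical example entry, logged by a test engineer:
bug = Bug(101, "Login fails for 6-char username",
          "Steps: enter a 6-character name; expected: login succeeds; actual: error",
          assigned_to="dev.lead", tentative_date="2024-01-15",
          severity="High", priority="High")
```

A newly logged bug defaults to the New status, matching the life cycle described in the next section.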


NEW: The test engineer logs a new bug into the bug-tracking tool, sets the bug status as NEW and assigns it to the development lead.
OPEN: The development lead receives the bug in the NEW status. If the bug is valid, the status is changed to OPEN and the lead assigns it to a specific developer.
REJECTED: After receiving the bug in the NEW status, the development lead verifies whether it is valid. If the bug is invalid, the status is changed to REJECTED and the bug is assigned back to the test engineer.
DEFERRED: Once the development lead receives the bug in the NEW status, the bug is analyzed. If the bug is valid but requires code/design changes too large for this release, or is a low-severity bug, or has other problems, the bug status is changed to DEFERRED. Such bugs have less priority for that release.
DUPLICATE: When the development lead gets a bug in the NEW status, it is analyzed. If the lead finds that a bug with the same issue has already been logged, the status is moved to DUPLICATE and the bug is assigned back to the test engineer. In the comment section, the other bug ID has to be given for reference.
ANSWER ONLY: When the development lead receives a bug in the NEW status and the analysis identifies it not as a code issue but as a network/environmental issue that can be rectified easily, the status is moved to ANSWER ONLY and the bug is assigned back to the test engineer. In the comment section, the thing to be modified in the environment is given.
FIXED: Once the bug is moved to the OPEN status, the developer thoroughly goes through the bug description, tries out the same steps, identifies the code issue, fixes it, unit/integration tests it, changes the status to FIXED and assigns the modified code to the build engineer.

RELEASED: Once the build engineer receives the modified code, he generates a build, deploys it in the testing environment and sends it to the test lead; the test lead assigns the build to the concerned test engineer. At this stage the bug is in the RELEASED status. The test engineer retests it: if the retest passes, the status becomes CLOSED; if it fails, the status becomes RE-OPEN.
CLOSED: Once the test engineer receives the bug in the RELEASED status, he reads the description of the bug completely and re-executes the same test cases. If the test cases pass, the bug is fixed and the status is moved to CLOSED. The comment section has to be updated by the test engineer as "retesting done".
RE-OPEN:
i. When the test engineer receives the bug in the RELEASED status, he reads the description completely and re-executes the same test cases. If the test cases fail, the bug is moved to the RE-OPEN status and sent to the development lead. Once the development lead receives the bug, the status is changed to OPEN.
ii. When the bug is assigned to the test engineer in the REJECTED status, the engineer reads the comments and validates them. If valid, the bug is moved to the CLOSED status; if not, the status is changed to RE-OPEN, the comments are updated by the test engineer and the bug is sent back to the development lead.
iii. The same process is followed for the ANSWER ONLY and DUPLICATE statuses.
iv. Bugs with DEFERRED status come to the OPEN status in an appropriate release.
PRIORITY: It defines how important the bug fix is and how fast the bug has to be fixed.
SEVERITY: It explains the impact of the bug on the application.
RELATION BETWEEN SEVERITY AND PRIORITY: The two are not directly linked, but there are 4 cases:

High Severity, High Priority - a showstopper bug that has to be fixed in the current release.
High Severity, Low Priority - a showstopper bug that, for valid reasons, can be fixed in the next release.
Low Severity, High Priority - not a showstopper bug, but it has to be fixed in this release.
Low Severity, Low Priority - cosmetic bugs.
Note: Priority changes with time and release. Although priority changes, severity doesn't change.
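The bug life cycle described above can be sketched as a simple allowed-transitions map. This is an illustrative sketch of the flow as the text describes it; real tools define their own state names and transitions:

```python
# Each status maps to the statuses a bug may legally move to next,
# per the life cycle described in the text.
TRANSITIONS = {
    "New":         ("Open", "Rejected", "Deferred", "Duplicate", "Answer only"),
    "Open":        ("Fixed",),
    "Fixed":       ("Released",),
    "Released":    ("Closed", "Re-open"),
    "Rejected":    ("Closed", "Re-open"),
    "Duplicate":   ("Closed", "Re-open"),
    "Answer only": ("Closed", "Re-open"),
    "Deferred":    ("Open",),   # picked up again in an appropriate release
    "Re-open":     ("Open",),
}

def move(status, new_status):
    """Return the new status, or raise if the transition is not allowed."""
    if new_status not in TRANSITIONS.get(status, ()):
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status
```

For example, `move("New", "Open")` succeeds, while moving a NEW bug straight to CLOSED raises an error, since the text routes every closure through RELEASED, REJECTED, DUPLICATE or ANSWER ONLY first.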


RELEASE: A release is the period through which certain conditions, such as
a. Completely new functionalities.
b. Changes to the existing functionalities.
c. Bug fixes of the previous releases.
d. Bugs deferred in any of the previous releases.
are analyzed, planned, estimated, designed, coded and integrated with the existing functionalities. Everything is end-to-end tested and delivered into the production environment. A release can be a Change Request (CR) release (conditions a and b), a bug-fix release (conditions c and d), or both a CR and bug-fix release. The release process follows a sequential procedure.
PROCEDURE:
STEP 1: The release manager first interacts with the client and identifies the list of Change Requests that have to be delivered in the current release. The selection of CRs is based on factors like usage, priority, budget, etc.
STEP 2: Requirement Gathering Specification (RGS) sessions are conducted based on the CRs decided in step 1. All the requirements are discussed with the client and analyzed.
STEP 3: After analyzing the CRs, they are documented. That document is called the Release Notes. If any production or pre-production bugs are present from the last release, they are also added to the release notes.

The release notes consist of the complete list of CRs and Incident Reports (IRs) that are to be delivered in the current release. The release notes are reviewed thoroughly and sent to the client for approval. Once the client approves it, the document is frozen.

STEP 4: Using the release notes, the leads/managers prepare the plans and estimation. Based upon the estimation and the resources available, the release date is decided. Once the client approves the date, the release notes and release date are circulated among the project team.
STEP 5: The project members have to be completely clear about the total request and the date of the release, so internal sessions are conducted for every group in the project team.
STEP 6: The architects prepare the design documents using the release notes. The design documents can be updated and version controlled. Based upon the CRs and bug fixes, all three designs (front end, application and database) or any one specific design can be prepared.
STEP 7: Once the architects prepare the design documents, they are given to the developers and test engineers for understanding.

The IR or CR is given to the developer for reference. With the help of the release notes and design documents, the developer gets a clear idea about developing the application. If the application has to be newly started, a dummy file is added to the Code Repository (COR) and then checked out. If it is a code change in an existing application, the code is checked out as such. Once the developer has finished the coding or completed the modification at the code level, the code is unit and integration tested. The found bugs are fixed and certified at the code level, and the code is then checked in. After the code is reviewed successfully, the build engineer is intimated. The build engineer collects the latest version of all the application code and generates a build. The generated build is deployed in the testing environment, and the testing team is intimated through mail. The build engineer also prepares a document called the build notes. The build notes contain the IRs and CRs of that particular build.

The test engineer thoroughly understands the complete list of IRs/CRs and the design documents. Using those documents, the test engineers prepare the Requirement Traceability Matrix (RTM). In the RTM, the CR ID, CR description and business rules fields are updated. The RTM is thoroughly reviewed and clarified with the clients. Any doubts or assumptions are checked, and the document is frozen. Using this frozen document, test cases and scenarios are prepared. The test plan columns (test case ID, description, test data, expected results) are filled. This document is reviewed and frozen. The test case IDs are updated in the related columns of the RTM. Before the build is deployed in the testing environment, the test engineer has to finish the RTM, test case and scenario preparation. Once the build is deployed in the testing server, the build engineer sends out a mail to the testing team, and the first cycle starts.

STEP 8: The build engineer attaches the build notes along with the mail. The test engineer's conclusions from reading the build notes:
If the build notes contain only CRs, functionality testing has to be done.
If the build notes contain IRs, re-testing has to be done.
If the build notes contain both IRs and CRs, perform both functionality testing and re-testing.
If the application is a web-based application, close the current browser, delete the temp files and cookies, and then open a new browser with the same URL. If the application is an executable one, download the latest build from the location given by the build engineer in the mail to the local system and install it.
STEP 9: The test engineer opens the new build and performs smoke testing. If the smoke testing fails, the test engineer sends out a mail to the development lead and waits until the next build comes in. If the smoke testing passes and all functionalities are working fine, the test engineer performs functionality and/or re-testing based on the build notes. Once the functionality and/or re-testing is completed successfully, the test engineer performs regression testing. Regression testing can be performed either manually or by automation. If any bugs are identified, they are logged in the bug-tracking tool. Once all the testing is completed successfully, the test engineer prepares the necessary reports and the cycle is closed. The number of cycles increases until the client's requirements are met.

STEP 10: If there is scope for non-functional testing, the test engineer deploys the build in the performance environment. If not, the latest build is deployed in the UAT (User Acceptance Testing) environment. The client performs the testing in UAT, and if any bug is identified, the cyclic process repeats: bug logging → bug assigning → bug analysis → check out → bug fixing → unit and integration testing → review → check in → build engineer is informed → get latest version → compilation → deploy in testing server → send a mail to the testing team along with build notes → testing team performs smoke / regression / re-testing → promote the build to performance testing (based on the application) → performance tested → deploy in UAT environment. This cyclic process carries on until the build meets the release requirements. The process has to be completed by the production deployment date. On the production deployment date, the release is opened, and the latest certified build is deployed in the production environment and monitored. The next release process then starts.


BUSINESS RULES: These are the set of rules by which an application works. An application contains n number of functionalities with n number of fields. Those fields have n number of business rules, which can be either positive or negative. Testing completes only when all the business rules for all the fields in the application are validated. Business rules are many in number, so it is not advisable to write test cases referring to them directly, since some items may be missed. Therefore, the test engineer prepares another document before writing the test cases. This document is called the RTM (Requirement Traceability Matrix). The RTM is prepared from the SRS (Software Requirement Specification) document, and the test cases are written from the RTM.
RTM TEMPLATE: The basic RTM template comprises the following fields: Requirement ID, Requirement description, Fields present in the functionality, Business rules, Related test cases, Status, Comments.

Requirement ID: The unique number given for every requirement. This number can be taken from the SRS document.
Requirement description: A two-line description of the requirement. It can also be derived from the SRS document.
Fields present in the functionality: The list of fields present in each functionality.


The details about the fields are taken from the GUI (Graphical User Interface) design document.
Business rules: Every field has a certain set of constraints. This column is the definition of those constraints for the particular field.
Related test cases: After filling in the above fields of the RTM, the test engineers prepare the test cases. After preparing the test cases, the requirements and the test cases are mapped, and the related test cases are filled in the RTM. This can be used to trace the test cases for a particular requirement.
Status: The status of a requirement can be either PASS or FAIL. After writing the test cases, the test engineers execute them on the application. If all the related test cases of a specific requirement are executed successfully, the status is PASS. Even if one test case for a requirement fails, the status of the requirement is FAIL.
Comments: If the test engineer wishes to add any comments, they are updated in this column.
RTM PROCEDURE:
Step 1: First the test engineer goes through the requirements document (RGS) and understands the total requirements / functionalities / change requests / work requests.
Step 2: Then the test engineer studies the GUI design document to know the list of values in that particular CR.
Step 3: The RTM is prepared by referring to the SRS and the GUI design document. For the columns Requirement ID and Requirement description, the details are taken from the SRS. For the Fields column, the details are taken from the GUI design document.
Step 4: For every field present in the application, the 6 constraints in the business rules are derived and implemented. If the derived business rules are already present in the SRS, they are noted down. If not, they are derived, and such items have to be sent to the client.
Step 5: For each of the business rules derived, test data is prepared. Duplication in the test data is removed. The final set of test data is validated and consolidated.

Step 6: With the help of the test data, the test engineers prepare the test cases. The columns test case ID, description, procedure, test data and expected result are filled.
Step 7: The test cases are thoroughly reviewed and frozen. The test case IDs are filled in the related test cases column of the RTM. This helps to find out: a. the number of test cases written for each requirement, and b. whether a test case belongs to the specified requirement.
Step 8: Once the build is received, the test engineer starts executing the test cases on the build, and the actual result column is updated in parallel as the test cases are executed and validated. Update the status column as Pass if the actual result is equal to the expected result; if not, update it as Fail. Repeat this procedure for all test cases.
Step 9: Update the status field in the RTM document. If all the test cases related to a CR in the RTM document pass, enter the CR status in the RTM document as Pass; even if one test case fails, update the status as Fail.
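The roll-up rule in Step 9 (a requirement passes only when every one of its related test cases passes) can be written as a one-line sketch:

```python
def requirement_status(test_case_statuses):
    """Pass only if every related test case passed; Fail otherwise."""
    return "Pass" if all(s == "Pass" for s in test_case_statuses) else "Fail"
```

So a requirement with statuses `["Pass", "Pass"]` rolls up to Pass, while a single failing case makes the whole requirement Fail.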

1. All the BRs for each and every functionality and field are covered in the RTM document.
2. It helps in forward and backward traceability.
FORWARD TRACEABILITY: tracing a functionality to its test cases.
BACKWARD TRACEABILITY: tracing a test case to its functionality.
3. The current status of the testing can be identified easily.
4. The RTM also indicates when testing of a requirement can be stopped.
5. We can easily trace the test cases for a particular BR using the RTM.
6. We can easily identify the number of passed and failed test cases.
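Forward and backward traceability fall out of the same RTM mapping: a requirement-to-test-cases dictionary gives forward traceability directly, and inverting it gives backward traceability. The IDs below are illustrative, not from any real SRS:

```python
# Forward traceability: requirement -> related test cases (as in the RTM).
rtm = {
    "Login1": ["Login_01", "Login_02"],
    "Login2": ["Login_03"],
}

def backward(rtm):
    """Invert the RTM: test case -> requirement (backward traceability)."""
    return {tc: req for req, cases in rtm.items() for tc in cases}
```

With this, looking up `backward(rtm)["Login_02"]` traces the test case back to requirement `Login1`.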


RTM TEMPLATE TABLE:

S.No | Req. ID | Description | Fields | Business rules | Related test cases | Status | Comments
1 | Login1 | User with valid user name and password can access | Username; Password; Ok; Cancel | Username: min of 6 and max of 10 characters. Password: must not have * symbol | | |


Test Cases
Test cases are prepared by the test engineers to check whether the application is working fine or not. The template of the test case document is:
1. Test Case ID
2. Test Case description
3. Test Case procedure
4. Test data
5. Expected results
6. Actual results
7. Status
8. Comments

Test Case ID: The unique identification number of a test case. It is not compulsory to use only numbers in the ID; alphanumeric characters can also be used. Example: Login_01, Login_02, IO_10
Test Case Description:


A two-line description of the test case is called the test case description.

Procedure: This is the complete step-by-step procedure to execute the test case. The author of the test case document need not always be the executor, so a clear procedure has to be written without skipping steps. Even the minute things to be done have to be mentioned clearly.
Test Data: The input data that is given while executing the test case is called the test data. Test data can be generated using different constraints:
1. BVA (Boundary Value Analysis)
2. EQ (Equivalence Partitioning)
3. Null / Not null
4. Mand / Non mand
5. Numeric / Alphanumeric / Special characters
6. Any other constraints

Boundary Value Analysis (BVA): This technique is applied to cases with minimum or maximum values. Example: the user name should be a minimum of 4 characters and a maximum of 10 characters. For the minimum, assume min = m; for the maximum, assume max = n. Six different test cases can be written for this requirement: m-1, m, m+1, n-1, n and n+1. Note: in some cases, only one limit value is given; if so, only 3 test cases can be written.
Equivalence Partitioning (EQ): This technique is also applied to cases with minimum / maximum values. BVA analyzes only the values at and around the boundaries, whereas EQ picks representatives of the partitions away from the boundary values. Example: the user name should be a minimum of 4 characters and a maximum of 10 characters. For the minimum, assume min = m; for the maximum, assume max = n. Three different test cases can be prepared: m-1, (m+3 or n-3) and n+1.
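The BVA and EQ value sets described above can be generated mechanically. The BVA function produces exactly the six cases from the text; for EQ, the in-range representative below is the midpoint, which is an assumption on my part (the text suggests m+3 or n-3 as equally valid picks):

```python
def bva_values(m, n):
    """Boundary Value Analysis: the m-1, m, m+1, n-1, n, n+1 cases."""
    return [m - 1, m, m + 1, n - 1, n, n + 1]

def eq_values(m, n):
    """Equivalence Partitioning: one value below, inside and above the range.
    The in-range pick is the midpoint (an illustrative choice)."""
    return [m - 1, (m + n) // 2, n + 1]
```

For the user-name example with m = 4 and n = 10, BVA yields lengths 3, 4, 5, 9, 10, 11 and EQ yields 3, 7, 11.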


Null / Not null: This technique is used to check whether the field accepts null / not null. Null - 0 or space. Not null - no data. Here, null is a positive test case and not null is a negative test case. Two test cases can be written for this case.
Mand / Non mand (mand = mandatory): Based upon the application, there will be mandatory and non-mandatory fields. If the field is mandatory, two test cases are written: one positive and one negative. If the field is non-mandatory, two test cases are prepared, both positive.
Numeric / Alphanumeric / Special characters: Based upon the field in the application, the corresponding rules have to be applied. Example: the password field in the login page.
Any other constraints: Apart from the above five constraints, the fields of the requirements may have various business rules, such as:
a. Format. Example: date of flight (mm-dd-yyyy).
b. Character sequencing. Example: date of flight, fly from, fly to.
Expected result: This section explains how the application should behave. The expected results are finalized based on the release notes.
Actual result: This section holds the result of executing the test case. After the build is received, the test engineer executes the test case as per the given procedure, and the actual result column is updated based on the result obtained from the application.
Status: Comparing the expected result with the actual result gives the status. It can contain only two values, Pass or Fail. If the actual result is equal to the expected result, the status is Pass; if not, it is Fail.
Comments: If the test engineer wishes to add any comments, they are updated in this column.

TEST CASE TEMPLATE TABLE:

S.No | Test case ID | Description | Procedure | Test data | Expected result | Actual result | Status | Comments
1 | | | | | | | |
2 | | | | | | | |


If the test cases are executed and all pass, the test engineer still cannot conclude that the whole functionality is verified completely. The requirements might work fine when tested individually, but when the functionalities are combined and tested, they may fail. To cover these conditions, scenarios are created and executed. The template of the scenario document contains the following fields:
i. Scenario ID
ii. Description
iii. Procedure
iv. Data
v. Expected results
vi. Actual results
vii. Status
viii. Comments

A collection of one or more functionalities is called a scenario. Clear domain knowledge is required for preparing a scenario. For effective scenario preparation, the test engineer should think from the perspective of the end users.

TEST SCENARIO TEMPLATE TABLE:

S.No | Scenario ID | Description | Procedure | Data | Expected result | Actual result | Status | Comments
1 | | | | | | | |
2 | | | | | | | |

Advantages: In real time, the end user will not be using only a single functionality in a single transaction; he might be using more functionalities in a single shot. If we test only based on the test cases, we might miss testing combinations of functionalities. A combination of functionalities is called a scenario.
Complete process:

i. SRS preparation.
ii. PM plan, development plan, test plan.
iii. GUI design doc, HLD, DB design doc.
iv. Preparation of RTM, test cases and scenarios:
   a. With the help of the SRS and GUI design document, list out the requirements.
   b. Fill in the Requirement ID, description, fields and business rules.
   c. Write the test case document.
   d. Fill in the columns TC ID, description, procedure, preconditions, test data and expected results.
   e. Write the test case IDs corresponding to a particular requirement in the Related test cases field.

After getting the build:
i. Read and analyze the build notes.
ii. There may be three cases: a. new functionality, b. existing functionality, c. bug fixes.
iii. Identify the test case IDs.
iv. Execute the test cases.
v. Update the test case document with Pass or Fail in the status column.
vi. If all the test cases related to a particular requirement pass, update the status column with Pass; otherwise Fail.
vii. If any bugs are identified, log them in the bug-tracking tool.
viii. Repeat the process with each new build until it is certified.



BUILD: On intimation from the developer, the build engineer gets the latest version of all programs from the code repository, compiles them and generates the executable. This executable is called a build.
CODE REPOSITORY: The developers maintain all the programs in a centralized server called the Code Repository. If a program is already present in the Code Repository, the developer checks it out, makes the necessary changes based upon the IR and CR, and checks the code back in. Whenever a check-out and check-in happens, a new version is created. The developers prepare the unit and integration test plan, along with the test cases, using the release notes.
PATCH: The developers check out, modify the code and check it in, and the build engineer is intimated by raising a patch. Patches can be raised using a tool in which the developer logs in and opens the home page. The details to be given are:
1. The programs that are checked in.
2. The version to be considered for every program.
3. The changes related to the CR or IR number.
4. Click on the submit button.
When the submit button is clicked, two activities happen: a unique patch number is generated (this patch number helps later to find out what is present in the build), and an email is sent to the build engineer with all the details.
Planned Build: The build engineer collects all the mails, and on a planned date gets the versions of the programs, compiles them and generates the build. If the build fails due to a compilation error, the development team is intimated along with the error message through a mail, and the developer analyzes the failure.


Unplanned Build: An unplanned build happens in situations like:
- Build failures.
- Showstopper bugs identified in the testing.
- Anything important that has to be tested immediately.
Problems with unplanned builds: Testing is impacted, so a new build has to be generated again; the developers have to rework; the testers have to rework; a high amount of effort is wasted. Due to these factors, generating a build without planning has to be avoided.
BUILD GENERATION AND DEPLOYMENT: After the build is successfully generated, it is deployed into the DIT (Development Integration Test) environment. The build engineer prepares a document called the build notes. After deploying the build and generating the build notes, the build engineer composes a mail with the build notes, build number, location of the deployed build and URL (web page), and sends it to the testing team. On receiving the mail, the test engineers know that the build is deployed into their environment, and they start testing. To start testing, the test engineer has to read and understand the build notes. If the build notes contain CRs, the test engineer performs functionality testing. If the build notes contain IRs, the test engineer performs re-testing. By default, the test engineer has to perform smoke testing first and regression testing last for every build. During this test cycle, if any bugs are identified in the above testing, they are entered into the bug-tracking tool. Functionality testing, re-testing and regression testing can be done only when the smoke testing passes. If the smoke testing fails, the developer is intimated. The developer analyzes → checks out → modifies → checks in → raises a patch → sends to the build engineer → the build is generated and deployed → the testing team is intimated to start testing.
BUILD PROMOTION: If the build tested in one environment passes, the build is moved to the next environment. No new build is added during promotion.
The build that is certified in a particular environment will be deployed in the next environment along with the build notes.


DIT ENVIRONMENT (Development Integration Testing): A bug identified in the DIT environment → bug logging/reporting in the bug-tracking tool → assigning to a developer → bug analysis → check out → raise patch → collect all patches till the planned date → generate the build on the planned date → deploy to the DIT environment → generate build notes → send mail → read build notes → define testing scope → start testing → identify bugs → promote to the next environment.

SIT ENVIRONMENT (System Integration Testing): The build is deployed to SIT by the build engineer → prepare build notes → send mail → read build notes → start testing. If a bug is identified, it is logged into the bug-tracking tool and the complete cyclic process is repeated.
UAT (User Acceptance Testing): The code is promoted to UAT → the build is deployed into UAT by the build engineer → prepare build notes → send mail → read build notes → start testing. If a bug is identified, the complete cyclic process is repeated. If there is no bug, the build is certified and promoted to the production environment.
PRODUCTION ENVIRONMENT: Deploy the latest UAT-certified build into the production environment. Allow the end users to access the product and perform live transactions using live data.
Note: No code enters the DIT environment without certification from the developer. No code enters the SIT environment without certification from DIT. No code enters the UAT environment without certification from SIT.
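The promotion rule above (a build moves DIT → SIT → UAT → Production only after being certified in the previous environment) can be sketched as follows. The environment names come from the text; the function itself is an illustrative assumption:

```python
ENVIRONMENTS = ["DIT", "SIT", "UAT", "Production"]

def promote(current_env, certified):
    """Return the next environment, refusing to promote an uncertified build."""
    if not certified:
        raise ValueError(f"build not certified in {current_env}")
    i = ENVIRONMENTS.index(current_env)
    if i == len(ENVIRONMENTS) - 1:
        return current_env                 # already in production
    return ENVIRONMENTS[i + 1]
```

So `promote("DIT", True)` yields `"SIT"`, while promoting an uncertified build raises an error, matching the note that no code enters an environment without certification from the previous one.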


To produce a quality product, certain procedures and processes have to be followed in every step of the SDLC (Software Development Life Cycle). To ensure the quality of a product, it is thoroughly tested, and various other processes like bug tracking, RTM, scenario and test case preparation, and build generation and deployment are followed. Testing is not started after the coding phase; it starts at an early stage of the SDLC. Testing is broadly classified into 2 categories: static testing and dynamic testing.
Static testing: It is the process of checking whether the documents prepared in the SDLC process meet the client's requirements.

Static testing is performed in the form of a REVIEW. If static testing is not done, the mistakes in a document move into the application and cause a high impact. So, it is better to perform static testing on all the documents prepared in the SDLC. All the documents have to be thoroughly reviewed before being given as input to the next phase.

Review: It is the process of re-checking all the reports once again and identifying the missing things. A review has to be carried out on every deliverable before delivery, whether it is a document, code or anything else. Review is done at 4 levels:
1. Self review
2. Peer review
3. Lead / manager review
4. Client review
Self review:

This is the process of verifying the document by the author him/herself to find out the missing parts in it. After the self review, the review comments are updated and the document is sent for peer review.
Peer review: A peer, with a different perspective, reviews the self-reviewed document, updates the review comments and sends it back to the author. The author validates the review comments and, if valid, updates the document. The updated document is sent to the peer again for review. This cycle is repeated until all the comments are implemented.
Lead / Manager review: The peer-reviewed document is then sent to the lead / manager. The lead / manager reviews the document and updates the comments; the document is sent back to the author; the author validates the comments, implements them in the document, self-reviews and peer-reviews it, and sends it again to the lead / manager. This cycle continues until all the lead-level comments are implemented in the document.
Client review: Client review is not mandatory for all reports; only the documents that are part of the delivery, like release notes, plans, front-end design and release closure reports, are reviewed by the client. After the client's review, the author implements the comments and the document is frozen.
Dynamic testing: It is the process of testing the application to verify that it meets the client's requirements. Dynamic testing is done in the form of functional and non-functional testing.
QA and QC team: This complete process has to be done for every report (whether macro or micro) generated in the SDLC process. The QA team analyzes the complete project and defines the correct quality management process for the project. That project is then handed over to the QC team. The QC team (which comprises all the people in the project life cycle) follows that process to develop the project.
Verification and Validation: VERIFICATION is a procedural way of checking done by the QA team on the QC team.
The QA team periodically checks whether the QC team implements the predefined process. This kind of checking is called INSPECTION. Before starting an inspection, the QA team sends out a notification to the QC team about the part that is going to be inspected. The whole inspection process is triggered, monitored and then closed by a QA manager, who is called the MODERATOR. Based on the part to be inspected, the person who maintains it is referred to as the OWNER. VALIDATION is done by the QC team to confirm whether the product meets the client's requirements. Validation is done in the form of static and dynamic testing.


CASE 1: VERIFICATION - FAIL, VALIDATION - FAIL
The product is developed without following the pre-defined process and also does not meet the client's requirements. The client will not accept this kind of product.

CASE 2: VERIFICATION - FAIL, VALIDATION - PASS

The product has been developed without following the pre-defined process but meets the client's requirements. This kind of product can be delivered but may cause issues in the longer run, which may make the client unhappy.

CASE 3: VERIFICATION - PASS, VALIDATION - FAIL

The product has been developed by following the process but does not meet the client's requirements. The client will not accept this kind of product.

CASE 4: VERIFICATION - PASS, VALIDATION - PASS

The product has been developed by following the process and also meets the client's requirements. This product can be delivered and will work fine in the longer run, thus increasing the client's satisfaction.

INSPECTION: Inspection is a process in which the QA team checks whether the QC team follows the pre-defined process or not. If an inspection is planned, the QA team notifies the QC team about which part is going to be inspected. The whole inspection process is planned, monitored and finally closed by a QA manager. During the inspection session, the manager is called the MODERATOR. Based on the part to be inspected, the person who maintains it is referred to as the OWNER. Example: if the design process is inspected, the owner is the architect; if the test case, RTM or scenario generation process is inspected, the test lead is the owner; if the whole project is inspected, the project manager takes the owner role. One person from the QA team is assigned by the moderator to verify the process, and that person is called the REVIEWER. The documents / reports that have to be inspected in the process are listed down: a person from the QA team called the READER prepares a list called the CHECKLIST, which contains the documents / reports that have to be inspected. The checklist is thoroughly reviewed and approved by the moderator and then sent to the owner. On the inspection day, a person called the SCRIBE notes down all the comments.

PROCEDURE FOR INSPECTION:
STEP 1: The moderator picks a process for inspection and, based on the process, the owner / reader / scribe is identified.
STEP 2: The reader prepares the checklist based on the process. This checklist is thoroughly reviewed and approved by the moderator. The approved checklist is sent to the owner and the reviewer. By going through the checklist, the owner gets a clear idea about the items to be reviewed and keeps them ready, and the reviewer understands the list of items that have to be reviewed.
STEP 3: The moderator consults the owner, reviewer, reader and scribe to fix a schedule for the inspection. Once a date is fixed, the message is passed to all the necessary people.
STEP 4: On the inspection day, the reader reads out the items one by one from the checklist and the owner shows them to the reviewer.
After reviewing, if an item meets the process standards, the scribe notes it as positive; otherwise it is noted as a non-compliance. This process is carried on till all the items in the checklist are verified.
STEP 5: After the inspection, the scribe consolidates the complete list of non-compliances and sends it to the moderator. The moderator fixes deadlines for all the non-compliances and sends them to the owner. The owner has to fix all the non-compliances in the necessary process


and report back to the reviewer. The reviewer verifies the process. If it passes, a positive reply is sent to the moderator; otherwise it is intimated to the owner again. Once the moderator receives the positive reply, the inspection is closed.

STEP 6: If any process is well implemented or creatively done, it is noted under BEST PRACTICES. If the owner has any feedback / modifications on the particular process, it is noted under SUGGESTIONS. If the process has any suggestions, the QA team validates them and, if valid, the process is fine tuned accordingly and handed back to the owner / QC team.

METRICS: Metrics are used for measuring the efficiency of the process defined.
PROCEDURE:
STEP 1: The QA team defines a process. The team also defines how the process has to be measured and what values have to be achieved.
STEP 2: The defined process, the metrics and the expected results are handed over to the QC team.
STEP 3: The QC team implements the process to develop the product.
STEP 4: At the end of the release, the QC team uses the metric formulas and calculates the actual results.
STEP 5: The actual results are sent to the QA team. They compare the expected values with the actual values and check whether the expectations are met or not.
STEP 6: If they are met, the process is fine tuned and the expected values of the metrics are raised. If not, the process is re-defined and handed back to the QC team.

Examples of metrics:
1. Test Efficiency (TE) or Defect Removal Efficiency (DRE):

   DRE (or TE) = Number of bugs identified in the pre-production environment / Total number of bugs

   where Total number of bugs = Number of bugs identified in the pre-production environment + Number of bugs identified in the production environment.

2. Test case generation efficiency = Number of test cases written / Number of resources

CONFIGURATION MANAGEMENT: It is a procedural way to keep the project related records and items safe and secure. The project related records might consist of code, executables, documents, scripts etc. The QA (Quality Assurance) team defines the configuration management process. It is then handed over to the Configuration Controller (CC) in the QC (Quality Control) team. In the absence of the CC, the Back up Configuration Controller (BCC) takes the responsibility.
PROCEDURE:
Step 1: Identify all the reports (documents, code and exe files) of the project that have to be maintained in the configuration management process.
Step 2: Based on the records to be maintained, the CC identifies the storage space requirement. The request for storage space is submitted to the manager. If the request is valid, the manager approves it and sends the request to the networking team. The networking team allocates a specified drive for configuration management. This drive is called the PROJECT VOLUME (PV).
Step 3: The documents are organized in a specific pattern based on the release, the document type, the document change frequency and so on. The method of organizing must be convenient for fast retrieval of data.
Step 4: Folders are created in a tree structure format and named appropriately. The documents are added into the specific folders.
Step 5:

An index document can be prepared listing the records inside the folders, with the path of each folder mentioned in it. It makes it easier to find the documents inside the folders in the project volume.
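A minimal sketch of how such an index document could be generated, assuming the project volume is an ordinary directory tree (the path in the usage comment is hypothetical):

```python
# A sketch of generating an index document for the project volume (PV).
import os

def build_index(project_volume: str) -> list:
    """Walk the PV and list each record together with its folder path."""
    entries = []
    for folder, _subfolders, files in os.walk(project_volume):
        for name in sorted(files):
            entries.append(f"{name} -> {os.path.join(folder, name)}")
    return entries

# Usage (hypothetical PV path):
#   for line in build_index("/projects/my_project"):
#       print(line)
```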

Step 6: The necessary authentication permissions have to be created for all the users in the project team. The privileges may be: read only; modify records in the PV; add / delete records in the PV. The privileges are given to the users based on their designation.
Step 7: If a document or record is very important, it has to be maintained under version control. To maintain such documents, tools like VSS (Visual SourceSafe), CVS (Concurrent Versions System) etc. are used.
Step 8: Back up is a process of copying the entire data from the project volume to another drive. This can be used to retrieve a document when there is a document corruption or a hardware crash. The back up is done by the LAN (Local Area Network) team every day at midnight. The project volume is scanned periodically for virus threats. The CC has to monitor all of this.
Step 9: Archival is the process of removing the old, unused records from the PV and safeguarding them in a separate drive.

Back up vs Archival:

Back up: It is a non-intelligent process. Back up is the process of copying the entire files / folders from the project volume to another drive (copy and paste). The LAN team decides this process.

Archival: It is an intelligent process. Archival is the process of moving out less frequently used files (cut and paste). The CC decides this process.
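The copy-versus-move distinction can be sketched as follows; the function names and arguments are illustrative, not part of any prescribed tool:

```python
# Back up copies records and leaves the originals in place; archival moves
# them out of the project volume. Names here are hypothetical examples.
import shutil

def back_up(project_volume: str, backup_drive: str) -> None:
    """Back up: non-intelligent copy of the entire project volume."""
    shutil.copytree(project_volume, backup_drive, dirs_exist_ok=True)

def archive(old_record: str, archive_drive: str) -> None:
    """Archival: cut and paste a rarely used record out of the PV."""
    shutil.move(old_record, archive_drive)
```

After `back_up`, the record exists in both places; after `archive`, it exists only in the archive drive, which is exactly the copy-paste versus cut-paste difference described above.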

The whole process is documented by the CC, and this document is called the CM (Configuration Management) plan.


Testing is the process of executing a program with the intent of finding errors. It is also the process of checking whether the customer requirements are met or not. Testing is broadly classified into 2 categories:
1. FUNCTIONAL TESTING
2. NON-FUNCTIONAL TESTING

FUNCTIONAL TESTING: This type of testing validates whether all the functionalities in the application are working fine or not. It concentrates fully on the functionality of the application. The various types of functional testing are:
Smoke testing
Functionality testing
Regression testing
Retesting

SMOKE TESTING:
Definition: This testing is mainly done to validate the readiness of the build. It ensures whether the product is ready for testing or not.
Entry criteria: Once the developer completes unit and integration testing at the code level, the build is generated and deployed in the testing environment.
Process: Once the build is generated and deployed in the testing environment, the test engineer starts executing the smoke test cases.


If all the smoke test cases pass, the test engineer proceeds with further testing.

Exit criteria: When all the test cases for smoke testing pass.
Performed by: Test engineers.
Automation: Required.
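A smoke suite can be sketched as a list of readiness checks that must all pass before further testing begins; the two checks below are hypothetical stand-ins for real probes of the deployed build:

```python
# Two hypothetical readiness checks; a real smoke suite would probe the
# deployed build (key screens, database connectivity, and so on).

def check_login_page_loads() -> bool:
    return True  # stand-in: a real check would request the login page

def check_database_reachable() -> bool:
    return True  # stand-in: a real check would open a connection

SMOKE_CHECKS = [check_login_page_loads, check_database_reachable]

def run_smoke_suite() -> bool:
    """The build is ready for further testing only if every check passes."""
    return all(check() for check in SMOKE_CHECKS)
```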

FUNCTIONALITY TESTING:
Definition: This testing validates the new functionalities or the modified existing functionalities in the application.
Entry criteria: Successful completion of smoke testing.
Process: Once smoke testing is completed successfully, the test engineer executes the functionality test cases on the new / existing functionalities of the application.
Exit criteria: The functionality test cases for the new / existing functionalities should pass.
Performed by: Test engineers.
Automation: Not required.

RETESTING:
Definition: This testing validates the bug fixes in the application.
Entry criteria: Successful completion of smoke and functionality testing.
Process: The test engineer understands the bug description and re-executes the failed test cases on the new build.

Exit criteria: All the test cases related to the specific bug fixes have to pass.
Performed by: Test engineers.
Automation: Not required.

REGRESSION TESTING:


Definition: This testing validates whether the code changes made for the bug fixes impact the new / existing functionalities.
Entry criteria: Successful completion of functionality testing and retesting.
Process: The test engineer first executes all the test cases of the bug fixes in the previous build and confirms whether they pass. If they pass, the functional testing is re-executed from scratch.
Exit criteria: No exit criteria.
Performed by: Test engineers.
Automation: Mandatory.

NON-FUNCTIONAL TESTING: This testing concentrates on parallel transactions, memory utilization, queue management, system resource sharing etc. Non-functional testing validates whether the system is able to allocate and de-allocate the resources for multiple transactions done in parallel. Non-functional testing is broadly classified into:
Load testing
Stress testing
Soak testing
Peak testing
Performance testing
Volume testing
Cluster testing

LOAD TESTING:
Definition: This testing is the process of validating the system's stability up to the customer promised load.
Entry criteria: Successful completion of functional testing and pre-system testing.


Process: Perform one transaction and check whether the request receives a response. Increase the number of parallel transactions and check whether all the requests receive a response. Keep increasing the parallel load till the customer specified load (CSV) is reached.
Exit criteria: When every request receives a response.
Performed by: Test engineers.
Automation: Required.

STRESS TESTING:
Definition: This testing is the process of identifying the maximum load, or stability value, of the system.
Entry criteria: Successful completion of load testing.
Process: Provide a parallel load equal to the CSV to the server and check whether responses are received. Keep increasing the parallel load by a specific factor till the system completely crashes.
Exit criteria: When the system has met its maximum load.
Performed by: Test engineers.
Automation: Required.

SOAK TESTING:
Definition: This is the process of testing the system's stability with a medium amount of load for a prolonged time interval.
Entry criteria: Successful completion of load and stress testing.
Process: Identify a load greater than the CSV and less than the stress value. Apply this load on the system continuously for a long period of time.
Exit criteria: Till the system stays stable within the time interval.
Performed by: Test engineers.


Automation: Required.

PEAK TESTING:
Definition: This is the process of identifying the system's stability under a sudden increase of load.
Entry criteria: Successful completion of load and stress testing.
Process: Define a very steep load graph between the CSV and the stress value. Apply the steep increase in load and validate the system's working.
Exit criteria: Until the system stays stable within the time limit.
Performed by: Test engineers.
Automation: Required.

PERFORMANCE TESTING:
Definition: This is the process of validating system performance factors like:
Response time
CPU utilization
Memory utilization
Network utilization
Resource utilization

Entry criteria: It is done in parallel with load and stress testing.
Process: Test whether the system achieves stability within the CSV, against the limits defined by the customer.

Exit criteria: When the system has achieved stability within the response time.
Performed by: Test engineers.
Automation: Required.

VOLUME TESTING:
Definition: This is the process of validating the system's stability when dealing with a high volume of data and data transfer over the network.
Entry criteria: Successful completion of all types of functional testing.

Process: Test the server with a huge volume of data transfer.
Exit criteria: Till the system handles a high amount of data transfer.
Performed by: Test engineers.
Automation: Required.

CLUSTER TESTING:
Definition: This testing validates all the components added in the network. A cluster is a group of servers that helps the application with:
Load balancing
Back up

When a cluster is present, extra components like gateway servers and load balancers are deployed.
Performed by: Test engineers.
Automation: Required.
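The ramp-up procedure described under load testing above can be sketched as below. `send_request` and the CSV figure are hypothetical stand-ins; a real test would fire actual transactions at the system under test.

```python
# A minimal load-ramp sketch: start with 1 parallel transaction and double
# the load until the customer specified load (CSV) is reached, checking
# that every request receives a response at each step.
from concurrent.futures import ThreadPoolExecutor

def send_request(i: int) -> bool:
    """Stand-in transaction: always answered here; a real one hits the server."""
    return True

def run_load_test(csv: int) -> bool:
    """Return True if every request is answered up to the CSV."""
    load = 1
    while True:
        with ThreadPoolExecutor(max_workers=load) as pool:
            responses = list(pool.map(send_request, range(load)))
        if not all(responses):
            return False           # a request went unanswered before the CSV
        if load == csv:
            return True            # stable up to the promised load
        load = min(load * 2, csv)  # increase the parallel load
```

Stress testing would keep increasing `load` past the CSV until the system fails instead of stopping at the promised load.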