
Some of the interview questions follow.

Ques: What are Scenarios?
Ans: Scenarios encapsulate the Vuser groups and scripts to be executed on load generators at runtime. Manual scenarios can distribute the total number of Vusers among scripts based on an analyst-specified percentage, evenly among load generators. Goal-oriented scenarios are created automatically based on a specified transaction response time or number of hits/transactions per second (TPS); test analysts specify the target percentage among scripts.

Ques: 43 How many types of recording facility are available in QuickTest Professional (QTP)?
Ans: QTP provides three types of recording methods:
> Context Recording (Normal)
> Analog Recording
> Low Level Recording

Ques: 44 What's the difference between STATIC TESTING and DYNAMIC TESTING?
Ans:
> Dynamic testing: requires the program to be executed. The program is run on some test cases, and the results of the program's performance are examined to check whether the program operated as expected.
> Static testing: does not involve program execution, e.g. compiler tasks such as syntax and type checking, symbolic execution, program proving, data flow analysis, and control flow analysis.

Ques: How do you debug a LoadRunner script?
Ans: VuGen contains two options to help debug Vuser scripts: the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within the script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.

Ques: What is the Bug Life Cycle?
Ans: The Bug Life Cycle is basically defined as:
> New: when the tester reports a defect.
> Open: when the developer accepts that it is a bug; if the developer rejects the defect, the status is changed to "Rejected".
> Fixed: when the developer makes changes to the code to rectify the bug.
> Closed/Reopen: when the tester tests it again. If the expected result shows up, the status is changed to "Closed"; if the problem persists, it becomes "Reopen".

Ques: Verification and validation?
Ans: The definitions, in brief:
> Verification: it is static; no code is executed. Say, analysis of requirements, etc.
> Validation: it is dynamic; code is executed with the scenarios present in the test cases.

Ques: Advantages of automation over manual testing?
Ans: Automation has many advantages over manual testing:
> Time saving,
> Resource saving, and
> Money saving.

Ques: 1 How to run a test using QuickTest Professional (QTP)?
Ans: Start running the test. Click Run or choose Test > Run; the Run dialog box opens. Select New run results folder and accept the default results folder name. Click OK to close the Run dialog box.

Ques: 2 How to save your test using QuickTest Professional (QTP)?
Ans: Select File > Save or click the Save button.
> The Save dialog box opens to the Tests folder.
> Create the folder you want to save to, select it, and click Open.
> Type your test name in the File name field.
> Confirm that Save Active Screen files is selected, then click Save.
> Your test name is displayed in the title bar of the main QuickTest window.

Ques: 3 How to insert a checkpoint on an image to check the enabled property in QTP?
Ans: If the images all act as push buttons, we can check the enabled or disabled property. If we are not able to find that property, go to the object repository for that object and click Add/Remove to add the available properties to that object. If we treat it as an image, we need to check the visible/invisible property instead; that might help, as there are no enable or disable properties for the image object. (See the sketch below.)
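A minimal sketch of that visible-property check; the object names ("MyApp", "Home", "imgSubmit") are hypothetical, and the 5-second Exist timeout is only an example:

If Browser("MyApp").Page("Home").Image("imgSubmit").Exist(5) Then
    isVisible = Browser("MyApp").Page("Home").Image("imgSubmit").GetROProperty("visible")
    If isVisible Then
        Reporter.ReportEvent micPass, "Image check", "Image is visible"
    Else
        Reporter.ReportEvent micFail, "Image check", "Image is not visible"
    End If
End If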

Ques: 4 How to start recording using QuickTest Professional (QTP)?
Ans:
> Choose Record or click the Record button. When the Record and Run Settings dialog box opens, do this:
> In the Web tab, select Open the following browser when a record or run session begins.
> In the Windows Applications tab, confirm that Record and run on these applications (opened on session start) is selected, and that there are no applications listed.

Ques: 5 How to test for memory leakage manually?
Ans: There are tools to check this. Compuware DevPartner can help test an application for memory leaks if the application is complex. Also, depending upon the OS on which we need to check for memory leaks, we need to select the tool.

Ques: 6 What's the QuickTest Professional (QTP) testing process?
Ans: The QTP testing process consists of seven steps:
* Preparing to record
* Recording
* Enhancing the script
* Debugging
* Run
* Analyze
* Report Defects

Ques: 7 How to choose which defects to remove among 1,000,000 defects? (Because it would take too many resources to remove them all.)
Ans: Are you the programmer who has to fix them, the project manager who has to supervise the programmers, the change control team that decides which areas are too high risk to impact, the stakeholder-user whose organization pays for the damage caused by the defects, or the tester? The tester does not choose which defects to fix. The tester helps ensure that the people who do choose make a well-informed choice. Testers should provide data to indicate the *severity* of bugs, but the project manager or the development team do the prioritization. When I say "indicate the severity", I don't just mean writing S3 on a piece of paper. Test groups often do follow-up tests to assess how serious a failure is and how broad the range of failure-triggering conditions is. Priority depends on a wide range of factors, including code-change risk, difficulty/time to complete the change, which stakeholders are affected by the bug, the other commitments being handled by the person most knowledgeable about fixing a certain bug, etc. Many of these factors are not within the knowledge of most test groups. We surely can prioritize defects once they are detected. In our organization we assign a severity level to defects depending upon their influence on other parts of the product. If a defect does not allow you to go ahead and test the product, it is a critical one, so it has to be fixed ASAP. We have 5 levels:
- Critical
- High
- Medium
- Low
- Cosmetic

Ques: 8 Is regression testing performed manually?
Ans: If the initial testing approach was manual testing, then the regression testing is usually performed manually. Conversely, if the initial testing approach was automated testing, then the regression testing is usually performed by automated testing.

Ques: 9 How to find all the bugs during the first round of testing?
Ans: We understand the problems we are facing. I was involved with a web-based HR system that was encountering the same problems. What I ended up doing was going back over a few release cycles and analyzing the types of defects found and when (in the release cycle, including the various testing cycles) they were found. I started to notice a distinct trend in certain areas. For each defect type, I started looking into whether it could have been caught in the prior phase (lots of things were being found in the system test phase that should have been caught earlier). If so, why wasn't it caught? Could it have been caught even earlier (say, via a peer review)? If so, why not? This led me to start examining the various processes, and I found a definite problem with peer reviews (not very thorough, if they were even being done) and with the testing process (not rigorous enough). We worked with the customer and the folks doing the testing to start educating them and improving the processes. The result was that the number of defects found in the latter test stages (system test, for example) was cut by over half! It was getting harder to find problems with the product because they were being discovered earlier in the process, saving time and money!

Ques: 10 How to remove a description from the collection?
Ans: obj_ChkDesc.Remove "html tag" would delete the "html tag" property from the collection.

Ques: 11 How do I check whether a property exists or not in the collection?
Ans: The answer is that it's not possible directly, because whenever we try to access a property which is not defined, it is automatically added to the collection. The only way to determine this is to check its value, that is, use an If statement: If obj_ChkDesc("html tag").Value = Empty Then. (See the sketch below.)
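A short sketch combining the two answers above, assuming the collection was created with QTP's Description.Create; the property values are examples only:

Set obj_ChkDesc = Description.Create()
obj_ChkDesc("html tag").Value = "INPUT"        ' define a property in the collection
obj_ChkDesc("name").Value = "txt_Name"
obj_ChkDesc.Remove "html tag"                  ' deletes the "html tag" property again
If obj_ChkDesc("html tag").Value = Empty Then  ' note: this very access re-adds the property
    MsgBox "Property was absent (or blank)"
End If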

Ques: 12 What is the difference between version and release?
Ans: Both version and release indicate particular points in the software development life cycle, or in the life cycle of a document. The two terms are similar, i.e. pretty much the same thing, but there are minor differences between them.
1: Version means a variation of an earlier or original type. For example, you might say, "I've downloaded the latest version of XYZ software from the Internet. The version number of this software is _____".
2: Release is the act or instance of issuing something for publication, use, or distribution; a release is something thus released. For example, "Microsoft has just released their brand new gaming software known as _______".

Ques: 13 Why testing CANNOT ensure quality?
Ans: Testing in itself cannot ensure the quality of software. All testing can do is give you a certain level of assurance (confidence) in the software. On its own, the only thing that testing proves is that, under specific controlled conditions, the software functioned as expected by the test cases executed.

Ques: 14 How to write test cases for a login screen?
Ans: The format for the test cases could be something like this:
> test cases for the GUI,
> +ve test cases for login,
> -ve test cases for login.
In the -ve scenario we should include boundary value analysis and equivalence classes to create test cases, plus cross-site scripting and SQL injection; SQL injection is especially high-risk for login pages.

Ques: 15 What is the checklist for credit card testing?
Ans: In credit card testing the following validations are considered:
> Testing the 4-DBC (digit batch code) for its uniqueness (present on the right corner of the credit card),
> The message formats in which the data is sent,
> LUHN testing,
> Network response,
> Terminal validations.

Ques: 16 How to browse through all the properties of a Properties collection?
Ans: There are two ways.
1st:
For Each desc In obj_ChkDesc
    Name = desc.Name
    Value = desc.Value
    RE = desc.RegularExpression
Next
2nd:
For i = 0 To obj_ChkDesc.Count - 1
    Name = obj_ChkDesc(i).Name
    Value = obj_ChkDesc(i).Value
    RE = obj_ChkDesc(i).RegularExpression
Next
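One common use of such a Properties collection, not shown in the answer above, is passing it to ChildObjects to enumerate all matching objects on a page; a hedged sketch, with hypothetical browser and page names:

Set oDesc = Description.Create()
oDesc("html tag").Value = "INPUT"
Set colInputs = Browser("MyApp").Page("Home").ChildObjects(oDesc)
For i = 0 To colInputs.Count - 1
    Reporter.ReportEvent micDone, "Found", colInputs(i).GetROProperty("name")
Next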

Ques: 17 What is a software testing methodology?
Ans: One software testing methodology is the use of a three-step process of:
> Creating a test strategy;
> Creating a test plan/design; and
> Executing tests.
This methodology can be used and molded to our organization's needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his clients' applications.

Ques: 18 How many types of parameters are available in QuickTest Professional (QTP)?
Ans: QTP provides three types of parameters:
> Method Argument,
> Data Driven,
> Dynamic.

Ques: 19 What black box testing types can you tell me about?
Ans: Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box testing is based on requirements and functionality.
> Functional testing is a black box type of testing geared to the functional requirements of an application.
> System testing is also a black box type of testing.
> Acceptance testing is also a black box type of testing.
> Closed box testing is also a black box type of testing.
> Integration testing is also a black box type of testing.

Ques: 20 Why do we perform data integrity testing?
Ans: Because we want to verify the completeness, soundness, and wholeness of the stored data. Testing should be performed on a regular basis, because important data can and will change over time.

Ques: 21 How do you test data integrity?
Ans: Data integrity is tested by the following tests:
> Verify that we can create, modify, and delete any data in tables.
> Verify that sets of radio buttons represent fixed sets of values.
> Verify that a blank value can be retrieved from the database.
> Verify that, when a particular set of data is saved to the database, each value gets saved fully, and truncation of strings and rounding of numeric values do not occur.
> Verify that the default values are saved in the database if the user input is not specified.
> Verify compatibility with old data, old hardware, old versions of operating systems, and interfaces with other software.
(A hedged sketch of one such check follows.)
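As an illustration of the "value gets saved fully" check above, scripted from QTP's VBScript using ADO; the connection string, table, and column names are all hypothetical:

Set conn = CreateObject("ADODB.Connection")
conn.Open "Provider=SQLOLEDB;Data Source=DBSERVER;Initial Catalog=AppDB;Integrated Security=SSPI"
expected = "A deliberately long remark used to detect silent truncation on save"
Set rs = conn.Execute("SELECT Remarks FROM Orders WHERE OrderID = 1001")
If Not rs.EOF Then
    If rs("Remarks") = expected Then
        Reporter.ReportEvent micPass, "Integrity", "Value was saved fully"
    Else
        Reporter.ReportEvent micFail, "Integrity", "Value was truncated or altered"
    End If
End If
rs.Close
conn.Close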

Ques: 22 How to describe an object directly by giving the description in the form of string arguments?
Ans: We can describe an object directly in a statement by specifying property:=value pairs describing the object, instead of specifying an object's name. The general syntax is:
> TestObject("PropertyName1:=PropertyValue1", "..." , "PropertyNameX:=PropertyValueX")
> TestObject: the test object class, which could be WebEdit, WebRadioGroup, etc.
> PropertyName:=PropertyValue: the test object property and its value. Each property:=value pair should be separated by commas and quotation marks.
Note that we can enter a variable name as the property value if we want to find an object based on property values we retrieve during a run session. Consider the HTML code given below:
<INPUT type=textbox name=txt_Name>
<INPUT type=radio name=txt_Name>
Now, to refer to the textbox, the statement would be as given below:
Browser("Browser").Page("Page").WebEdit("Name:=txt_Name", "html tag:=INPUT").Set "Test"
And to refer to the radio button, the statement would be as given below (Select, rather than Set, is the WebRadioGroup method):
Browser("Browser").Page("Page").WebRadioGroup("Name:=txt_Name", "html tag:=INPUT").Select "Test"
If we refer to them as WebElements, then we will have to distinguish between the two using the Index property:
Browser("Browser").Page("Page").WebElement("Name:=txt_Name", "html tag:=INPUT", "Index:=0").Click    ' refers to the textbox
Browser("Browser").Page("Page").WebElement("Name:=txt_Name", "html tag:=INPUT", "Index:=1").Click    ' refers to the radio button

Ques: 23 What is the difference between data validity and data integrity?
Ans: There are many differences:
1: Data validity is about the correctness and reasonableness of data, while data integrity is about the completeness, soundness, and wholeness of the data that also complies with the intention of the creators of the data.
2: Data validity errors are more common, and data integrity errors are less common.
3: Errors in data validity are caused by human beings - usually data entry personnel - who enter, for example, 13/25/2010 by mistake, while errors in data integrity are caused by bugs in computer programs that, for example, cause the overwriting of some of the data in the database when somebody attempts to retrieve a blank value from the database.

Ques: 24 What is the difference between static and dynamic testing?
Ans: There are many differences:
1: Static testing is about prevention; dynamic testing is about cure.
2: The static tools offer greater marginal benefits.
3: Static testing is many times more cost-effective than dynamic testing.
4: Static testing beats dynamic testing by a wide margin.
5: Static testing is more effective!
6: Static testing gives you comprehensive diagnostics for your code.
7: Static testing achieves 100% statement coverage in a relatively short time, while dynamic testing often achieves less than 50% statement coverage, because dynamic testing finds bugs only in parts of the code that are actually executed.
8: Dynamic testing usually takes longer than static testing. Dynamic testing may involve running several test cases, each of which may take longer than compilation.
9: Dynamic testing finds fewer bugs than static testing.
10: Static testing can be done before compilation, while dynamic testing can take place only after compilation and linking.
11: Static testing can find all of the following that dynamic testing cannot: syntax errors, code that is hard to maintain, code that is hard to test, code that does not conform to coding standards, and ANSI violations.

Ques: 25 How can I save the changes to my DataTable in the test itself?
Ans: QTP does not allow saving run-time changes back to the actual design-time data sheet. The only workaround is to share the spreadsheet and then access it using the Excel COM APIs. (A sketch of a partial alternative follows.)
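A partial alternative, using only the DataTable.Export method described under Ques: 26 below: persist the run-time table to a separate file at the end of the run; the path is hypothetical:

DataTable("Result", dtGlobalSheet) = "Pass"        ' this change lives only in the run-time table
DataTable.Export "C:\RunResults\runtime_data.xls"  ' save the run-time copy to disk for later reuse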

Ques: 26 Give us a QuickTest Professional (QTP) 8.2 tip (1)?
Ans: Data Table: there are two types of data tables:
> Global data sheet: accessible to all the actions.
> Local data sheet: accessible to the associated action only.
Usage:
DataTable("Column Name", dtGlobalSheet) for the Global data sheet
DataTable("Column Name", dtLocalSheet) for the Local data sheet
If we change anything in the Data Table at run time, the data is changed only in the run-time data table. The run-time data table is accessible only through the test result. The run-time data table can also be exported using DataTable.Export or DataTable.ExportSheet.

Ques: 27 What testing tools should we use?
Ans: We should use both static and dynamic testing tools. To maximize software reliability, we should use both static and dynamic techniques, supported by appropriate static and dynamic testing tools.
1: Static and dynamic testing are complementary; they find different classes of bugs. Some bugs are detectable only by static testing, some only by dynamic.
2: Dynamic testing does detect some errors that static testing misses. To eliminate as many errors as possible, both static and dynamic testing should be used.
3: All this static testing (i.e. testing for syntax errors, testing for code that is hard to maintain, testing for code that is hard to test, testing for code that does not conform to coding standards, and testing for ANSI violations) takes place before compilation.
4: Static testing takes roughly as long as compilation and checks every statement we have written.

Ques: 28 What's the difference between QA and testing?
Ans: The quality assurance process is a process for providing adequate assurance that the software products and processes in the product life cycle conform to their specified requirements and adhere to their established plans. The purpose of software quality assurance is to provide management with appropriate visibility into the process being used by the software project and into the products being built. Testing, by contrast, is the hands-on activity of executing the software to find defects.

Ques: 29 How can I make some rows colored in the data table?
Ans: You can't do it normally, but you can use the Excel COM APIs to do the same. The code below explains some aspects of the Excel COM APIs:
Set xlApp = CreateObject("Excel.Application")
Set xlWorkBook = xlApp.Workbooks.Add
Set xlWorkSheet = xlWorkBook.Worksheets.Add
xlWorkSheet.Range("A1:B10").Interior.ColorIndex = 34    'Change the color of the cells
xlWorkSheet.Range("A1:A10").Value = "text"              'Will set the values of all 10 rows to "text"
xlWorkSheet.Cells(1,1).Value = "Text"                   'Will set the value of the first row and first column
rowsCount = xlWorkSheet.Evaluate("COUNTA(A:A)")         'Will count the # of rows with a non-blank value in column A
colsCount = xlWorkSheet.Evaluate("COUNTA(1:1)")         'Will count the # of non-blank columns in the 1st row
xlWorkBook.SaveAs "C:\Test.xls"
xlWorkBook.Close
Set xlWorkSheet = Nothing
Set xlWorkBook = Nothing
Set xlApp = Nothing

Ques: 30 When should I use Smart Identification?
Ans: Smart Identification is an algorithm used by QTP when it is not able to recognize an object. A very generic example, as per the QTP manual: take a photograph of an 8-year-old girl and boy, where QTP records the identification properties of the girl when she was 8. When both are 10 years old, QTP would not be able to recognize the girl by those properties. But there is something that is still the same: there is only one girl in the photograph. So it is a kind of PI (programmed intelligence), not AI.

Ques: 31 Define: Software Testing?
Ans: Software testing is a critical component of the software engineering process. It is an element of software quality assurance and can be described as a process of running a program in such a manner as to uncover any errors. This process, while seen by some as tedious, tiresome, and unnecessary, plays a vital role in software development. Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.

Ques: 32 Why should I use static testing techniques?
Ans: There are several reasons why one should use static testing techniques.
1: One should use static testing techniques because static testing is a bargain compared to dynamic testing.
2: Static testing is up to 100 times more effective. Even in selective testing, static testing may be up to 10 times more effective. The most pessimistic estimates suggest a factor of 4.
3: Since static testing is faster and achieves 100% coverage, the unit cost of detecting these bugs by static testing is many times lower than detecting bugs by dynamic testing.
4: About half of the bugs detectable by dynamic testing can be detected earlier by static testing.
5: If one uses neither static nor dynamic test tools, the static tools offer greater marginal benefits.

6: If an urgent deadline looms on the horizon, the use of dynamic testing tools can be omitted, but tool-supported static testing should never be omitted.

Ques: 33 Define: Descriptive Programming?
Ans: Descriptive programming is a technique by which operations can be performed on AUT objects that are not present in the object repository.

Ques: 34 What do we normally check for in database testing?
Ans: In DB testing we need to check for:
> The field size validation,
> Check constraints,
> Whether indexes are done or not (for performance-related issues),
> Stored procedures,
> Whether the field size defined in the application matches that in the DB.

Ques: 35 What is the definition of top-down design?
Ans: Top-down design progresses from simple design to detailed design. It solves problems by breaking them down into smaller, easier-to-solve subproblems, creates solutions to these smaller problems, and then tests them using test drivers. In other words, top-down design starts the design process with the main module or system, then progresses down to lower-level modules and subsystems. To put it differently, top-down design looks at the whole system and then explodes it into subsystems, or smaller parts. A systems engineer or systems analyst determines what the top-level objectives are and how they can be met. He then divides the system into subsystems, i.e. breaks the whole system into logical, manageable-size modules, and deals with them individually.

Ques: 36 What is a Recovery Scenario?
Ans: A recovery scenario gives us an option to take some action to recover from a fatal error in the test. The error could range from occasional to typical. An occasional error would be something like an "Out of paper" popup while printing; typical errors would be like "object is disabled" or "object not found". A test case can have more than one scenario associated with it, along with the priority or order in which the scenarios should be checked.

Ques: 37 What does a Recovery Scenario consist of?
Ans:
> Trigger: the cause for initiating the recovery scenario. It could be any popup window, any test error, a particular state of an object, or any application error.
> Action: defines what needs to be done if the scenario has been triggered. It can consist of a mouse/keyboard event, closing the application, calling a recovery function defined in a library file, or restarting Windows. We can have a series of all the specified actions.
> Post-recovery operation: basically defines what needs to be done after the recovery action has been taken. It could be to repeat the step, move to the next step, etc.
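At run time, QTP also exposes a Recovery object whose Enabled property can switch recovery handling off and on; a hedged sketch (the test objects are hypothetical) of suspending recovery around a step that deliberately raises popups:

Recovery.Enabled = False   ' suspend the associated recovery scenarios for this step
Browser("MyApp").Page("Home").WebButton("Print").Click
Recovery.Enabled = True    ' restore recovery handling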

Ques: 38 What is the future of software QA/testing?
Ans: In software QA/testing, employers increasingly want us to have a combination of technical, business, and personal skills. By technical skills they mean skills in IT, quantitative analysis, data modeling, and technical writing. By business skills they mean skills in strategy and business writing. By personal skills they mean personal communication, leadership, teamwork, and problem-solving skills. We, employees, on the other hand, want increasingly more autonomy, a better lifestyle, an increasingly more employee-oriented company culture, and a better geographic location. We continue to enjoy relatively good job security and, depending on the business cycle, many job opportunities. We realize our skills are important and have strong incentives to upgrade them, although we sometimes lack the information on how to do so. Educational institutions increasingly ensure that we are exposed to real-life situations and problems, but high turnover rates and a rapid pace of change in the IT industry often act as strong disincentives for employers to invest in our skills, especially non-company-specific skills. Employers continue to establish closer links with educational institutions, both through in-house education programs and human resources. The share of IT workers with IT degrees keeps increasing. Certification continues to help employers quickly identify those of us with the latest skills. During boom times, smaller and younger companies continue to be the most attractive to us, especially those that offer stock options and performance bonuses in order to retain and attract those of us who are the most skilled. High turnover rates continue to be the norm, especially during economic booms. Software QA/testing continues to be outsourced to offshore locations. Software QA/testing continues to be performed mostly by men, but the share of women keeps increasing.

Ques: 39 What is database testing?
Ans: Database testing basically includes the following:
> Data validity testing,
> Data integrity testing,
> Performance related to the database,
> Testing of procedures, triggers, and functions.
For doing data validity testing we should be good at SQL queries. For data integrity testing we should know about referential integrity and the different constraints. For performance-related things we should have an idea about the table structure and design. For testing procedures, triggers, and functions we should be able to understand the same.

Ques: 40 When to use a Recovery Scenario and when to use On Error Resume Next?
Ans: Recovery scenarios are used when we cannot predict at what step the error can occur, or when we know that the error won't occur in our QTP script but could occur in the world outside QTP; again, the example would be "out of paper", as this error is caused by the printer device driver. "On Error Resume Next" should be used when we know an error is expected and we don't want to raise it; we may want to take different actions depending upon the error that occurred. Use Err.Number and Err.Description to get more details about the error, as in the sketch below.
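A minimal sketch of the On Error Resume Next pattern described above; the object names are hypothetical:

On Error Resume Next
Browser("MyApp").Page("Home").WebEdit("txt_Name").Set "Test"
If Err.Number <> 0 Then
    Reporter.ReportEvent micWarning, "Handled error", Err.Number & ": " & Err.Description
    Err.Clear
End If
On Error GoTo 0   ' restore default error handling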

Ques: 49 What's the basic concept of QuickTest Professional (QTP)?
Ans: QTP is based on two concepts:
> Recording
> Playback

Ques: 53 What testing approaches can you tell me about?
Ans: Each of the following represents a different testing approach:
> Black box testing,
> White box testing,
> Unit testing,
> Incremental testing,
> Integration testing,
> Functional testing,
> System testing,
> End-to-end testing,
> Sanity testing,
> Regression testing,
> Acceptance testing,
> Load testing,
> Performance testing,
> Usability testing,
> Install/uninstall testing,
> Recovery testing,
> Security testing,
> Compatibility testing,
> Exploratory testing,
> Ad-hoc testing,
> User acceptance testing,
> Comparison testing,
> Alpha testing,
> Beta testing, and
> Mutation testing.

Ques: 183 What's the difference between black box and white box testing?
Ans: Black box and white box are test design methods. There are many differences:
> Black-box test design treats the system as a black box, so it doesn't explicitly use knowledge of the internal structure.
> Black-box test design is usually described as focusing on testing functional requirements.
> Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the box, and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box, and clear-box.
> While black-box and white-box are terms that are still in popular use, many people prefer the terms 'behavioral' and 'structural'. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this 'gray-box' or 'translucent-box' test design, but others wish we'd stop talking about boxes altogether.
> It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they're implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design method.
> Unit testing is usually associated with structural test design, but this is because testers usually don't have well-defined requirements at the unit level to validate.

Ques: 184 What kinds of testing should be considered?
Ans: Many kinds of testing are there:
> Black box testing: not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
> White box testing: based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions.
> Unit testing: the most 'micro' scale of testing, to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
> Incremental integration testing: continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
> Integration testing: testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
> Functional testing: black-box type testing geared to the functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it, which of course applies to any stage of testing.
> System testing: black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
> End-to-end testing: similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
> Sanity testing or smoke testing: typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
> Regression testing: re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
> Acceptance testing: final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
> Load testing: testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
> Stress testing: term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

> Performance testing: term often used interchangeably with 'stress' and 'load' testing. Ideally, 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or test plans.
> Usability testing: testing for 'user-friendliness'. Clearly this is subjective and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
> Install/uninstall testing: testing of full, partial, or upgrade install/uninstall processes.
> Recovery testing: testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
> Failover testing: typically used interchangeably with 'recovery testing'.
> Security testing: testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.
> Compatibility testing: testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
> Exploratory testing: often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
> Ad-hoc testing: similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
> Context-driven testing: testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
> User acceptance testing: determining if software is satisfactory to an end-user or customer.
> Comparison testing: comparing software weaknesses and strengths to competing products.
> Alpha testing: testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
> Beta testing: testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
> Mutation testing: a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.

Ques: 186 Why does software have bugs?
Ans: Many reasons are there:
> Miscommunication or no communication: as to the specifics of what an application should or shouldn't do (the application's requirements).
> Software complexity: the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity.
> Programming errors: programmers, like anyone else, can make mistakes.

> Changing requirements (whether documented or undocumented): the end user may not understand the effects of changes, or may understand and request them anyway; redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. The enthusiasm of the engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.
> Poorly documented code: it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').
> Software development tools: visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

Ques: 187 What are the tables in test plans and test cases?
Ans: A test plan is basically a document that contains the scope, approach, test design, and test strategies. It includes the following:
> Test case identifier,
> Scope,
> Features to be tested,
> Features not to be tested,
> Test strategy,
> Test approach,
> Test deliverables,
> Responsibilities,
> Staffing and training,
> Risks and contingencies,
> Approval.
A test case, by contrast, is a noted/documented set of steps or activities that are carried out or executed on the software in order to confirm its functionality/behavior for a certain set of inputs.

Ques: 188 What are the table contents in test plans and test cases?
Ans: A test plan is basically a document which is prepared with the details of the testing priority. A test plan generally includes:
> Objective of testing,
> Scope of testing,
> Reason for testing,
> Timeframe,
> Environment,
> Entrance and exit criteria,
> Risk factors involved,

> Deliverables.

Ques: 189 What automated testing tools are you familiar with?
Ans: Many testing tools are there:
> WinRunner,
> LoadRunner,
> QTP,
> Silk Performer,
> TestDirector,
> Rational Robot,
> QA Run.

Ques: 190 How did you use automated testing tools in your job?
Ans: In many ways:
> For regression testing,
> As a criterion to decide the condition of a particular build.
An example of a problem encountered with an automated testing tool: WinRunner failing to identify third-party controls such as Infragistics controls.

Ques: 191 How do you plan test automation?
Ans: The steps are:
> Prepare the automation test plan,
> Identify the scenario,
> Record the scenario,
> Enhance the scripts by inserting checkpoints and conditional loops,
> Incorporate an error handler,
> Debug the script,
> Fix the issue,
> Rerun the script and report the result.

Ques: 192 Can test automation improve test effectiveness?
Ans: Yes. Automating a test makes the test process:
> Fast,
> Reliable,
> Repeatable,
> Programmable,
> Reusable,
> Comprehensive.
What is data-driven automation? Testing the functionality with more test cases becomes laborious as the functionality grows. For multiple sets of data or test cases, we can execute the test once and figure out for which data it has failed and for which data the test has passed. This feature is available in WinRunner with the data-driven test, where the data can be taken from an Excel sheet or Notepad. (A comparable QTP-style sketch follows.)
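The same data-driven idea expressed in QTP's VBScript, as a hedged sketch; it iterates the Global data sheet, and the column and object names ("UserName", "txt_User", etc.) are hypothetical:

rowCount = DataTable.GetSheet(dtGlobalSheet).GetRowCount
For i = 1 To rowCount
    DataTable.GetSheet(dtGlobalSheet).SetCurrentRow i
    Browser("MyApp").Page("Login").WebEdit("txt_User").Set DataTable("UserName", dtGlobalSheet)
    ' ... submit the form and verify here, reporting pass/fail per data row
Next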

Ques: 193 What are the main attributes of test automation?
Ans: There are many software test automation attributes:
> Maintainability: the effort needed to update the test automation suites for each new release.
> Reliability: the accuracy and repeatability of the test automation.
> Flexibility: the ease of working with all the different kinds of automation testware.
> Efficiency: the total cost related to the effort needed for the automation.
> Portability: the ability of the automated test to run on different environments.
> Robustness: the effectiveness of automation on an unstable or rapidly changing system.
> Usability: the extent to which automation can be used by different types of users.

Ques: 194 Does automation replace manual testing?
Ans: There is some functionality which cannot be tested with an automated tool, so we may have to test it manually; therefore manual testing can never be replaced. When we talk about a real environment, we do negative testing manually.

Ques: 195 How will you choose a tool for test automation?
Ans: Choosing a tool depends on many things:
> The application to be tested,
> The test environment,
> The scope and limitations of the tool,
> The features of the tool,
> The cost of the tool,
> Whether the tool is compatible with your application, which means the tool should be able to interact with your application,
> Ease of use.

Ques: 196 How will you evaluate the tool for test automation?
Ans: We need to concentrate on the features of the tool and how they could be beneficial for our project. The additional new features and the enhancements of existing features will also help.

Ques: 197 What are the main benefits of test automation?
Ans: The main benefits are that automated tests are:
> FAST,
> RELIABLE,
> COMPREHENSIVE,
> REUSABLE.

Ques: 198 What could go wrong with test automation?
Ans: Test automation can basically go wrong in the following ways:
> The wrong choice of automation tool for certain technologies,
> The wrong set of tests automated.

Ques: 199 How will you describe testing activities?
Ans: Testing activities start from the elaboration phase. The various testing activities are:
> Preparing the test plan,
> Preparing test cases,

> Executing the test cases,
> Logging the bugs,
> Validating the bugs and taking appropriate action for them,
> Automating the test cases.

Ques: 200 What testing activities may you want to automate?
Ans: Automate all the high-priority test cases which need to be executed as part of regression testing for each build cycle.

Ques: 101 What is Monkey Testing?
Ans: Monkey testing is basically testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.

Ques: 102 What is Negative Testing?
Ans: Negative testing is basically testing aimed at showing that software does not work. Also known as "test to fail".

Ques: 103 What is Path Testing?
Ans: Path testing is basically testing in which all paths in the program source code are tested at least once.

Ques: 104 What is Performance Testing?
Ans: Performance testing is basically testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. In performance testing, some points are very important for the testers:
> Response time,
> Bandwidth,
> Throughput,
> Scalability,
> Stability.
Performance testing has three types:
> Load testing,
> Stress testing,
> Volume testing.
(A rough single-user sketch follows.)
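Full performance testing needs a load tool such as LoadRunner, but a rough single-user response-time probe can be sketched with VBScript's Timer function; the object names and the 5-second threshold are illustrative only:

t0 = Timer
Browser("MyApp").Page("Home").Link("Reports").Click
Browser("MyApp").Page("Reports").Sync     ' wait for the page to finish loading
elapsed = Timer - t0
If elapsed > 5 Then
    Reporter.ReportEvent micWarning, "Response time", "Page took " & elapsed & " seconds"
End If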

Ques: 105 What is Positive Testing?
Ans: Positive testing is basically testing used to show that software works, i.e. whether the software is working well or not.

Ques: 106 What is Quality Assurance?
Ans: Quality assurance is basically all those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

Ques: 107 What is a Quality Audit?
Ans: A quality audit is basically a systematic and independent examination to determine whether quality activities and related results comply with planned arrangements, and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Ques: 108 What is a Quality Circle?
Ans: A quality circle is basically a group of individuals with related interests who meet at regular intervals to consider problems or other matters related to the quality of the outputs of a process, and to the correction of problems or the improvement of quality.

Ques: 109 What is Quality Control?
Ans: Quality control is basically the operational techniques and activities used to fulfill and verify requirements of quality.

Ques: 110 What is Quality Management?
Ans: Quality management is basically that aspect of the overall management function that determines and implements the quality policy.

Ques: 111 What is a Quality Policy?
Ans: A quality policy is basically the overall intentions and direction of an organization as regards quality, as formally expressed by top management.

Ques: 112 What is a Quality System?
Ans: A quality system is basically the organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

Ques: 113 What is a Race Condition?
Ans: A race condition is basically a cause of concurrency problems: multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.

Ques: 114 What is Ramp Testing?
Ans: Ramp testing is basically continuously raising an input signal until the system breaks down.

Ques: 115 What is Recovery Testing?
Ans: Recovery testing basically confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.

Ques: 116 What is Regression Testing?
Ans: Regression testing is basically retesting a previously tested program following modification, to ensure that faults have not been introduced or uncovered as a result of the changes made.

Ques: 117 What is a Release Candidate?
Ans: A release candidate is basically a pre-release version which contains the desired functionality of the final version, but which needs to be tested for bugs, which ideally should be removed before the final version is released.

Ques: 118 What is Sanity Testing?
Ans: Sanity testing is basically a brief test of the major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.

Ques: 119 What is Scalability Testing?
Ans: Scalability testing is basically performance testing focused on ensuring the application under test gracefully handles increases in workload.

Ques: 120 What is Security Testing?
Ans: Security testing is testing which confirms that the program can restrict access to authorized personnel, and that the authorized personnel can access the functions available to their security level.

Ques: 89 What is Functional Testing?
Ans: Functional testing is basically testing of the features and operational behavior of a product to ensure they correspond to its specifications: testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions. Also known as black box testing.

Ques: 90 What is Glass Box Testing?
Ans: Glass box testing is the same as white box testing. Such testing is a critical activity for database application programs, as faults, if undetected, could lead to unrecoverable data loss. Database application programs typically contain statements written in an imperative programming language with embedded data manipulation commands, such as SQL. However, relatively little study has been made of the testing of database application programs. In particular, few testing techniques explicitly consider the inclusion of database instances in the selection of test cases and the generation of test data input. One line of research studies the generation of database instances that respect the semantics of the SQL statements embedded in a database application program, with a supporting tool which generates a set of constraints. These constraints collectively represent a property against which the program is tested; database instances for program testing can be derived by solving the set of constraints using existing constraint solvers.

Ques: 91 What is Gorilla Testing?
Ans: Gorilla testing is basically a type of testing which exercises one particular module or functionality heavily.

Ques: 92 What is Gray Box Testing?
Ans: A combination of black box and white box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings. Test case generation is the most important part of the testing effort, and the automation of specification-based test case generation needs formal or semi-formal specifications. As a semi-formal modelling language, UML is widely used to describe analysis and design specifications by both academia and industry, so UML models become natural sources for test generation. Test cases are usually generated from the requirements or the code, while the design is seldom considered; one proposed approach generates test cases directly from UML activity diagrams using a gray-box method, where the design is reused to avoid the cost of test model creation.

In this approach, test scenarios are directly derived from the activity diagram modelling an operation. Then all the information for test case generation, i.e. the input/output sequences and parameters, the constraint conditions, and the expected object method sequence, is extracted from each test scenario. Finally, the possible values of all the input/output parameters can be generated by applying the category-partition method, and a test suite can be systematically generated to find inconsistencies between the implementation and the design. A prototype tool named UMLTGF has been developed to support the above process.

Ques: 97 What is Installation Testing?
Ans: Installation testing basically confirms that the application under test installs, upgrades, and uninstalls correctly on the supported configurations. One method for installing and/or testing software for a build-to-order computer system includes reading a plurality of component descriptors from a computer-readable file, where at least one component descriptor describes a respective component of the computer system. A plurality of steps are retrieved from a database, at least one step being associated with a respective component descriptor; a step also includes a respective sequence number. The steps are sequenced in a predetermined order according to the sequence numbers to provide a step sequence, which includes commands for installing and/or testing software upon the computer system.

Ques: 98 What is Localization Testing?
Ans: Localization testing is basically the term which refers to testing software specifically adapted for a specific locality.

Ques: 99 What is Loop Testing?
Ans: Loop testing is basically a white box testing technique that exercises program loops.

Ques: 399 What are the different modes of recording?
Ans: There are two types of recording in WinRunner:
> Context Sensitive recording records the operations we perform on our application by identifying Graphical User Interface (GUI) objects.
> Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

Ques: 400 What is the purpose of loading WinRunner add-ins?
Ans: Add-ins are used in WinRunner to load functions specific to the particular add-in into memory. While creating a script, only the functions in the selected add-in will be listed in the function generator, and while executing the script, only the functions in the loaded add-in will be executed; otherwise WinRunner will give an error message saying it does not recognize the function.

Ques: 401 What are the reasons that WinRunner fails to identify an object on the GUI?
Ans: WinRunner fails to identify an object in a GUI due to various reasons:
> The object is not a standard Windows object.
> If the browser used is not compatible with the WinRunner version, the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.

Ques: 402 What do you mean by the logical name of the object?
Ans: An object's logical name is determined by its class. In most cases, the logical name is the label that appears on an object.

Ques: 282 What is LoadRunner?
Ans: LoadRunner works by creating virtual users who take the place of real users operating client software, such as sending requests using the HTTP protocol to IIS or Apache web servers. Requests from many virtual user clients are generated by load generators in order to create a load on various servers under test. These load generator agents are started and stopped by Mercury's Controller program. The Controller controls load test runs based on scenarios invoking compiled scripts and their associated run-time settings. Scripts are crafted using Mercury's Virtual User script Generator, named "VuGen"; it generates C-language script code to be executed by virtual users by capturing network traffic between Internet application clients and servers. With Java clients, VuGen captures calls by hooking within the client JVM. During runs, the status of each machine is monitored by the Controller. At the end of each run, the Controller combines its monitoring logs with logs obtained from the load generators, and makes them available to the Analysis program, which can then create run result reports and graphs for Microsoft Word, Crystal Reports, or an HTML web browser. Each HTML report page generated by Analysis includes a link to results in a text file which Microsoft Excel can open to perform additional analysis. Errors during each run are stored in a database file which can be read by Microsoft Access.

Ques: 287 What are the components of LoadRunner?
Ans: The components of LoadRunner are:
> The Virtual User Generator,
> The Controller,
> The Agent process,
> LoadRunner Analysis and Monitoring,
> LoadRunner Books Online.

Ques: 240 Give an example of high priority and low severity, and low priority and high severity?
Ans: Severity level: the degree of impact the issue or problem has on the project. Severity 1 usually means the highest level, requiring immediate attention; severity 5 usually represents a documentation defect of minimal impact. Example severity scales:
> Severity levels:
* Critical: the software will not run.
* High: unexpected fatal errors (includes crashes and data corruption).
* Medium: a feature is malfunctioning.
* Low: a cosmetic issue.
> Severity levels:
* Bug causes system crash or data loss.
* Bug causes major functionality or other severe problems; product crashes in obscure cases.

* Bug causes minor functionality problems; may affect "fit and finish".
* Bug contains typos, unclear wording, or error messages in low-visibility fields.
> Severity levels:
* High: a major issue where a large piece of functionality or a major system component is completely broken. There is no workaround and testing cannot continue.
* Medium: a major issue where a large piece of functionality or a major system component is not working properly. There is a workaround, however, and testing can continue.
* Low: a minor issue that imposes some loss of functionality, but for which there is an acceptable and easily reproducible workaround. Testing can proceed without interruption.
> Severity and priority:
* Priority is relative: the priority might change over time. Perhaps a bug initially deemed P1 becomes rated P2 or even P3 as the schedule draws closer to the release and as the test team finds even more heinous errors. Priority is a subjective evaluation of how important an issue is, given other tasks in the queue and the current schedule. It is relative. It shifts over time. And it is a business decision.
* Severity is absolute: it is an assessment of the impact of the bug without regard to other work in the queue or the current schedule. The only reason severity should change is if we have new information that causes us to re-evaluate our assessment. If it was a high-severity issue when I entered it, it is still a high-severity issue when it is deferred to the next release. The severity hasn't changed just because we've run out of time; the priority changed.
> Severity levels can be defined as follows:
* S1 - Urgent/Showstopper: like a system crash or an error message forcing the window to close. The tester's ability to operate the system is either totally (system down) or almost totally affected. A major area of the user's system is affected by the incident, and it is significant to business processes.
* S2 - Medium/Workaround: exists, as when a problem is required in the specs but the tester can go on with testing. The incident affects an area of functionality, but there is a workaround which negates the impact to the business process. This is a problem that:
a) affects a more isolated piece of functionality;
b) occurs only at certain boundary conditions;
c) has a workaround (where "don't do that" might be an acceptable answer to the user);
d) occurs only at one or two customers, or is intermittent.
* S3 - Low: this is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way. They are incidents that are cosmetic in nature and of no or very low impact to business processes.
