
Learning the basics of the QTP automation tool and preparing for QTP interview questions

Posted in: Automation Testing, QTP, Questions & Answers, Testing Interview Questions

This post continues the QTP interview questions series. The following questions will help you prepare for interviews as well as learn the QTP basics.

QuickTest Professional: Interview Questions and Answers

1. What are the features and benefits of QuickTest Pro (QTP)?
1. Keyword-driven testing
2. Suitable for both client-server and web-based applications
3. VBScript as the scripting language
4. Better error-handling mechanism
5. Excellent data-driven testing features

2. How do you handle exceptions using the Recovery Scenario Manager in QTP?
You can instruct QTP to recover from unexpected events or errors that occur in your testing environment during a test run. The Recovery Scenario Manager provides a wizard that guides you through defining a recovery scenario. A recovery scenario has three steps:
1. Triggered events
2. Recovery steps
3. Post-recovery test run

3. What is the use of a Text output value in QTP?
Output values enable you to view the values that the application takes during run time. When parameterized, the values change for each iteration. Thus, by creating output values, we can capture the values that the application takes for each run and output them to the data table.

4. How do you use the Object Spy in QTP 8.0?
There are two ways to spy on objects in QTP:
1) Through the File toolbar: in the File toolbar, click the last toolbar button (an icon showing a person with a hat).
2) Through the Object Repository dialog: in the Object Repository dialog, click the Object Spy button. In the Object Spy dialog, click the button showing a hand symbol. The pointer then changes into a hand symbol, and we have to point at the object to spy the state of

the object. If the object is not visible or the window is minimized, hold down the Ctrl key, activate the required window, and then release the Ctrl key.

5. What are the file extensions of the code file and the object repository files in QTP?
File extension of the per-test object repository: filename.mtr
Shared object repository: filename.tsr
Code file extension: script.mts

6. Explain the concept of the object repository and how QTP recognizes objects.
Object Repository: displays a tree of all objects in the current component, in the current action, or in the entire test (depending on the object repository mode you selected). We can view or modify the test object description of any test object in the repository, or add new objects to the repository. QuickTest learns the default property values and determines which test object class the object fits into. If that is not enough, it adds assistive properties, one by one, to the description until it has compiled a unique description. If no assistive properties are available, it adds a special ordinal identifier, such as the object's location on the page or in the source code.

7. What properties would you use for identifying a browser and a page when using descriptive programming?
Apart from title, name is another property we can use. We can also use the property micClass, e.g.:
Browser(micClass:=browser).Page(micClass:=page)

8. What are the different scripting languages you could use when working with QTP?
You can write scripts using the following languages: Visual Basic (VB), XML, JavaScript, Java, HTML.

9. Name some commonly used Excel VBA functions.
Common functions include: coloring a cell, auto-fitting a cell, setting navigation from a link in one cell to another, and saving.

10. Explain the keyword CreateObject with an example.
CreateObject creates and returns a reference to an Automation object.
Syntax: CreateObject(servername.typename [, location])
Arguments:
servername: Required. The name of the application providing the object.
typename: Required. The type or class of the object to create.
location: Optional. The name of the network server where the object is to be created.
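As a minimal sketch of CreateObject in VBScript (this assumes Microsoft Excel is installed on the machine, since the ProgID "Excel.Application" resolves to Excel's COM server):

```vbscript
' Create and use an Excel Automation object via CreateObject
Dim objExcel
Set objExcel = CreateObject("Excel.Application")

objExcel.Visible = True              ' show the Excel window
objExcel.Workbooks.Add               ' create a new workbook
objExcel.Cells(1, 1).Value = "QTP"   ' write to cell A1

objExcel.ActiveWorkbook.Close False  ' close without saving
objExcel.Quit                        ' shut down Excel
Set objExcel = Nothing               ' release the COM reference
```

The same pattern works for any registered Automation server, e.g. CreateObject("Scripting.FileSystemObject") or CreateObject("ADODB.Connection").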

11. Explain in brief the QTP Automation Object Model.
Essentially all configuration and run functionality provided via the QuickTest interface is in some way represented in the QuickTest automation object model via objects, methods, and properties. Although a one-to-one comparison cannot always be made, most dialog boxes in QuickTest have a corresponding automation object, most options in dialog boxes can be set and/or retrieved using the corresponding object property, and most menu commands and other operations have corresponding automation methods. You can use the objects, methods, and properties exposed by the QuickTest automation object model, along with standard programming elements such as loops and conditional statements, to design your program.

12. How do you handle dynamic objects in QTP?
QTP has a unique feature called Smart Object Identification/recognition. QTP generally identifies an object by matching its test object and run-time object properties. QTP may fail to recognize dynamic objects whose properties change during run time. Hence it has an option to enable Smart Identification, wherein it can identify objects even if their properties change during run time. Note: if QuickTest is unable to find any object that matches the recorded object description, or if it finds more than one object that fits the description, then QuickTest ignores the recorded description and uses the Smart Identification mechanism to try to identify the object. While the Smart Identification mechanism is more complex, it is more flexible; thus, if configured logically, a Smart Identification definition can help QuickTest identify an object, if it is present, even when the recorded description fails. The Smart Identification mechanism uses two types of properties:
Base filter properties: the most fundamental properties of a particular test object class; those whose values cannot be changed without changing the essence of the original object.
For example, if a Web link's tag were changed from <A> to any other value, you could no longer call it the same object.
Optional filter properties: other properties that can help identify objects of a particular class, as they are unlikely to change on a regular basis, but which can be ignored if they are no longer applicable.

13. What is a Run-Time Data Table? Where can I find and view this table?
In QTP there is a data table that is used at run time.
- In QTP, select View -> Data Table.
- This is basically an Excel file stored in the folder of the test created; its name is Default.xls by default.

14. How do Parameterization and Data-Driving relate to each other in QTP?

To data-drive a test, we have to parameterize; i.e., we make a constant value into a parameter so that in each iteration (cycle) it takes a value supplied from the run-time data table. Only through parameterization can we drive a transaction (action) with different sets of data. Running the script with the same set of data several times is not recommended, and it is also of no use.

15. What is the difference between Call to Action and Copy Action?
Call to Action: changes made in the called action are reflected in the original action (from which the script is called). Whereas in Copy Action, changes made in the script do not affect the original script (action).

16. Explain the concept of how QTP identifies objects.
During recording, QTP looks at the object and stores it as a test object. For each test object, QTP learns a set of default properties called mandatory properties, and checks whether these properties are enough to uniquely identify the object. During the test run, QTP searches for the run-time object that matches the test object it learned while recording.

17. Differentiate the two object repository types of QTP.
The object repository is used to store all the objects in the application being tested. Types of object repository: per-action and shared. With a shared repository there is only one centralized repository for all the tests, whereas with per-action repositories a separate repository is created for each action.

18. What are the differences between, and the best practical applications of, the object repository types?
Per action: for each action, one object repository is created.
Shared: one object repository is used by the entire application.

19.
Explain the difference between a Shared Repository and a Per-Action Repository.
Shared repository: the entire application uses one object repository, similar to the Global GUI Map file in WinRunner.
Per action: for each action, one object repository is created, like a GUI map file per test in WinRunner.

20. Have you ever written a compiled module? If yes, tell me about some of the functions that you wrote.
Sample answer (you can talk about modules you worked on; if your answer is yes, you should expect more questions and should be able to explain those modules in later

questions): I used functions for capturing dynamic data during run time; for example, functions for capturing the desktop, the browser, and pages.

21. Can you do more than just capture and playback?
Sample answer (say yes only if you have actually done this): I have dynamically captured objects during run time, with no recording, no playback, and no use of a repository at all. It was done through Windows scripting using the DOM (Document Object Model) of the windows.

22. How do you do the scripting? Are there any in-built functions in QTP? What is the difference between them? How do you handle script issues?
Yes, there is an in-built facility called the Step Generator (Insert -> Step -> Step Generator, or F7), which will generate the script steps as you enter the appropriate steps.

23. What is the difference between a checkpoint and an output value?
An output value is a value captured during the test run and entered into the run-time data at a specified location, e.g., a location in the Data Table (Global sheet / local sheet). A checkpoint, by contrast, compares a captured value against an expected value and reports pass or fail.

24. How many types of actions are there in QTP?
There are three kinds of actions:
Non-reusable action: an action that can be called only in the test with which it is stored, and can be called only once.
Reusable action: an action that can be called multiple times by the test with which it is stored (the local test) as well as by other tests.
External action: a reusable action stored with another test. External actions are read-only in the calling test, but you can choose to use a local, editable copy of the Data Table information for the external action.

25. I want to open a Notepad window without recording a test, and I do not want to use the SystemUtil Run command either. How do I do this?
You can still make Notepad open without using recording or the SystemUtil script, just by mentioning the path of Notepad (i.e., where notepad.exe is stored on the system) in the Windows Applications tab of the Record and Run Settings window.
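The output-value and parameterization answers above both revolve around the run-time Data Table; a minimal sketch of reading and writing it from a test step follows (the window, edit-field, and column names are hypothetical):

```vbscript
' Read a parameterized value from the Global sheet of the run-time Data Table
agentName = DataTable("AgentName", dtGlobalSheet)
Dialog("Login").WinEdit("Agent Name:").Set agentName

' Capture a run-time value and write it back to the Data Table,
' which is essentially what an output value does behind the scenes
orderNo = Window("Flight").WinEdit("Order No:").GetROProperty("text")
DataTable("OrderNumber", dtGlobalSheet) = orderNo

' Advance the Global sheet to the next row for the next iteration
DataTable.GetSheet(dtGlobalSheet).SetNextRow
```

The captured values end up in the run-time results and in Default.xls, which is what question 13 above refers to.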

Database connections

Database connection is a facility in computer science that allows client software to communicate with database server software, whether on the same machine or not.

SQL Server connection strings

SQL ODBC connection strings
Standard security:
"Driver={SQL Server};Server=Your_Server_Name;Database=Your_Database_Name;Uid=Your_Username;Pwd=Your_Password;"
Trusted connection:
"Driver={SQL Server};Server=Your_Server_Name;Database=Your_Database_Name;Trusted_Connection=yes;"

SQL OLE DB connection strings
Standard security:
"Provider=SQLOLEDB;Data Source=Your_Server_Name;Initial Catalog=Your_Database_Name;User ID=Your_Username;Password=Your_Password;"
Trusted connection:
"Provider=SQLOLEDB;Data Source=Your_Server_Name;Initial Catalog=Your_Database_Name;Integrated Security=SSPI;"

SQL OleDbConnection .NET strings
Standard security:
"Provider=SQLOLEDB;Data Source=Your_Server_Name;Initial Catalog=Your_Database_Name;User ID=Your_Username;Password=Your_Password;"
Trusted connection:
"Provider=SQLOLEDB;Data Source=Your_Server_Name;Initial Catalog=Your_Database_Name;Integrated Security=SSPI;"

SQL SqlConnection .NET strings
Standard security:

1. "Data Source=Your_Server_Name;Initial Catalog=Your_Database_Name;User ID=Your_Username;Password=Your_Password;"
2. "Server=Your_Server_Name;Database=Your_Database_Name;User ID=Your_Username;Password=Your_Password;Trusted_Connection=False"
Trusted connection:
1. "Data Source=Your_Server_Name;Initial Catalog=Your_Database_Name;Integrated Security=SSPI;"
2. "Server=Your_Server_Name;Database=Your_Database_Name;Trusted_Connection=True;"

MS Access connection strings

MS Access ODBC connection strings
Standard security:
"Driver={Microsoft Access Driver (*.mdb)};DBQ=C:\App1\Your_Database_Name.mdb;Uid=Your_Username;Pwd=Your_Password;"
Workgroup:
"Driver={Microsoft Access Driver (*.mdb)};Dbq=C:\App1\Your_Database_Name.mdb;SystemDB=C:\App1\Your_Database_Name.mdw;"
Exclusive:
"Driver={Microsoft Access Driver (*.mdb)};DBQ=C:\App1\Your_Database_Name.mdb;Exclusive=1;Uid=Your_Username;Pwd=Your_Password;"

MS Access OLE DB & OleDbConnection (.NET framework) connection strings
Open connection to Access database:
"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\App1\Your_Database_Name.mdb;User Id=admin;Password="
Open connection to Access database using Workgroup (System database):
"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\App1\Your_Database_Name.mdb;Jet OLEDB:System Database=c:\App1\Your_System_Database_Name.mdw"
Open connection to password-protected Access database:
"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\App1\Your_Database_Name.mdb;Jet OLEDB:Database Password=Your_Password"
Open connection to Access database located on a network share:
"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\\Server_Name\Share_Name\Share_Path\Your_Database_Name.mdb"
Open connection to Access database located on a remote server:
"Provider=MS Remote;Remote Server=http://Your-Remote-Server-IP;Remote Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\App1\Your_Database_Name.mdb"

MySQL connection strings

MySQL ODBC connection strings
Open connection to local MySQL database using the MySQL ODBC 3.51 driver:
"Provider=MSDASQL;DRIVER={MySQL ODBC 3.51 Driver};SERVER=localhost;DATABASE=Your_MySQL_Database;UID=Your_Username;PASSWORD=Your_Password;OPTION=3"

MySQL OLE DB & OleDbConnection (.NET framework) connection strings
Open connection to MySQL database:
"Provider=MySQLProv;Data Source=Your_MySQL_Database;User Id=Your_Username;Password=Your_Password;"

Oracle connection strings

Oracle ODBC connection strings
Open connection to Oracle database using ODBC:
"Driver={Microsoft ODBC for Oracle};Server=Your_Oracle_Server.world;Uid=Your_Username;Pwd=Your_Password;"

Oracle OLE DB & OleDbConnection (.NET framework) connection strings
Open connection to Oracle database with standard security:
1. "Provider=MSDAORA;Data Source=Your_Oracle_Database;User Id=Your_Username;Password=Your_Password;"
2. "Provider=OraOLEDB.Oracle;Data Source=Your_Oracle_Database;User Id=Your_Username;Password=Your_Password;"
Open trusted connection to Oracle database:
"Provider=OraOLEDB.Oracle;Data Source=Your_Oracle_Database;OSAuthent=1;"
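In QTP/VBScript these connection strings are typically consumed through ADO; a minimal sketch using the SQL Server ODBC string from above (the server, database, credentials, and the Orders table are all placeholders):

```vbscript
' Open an ADO connection with one of the connection strings above
Dim conn, rs
Set conn = CreateObject("ADODB.Connection")
conn.Open "Driver={SQL Server};Server=Your_Server_Name;" & _
          "Database=Your_Database_Name;Uid=Your_Username;Pwd=Your_Password;"

' Run a query and read the first column of the first row
Set rs = conn.Execute("SELECT COUNT(*) FROM Orders")  ' placeholder table
MsgBox "Row count: " & rs.Fields(0).Value

rs.Close
conn.Close
Set rs = Nothing
Set conn = Nothing
```

Swapping in any of the other strings (Access, MySQL, Oracle) changes only the argument to conn.Open; the rest of the ADO pattern stays the same.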

Dynamic Handling of Object Repositories



Loading repositories during a run, finding the position of a repository, and removing repositories is called dynamic handling of object repositories. Using this feature we can improve QTP performance. For this, QTP provides an object called RepositoriesCollection.

Syntax for loading a repository:
RepositoriesCollection.Add "Path of the repository file"

Syntax for finding the position of a repository:
Variable = RepositoriesCollection.Find("Path of the repository")

Syntax for removing a repository:
RepositoriesCollection.Remove(position)

Syntax for removing all repositories:
RepositoriesCollection.RemoveAll

Example:
RepPath = "C:\Documents and Settings\Administrator\My Documents\Login.tsr"
RepositoriesCollection.Add (RepPath)
SystemUtil.Run "C:\Program Files\HP\QuickTest Professional\samples\flight\app\flight4a.exe"
Dialog("Login").Activate
Dialog("Login").WinEdit("Agent Name:").Set "sudhakar"
Dialog("Login").WinEdit("Password:").Set "mercury"
Dialog("Login").WinButton("OK").Click
pos = RepositoriesCollection.Find(RepPath)
RepositoriesCollection.Remove(pos)

RepositoriesCollection.RemoveAll


1. What makes a good QA or test manager?

A good QA, test, or combined QA/test manager should:
1. be familiar with the software development process
2. be able to maintain the enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems)
3. be able to promote teamwork to increase productivity
4. be able to promote cooperation between software, test, and QA engineers
5. have the diplomatic skills needed to promote improvements in QA processes
6. have the ability to withstand pressure and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to
7. have people-judgement skills for hiring and keeping skilled personnel
8. be able to communicate with technical and non-technical people, engineers, managers, and customers
9. be able to run meetings and keep them focused

2. What if there is not enough time for thorough testing?

The test manager should be able to choose a test strategy based on the scope of the release, to minimize the stress on the team.

Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. This requires judgement skills, common sense, and experience. Considerations can include:
- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex, and thus most subject to errors?

- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?

Test scenarios with high severity need to be tested first.

3. What if an organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than:
- Hire good people.
- Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer.
- Everyone in the organization should be clear on what 'quality' means to the customer.

When time is a crunch, the project team might opt for Agile testing. In Agile testing, testers are part of the requirement/development discussions and have a good amount of knowledge about the system under test. Major characteristics of Agile testing are small, frequent releases and a good amount of regression testing on already deployed parts. Daily scrum meetings are a key factor, and development and testing go hand in hand with very little documentation. If the team encounters a defect, the development and business teams are immediately informed and everyone works on it together.

4. What if the project is not big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed. The tester might then do ad-hoc testing, or write up a limited test plan based on the risk analysis.

Understanding the customer's requirements is the key here.

5. What should be done after a bug is found?

The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing, to check that the fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:
- Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary
- Bug identifier (number, ID, etc.)
- Current bug status (e.g., 'Released for Retest', 'New', etc.)
- The application name or identifier and version
- The function, module, feature, object, screen, etc. where the bug occurred
- Environment specifics: system, platform, relevant hardware specifics
- Test case name/number/identifier
- One-line bug description
- Full bug description
- Description of steps needed to reproduce the bug, if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
- Names and/or descriptions of files/data/messages/etc. used in the test
- File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
- Severity estimate (a 5-level range such as 1-5 or 'critical' to 'low' is common)
- Was the bug reproducible?
- Tester name
- Test date
- Bug reporting date
- Name of the developer/group/organization the problem is assigned to
- Description of the problem's cause
- Description of the fix

- Code section/file/module/class/method that was fixed
- Date of fix
- Application version that contains the fix
- Tester responsible for retest
- Retest date
- Retest results
- Regression testing requirements
- Tester responsible for regression tests
- Regression testing results
A reporting or tracking process should enable notification of the appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.

1. The first step after a bug is found is to reproduce it and note down the steps for reproduction very clearly.
2. Then the bug's description and its steps for reproduction should be posted in the tool used by the team.
3. If possible, a peer review of the bug should be done.
4. The posted bug should be communicated to the team; if the tool has a built-in feature for communicating bugs, that is also fine.
5. After the bug is fixed, it should be taken up for testing, along with any other functionality related to the posted bug, before closing the bug.

Posted by: radha kerur

1) test case id
2) date of test case run
3) test data
4) steps
5) priority
6) severity
7) defect triage

Posted by: niranjan

After a bug is found, it needs to be reported to the development team to get it fixed. Give the developers something to work with so that they can reproduce the problem:
1) Give a brief description of the problem. Explain what is wrong.
2) List the steps that are needed to reproduce the bug.
3) Supply relevant information, such as: a) version b) product c) data used
4) Supply the documentation. If the process is a report, include a copy of the report with the problem area highlighted.
5) Summarize what you think the problem is.

Posted by: Divya Gupta

After finding a bug, report it to the test leader with bug details such as bug id, name, location, severity, and priority.

Posted by: ashwini a.c.

After finding a bug, first we check whether any other test engineer has already reported the same bug, because the development team may reject a bug as a duplicate if someone has already reported it. After confirming that our bug is not a duplicate, we prepare the bug report:
1. Bug id
2. Module name
3. Requirement no.
4. Status of bug
5. Severity
6. Priority
7. Test environment
8. Brief description
9. Detailed description

10. Actual result
11. Expected result
12. In some cases, a screen shot
13. Upload the bug report to the development team

Posted by: pavitha

Once a bug is found, the following steps should take place:
1. The tester reports the bug to the test lead.
2. The test lead verifies the bug; if the bug is not valid, it is rejected.
3. If the bug is valid, the test lead reports it to the development team.
4. The development lead verifies the bug; if it is not valid, the development lead assigns it back to the testing team.
5. If the bug is valid, the development lead assigns it to a developer.
6. Once the bug is fixed by the developer, he passes it to the development lead for verification again.
7. The development lead then passes it back to the testing team for retesting.

6. What makes a good software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.

A good QA engineer should have:
1. Problem-solving skills
2. A good memory
3. Communication skills
4. No attitude
5. Good knowledge of processes
6. The ability to understand user problems clearly
7. The capability to suggest solutions by explaining the advantages of implementation

7. What is a test plan?
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
1. Title
2. Identification of software, including version/release numbers
3. Revision history of the document, including authors, dates, approvals
4. Table of contents
5. Purpose of the document, intended audience
6. Objective of the testing effort
7. Software product overview
8. Relevant related document list, such as requirements, design documents, other test plans, etc.
9. Relevant standards or legal requirements
10. Traceability requirements
11. Relevant naming conventions and identifier conventions
12. Overall software project organization and personnel/contact info/responsibilities
13. Test organization and personnel/contact info/responsibilities

14. Assumptions and dependencies
15. Project risk analysis
16. Testing priorities and focus
17. Scope and limitations of testing
18. Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc., as applicable
19. Outline of data input equivalence classes, boundary value analysis, error classes
20. Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
21. Test environment validity analysis - differences between the test and production systems and their impact on test validity
22. Test environment setup and configuration issues
23. Software migration processes
24. Software CM processes
25. Test data setup requirements
26. Database setup requirements
27. Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
28. Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
29. Test automation - justification and overview
30. Test tools to be used, including versions, patches, etc.
31. Test script/test code maintenance processes and version control
32. Problem tracking and resolution - tools and processes
33. Project test metrics to be used
34. Reporting requirements and testing deliverables
35. Software entrance and exit criteria
36. Initial sanity testing period and criteria
37. Test suspension and restart criteria
38. Personnel allocation
39. Personnel pre-training needs
40. Test site/location
41. Outside test organizations to be utilized, and their purpose, responsibilities, deliverables, contact persons, and coordination issues
42. Relevant proprietary, classified, security, and licensing issues
43. Open issues

It is a strategic document that describes how to test an application in an effective, efficient, and optimized way. It is a project-level term, specific to a particular project.

Posted by: navari

It is nothing but identifying, isolating, and modifying defects, and producing a defect-free, user-friendly quality product.

Posted by: sekhar

A test plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements. A test plan is usually prepared by, or with significant input from, test engineers.

Posted by: kalpesh

The document that defines the overall testing approach is called the test plan.

Posted by: Subhashini

A test plan is a high-level document that indicates how a product will pass through the testing activities, which vary from product to product. Generally, reputed organisations follow the IEEE-standard test plan format. The test plan covers resource utilization, from manpower to hardware. It also mentions the test cases developed based on requirements, how to cater for changes (and the updating of test cases or scripts), how the test bench should be configured, what resources are required to establish the test bench, the deployment structure, the implementation architecture, etc.

Posted by: HNV Bhaskar

A test plan contains the details of the testing process. It is prepared during the project planning phase. It should contain details of the resources required, the testing approaches to be followed, the test methodologies, test cases, etc.

Posted by: rajisreelas

A test plan is basically a document that has information about the scope, people, testing approach, deliverables (tests or test cases), the environment needed, any additional risks, as well as testing tasks and the schedule.

Posted by: abha

A test plan is a strategy document describing how to perform testing on an application in an effective, efficient, and optimized way.

Posted by: ch l kishore

A test plan is a high-level design document; it should be prepared by the project manager or team lead.

Posted by: deepthi


A test plan is a strategic document that describes how to perform testing on an application in an effective, efficient, and optimized manner.

Posted by: Firoz

A test plan describes the way in which we will show our customers that the software works correctly.

Posted by: poonam bhosale

A test plan is a strategic document. It describes how to perform testing on an application in an efficient and optimized way. Optimization: the process of utilizing the available resources to their best level and getting the maximum possible output.

Posted by: Ajanta Lakshmi

A test plan is a document that describes the objective, scope, resources, and nature of the project.

Posted by: swaroop

A test specification is called a test plan. The developers are made aware of which test plans will be executed, and this information is made available to management and the developers. The idea is to make them more cautious when developing their code or making additional changes. Some companies have a higher-level document called a test strategy.

Posted by: vani

After completion of the test strategy document, the test lead concentrates on the test plan: 1. team formation, 2. identifying tactical risks, 3. the test plan itself, 4. test plan review. Team formation: test planning starts with team formation; here the test lead considers a) project size, b) project time, c) project resources (together, the work breakdown structure). Identifying tactical risks: after team formation, the test lead considers risks such as lack of knowledge of the current project, lack of resources, lack of time, lack of documentation, lack of seriousness on the development side, and lack of communication. Test plan: after completing the above two, the test lead prepares the test plan in IEEE 829 format, covering what to test, how to test, and when to test. Test plan review: after completion of the test plan, the test lead reviews the document along with the PM for completeness and correctness.

8.

What is verification? Validation?

Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verification is completed. The term 'IV&V' refers to Independent Verification and Validation.

Verification: is the product built right? Validation: is the right product built?

Posted by: Devanand

Verification is the process done before code is developed; verification techniques include reviews, inspections, and walkthroughs. Validation is the process done after code is developed, e.g. testing.

Posted by: satya

Verification: whether the system is built right or not. Validation: whether it is the right system or not.

Posted by: pavan

Verification: are we building the product right? Validation: are we building the right product?

Posted by: shankar

Verification is the process that determines whether the product is being developed in the right way. Validation is the process of verifying whether the developed product is the right one.

Posted by: pavan

Verification: are we building the product right? (requirement review, design review, code walkthrough). Validation: are we building the right product? (actual testing during the STLC).

9. How can World Wide Web sites be tested? Consider the following:
- What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time and database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
- Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
- What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
- Will down time for server and content maintenance/upgrades be allowed? How much?
- What kinds of security (firewalls, encryption, passwords, etc.) will be required, and what is it expected to do? How can it be tested?
- How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
- What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
- Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
- Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
- How will internal and external links be validated and updated? How often?
- Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world Internet 'traffic congestion' problems to be accounted for in testing?
- How extensive or customized are the server logging and reporting requirements? Are they considered an integral part of the system, and do they require testing?
- How are CGI programs, applets, JavaScripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?

Some sources of site security information include the Usenet newsgroup 'comp.security.announce' and links concerning web site security in the 'Other Resources' section. Some usability guidelines to consider (these are subjective and may or may not apply to a given situation; more information on usability testing issues can be found in articles about web site usability in the 'Other Resources' section):
- Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
- The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within the site.
- Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser type.
- All pages should have links external to the page; there should be no dead-end pages.
- The page owner, revision date, and a link to a contact person or organization should be included on each page.
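One of the points above asks how internal and external links will be validated. As a rough illustration (not part of the original checklist), a checker might first collect a page's links and separate internal from external ones before probing them. This sketch uses only the Python standard library; the sample page and host name are invented:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def classify_links(html, site_host):
    """Split a page's links into internal and external lists."""
    collector = LinkCollector()
    collector.feed(html)
    internal, external = [], []
    for link in collector.links:
        host = urlparse(link).netloc
        # A relative link, or one pointing at our own host, is internal.
        (internal if not host or host == site_host else external).append(link)
    return internal, external

page = '<a href="/home">Home</a> <a href="https://example.org/x">Partner</a>'
internal, external = classify_links(page, "mysite.example")
print(internal, external)  # ['/home'] ['https://example.org/x']
```

A real link validator would then request each URL and flag non-200 responses; the classification step above decides which links can be checked against the test system and which depend on third parties.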

Also check the load of the website in different browsers, and perform stress testing to see how the load on the website can be controlled.

10.

What if the application has functionality that wasn't in the requirements?

It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk.

That functionality needs to be discussed within the team (QA and development) to measure its impact on the application and to flag the risk. If there is not enough time left, it is better to remove the function and declare it an enhancement for a future release.

11.

How does a client/server environment affect testing?

Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing.

12.

How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with a certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a certain level
- Beta or alpha testing period ends

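The exit criteria listed above can be expressed as a simple checklist. The following sketch is illustrative only; the metric names and thresholds are invented, since real exit criteria are agreed per project in the test plan:

```python
def ready_to_stop(metrics):
    """Evaluate common test-exit criteria from a metrics snapshot.

    Thresholds here are illustrative, not standard values.
    """
    checks = [
        metrics["cases_passed_pct"] >= 95,        # pass-rate target met
        metrics["requirement_coverage_pct"] >= 100,
        metrics["open_critical_bugs"] == 0,       # no open showstoppers
        metrics["new_bugs_per_day"] <= 1,         # bug rate has tailed off
    ]
    return all(checks)

snapshot = {"cases_passed_pct": 97, "requirement_coverage_pct": 100,
            "open_critical_bugs": 0, "new_bugs_per_day": 0}
print(ready_to_stop(snapshot))  # True
```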

Posted by: sudhakar

Testing should be stopped when: 1. Deadlines arrive, e.g. release deadlines 2. The bug rate falls below a certain level 3. Test cases are completed with a certain percentage passed 4. The alpha and beta testing periods end 5. The test budget is depleted

Posted by: Namrata Shah

Testing stops when the software meets the user requirement specification and completes user acceptance testing, including beta testing.

Posted by: kalpesh

Testing can be stopped when all the functionality is satisfied and no more bugs are found in the application.

Posted by: saravanaraj

Exit criteria can be defined during the test planning stage, depending upon the schedule (the available time frame for testing), the complexity of the code, the business impact, the budget, and the test coverage, along with predefined defect closure criteria (for example, there should not be any open sev1 or sev2 issues (open code defects) before the production deployment). Based on this, we can decide when to stop or complete the testing activity.

Posted by: anjali k

1. If the test engineer feels that the product is defect free. 2. When the deadline comes. 3. After completion of the customer acceptance test.

Posted by: suresh


'When to stop testing' is one of the most difficult questions for a test engineer. The following are some important and common test-stop criteria, on the basis of which one can stop testing: 1. All the high-priority bugs are fixed. 2. The rate at which bugs are found becomes too small. 3. The testing budget is exhausted. 4. The project duration is completed. 5. The risk in the project is under an acceptable limit. In other words, practically, the decision to stop testing is based on the level of risk acceptable to management.

Posted by: Rituraj Polak

When the acceptance criteria document matches the present project document, we stop testing.

Posted by: shiva leela

A few important points: 1. All the high-priority bugs are fixed. 2. The testing budget is exhausted. 3. The project duration is completed. 4. The risk in the project is under an acceptable limit.

13. What is a test case? A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.

A test case can be defined as the set of input parameters with which the software is tested.

Posted by: Sanky

A test case is a case that validates a particular condition.

Posted by: Gautam

A test case is a set of steps with user actions and the subsequent response from the system, in terms of step name, step description, and expected result: Step Name | Step Description | Expected Result

Posted by: Smruti Sabat

A test case is a statement that defines WHAT specific function/feature of the application will be tested. System requirements serve as good inputs for creating test cases.

Posted by: maheswar reddy


A test case is a sequence of steps to test the correct behaviour of a functionality/feature of an application. A test case should have an input description, a test sequence, and an expected behaviour. For example: Test Sequence: Schedule a report. [This can be treated as a title as well as a test sequence. The sequence is the order in which the steps are to be executed; the test case document should be prepared so that the test cases follow a sequence.] Test Input Description: 1. Log in to <Abc page> as administrator. 2. Go to the Reports page. 3. Click on the 'Schedule Reports' button. 4. Add reports. 5. Update. Expected Results: The report schedule should get added to the report schedule table. The provisioning status of the reports should get handled.

Posted by: Kiran Antony

A test case is a set of inputs given to a specific build in order to compare the expected results of the customer with the actual values observed from the build.

Posted by: siva

A test case should have the following data: 1. Test case ID 2. Unit of test case (what is to be verified) 3. Test data (variables and their values) 4. Steps to be executed 5. Expected result 6. Actual result 7. Pass/Fail 8. Comments
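The fields listed above can be captured in a simple record structure. This is only an illustrative sketch; the field names follow the list, but any real test management tool defines its own schema:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One row of a test case sheet; fields mirror the list above."""
    case_id: str
    unit: str            # what is to be verified
    test_data: dict      # variables and their values
    steps: list          # steps to be executed
    expected: str
    actual: str = ""
    status: str = "Not Run"   # becomes Pass or Fail after execution
    comment: str = ""

tc = TestCase(
    case_id="TC-001",
    unit="Login with valid credentials",
    test_data={"user": "admin", "password": "secret"},
    steps=["Open login page", "Enter credentials", "Click Login"],
    expected="Home page is displayed",
)
print(tc.case_id, tc.status)  # TC-001 Not Run
```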

Posted by: akanksha panday

A test case is a written document by referring to which we can validate the application.

14. What is a TRM?

TRM means Test Responsibility Matrix. It indicates the mapping between test factors and development stages. Test factors include ease of use, reliability, portability, authorization, access control, audit trail, ease of operation, and maintainability. Development stages include requirement gathering, analysis, design, coding, testing, and maintenance.

15. What is agile testing?

Agile testing is used whenever customer requirements are changing dynamically.

If we have no SRS or BRS but we have test cases, do you execute the test cases blindly, or do you follow any other process? The test cases would have detailed steps of what the application is supposed to do: 1) the functionality of the application; 2) in addition, you can refer to the back end, i.e. look into the database, to gain more knowledge of the application.

16.

In what basis you will write test cases?

I would write the test cases based on the functional specifications and BRDs, plus some more test cases using domain knowledge.

I will write the test cases after understanding the user requirements clearly.

17.

For web applications, what type of testing are you going to do?

Web-based applications present new challenges, which include: short release cycles; constantly changing technology; a possibly huge number of users during the initial website launch; inability to control the user's running environment; and 24-hour availability of the web site. The quality of a website must be evident from the onset. Any difficulty, whether in response time, accuracy of information, or ease of use, will compel the user to click over to a competitor's site. Such problems translate into loss of users, lost sales, and a poor company image. To overcome these types of problems, use the following techniques:
1. Functionality Testing: involves making sure the features that most affect user interactions work properly. These include forms, searches, pop-up windows, shopping carts, and online payments.
2. Usability Testing: many users have a low tolerance for anything that is difficult to use or that does not work. A user's first impression of the site is important, and many websites have become cluttered with an increasing number of features. On general-use websites, frustrated users can easily click over to a competitor's site. Usability testing involves these main steps: identify the website's purpose; identify the intended users; define tests and conduct the usability testing; analyze the acquired information.
3. Navigation Testing: good navigation is an essential part of a website, especially for sites that are complex and provide a lot of information. Assessing navigation is a major part of usability testing.
4. Forms Testing: websites that use forms need tests to ensure that each field works properly and that the forms post all data as intended by the designer.
5. Page Content Testing: each web page must be tested for correct content from the user's perspective. These tests fall into two categories: ensuring that each component functions correctly, and ensuring that the content of each is correct.
6. Configuration and Compatibility Testing: a key challenge for web applications is ensuring that the user sees a web page as the designer intended. The user can select different browser software and browser options, use different network software and online services, and run other concurrent applications. We execute the application under every browser/platform combination to ensure the web site works properly under various environments.
7. Reliability and Availability Testing: a key requirement of a website is that it be available whenever the user requests it, 24 hours a day, every day. The number of users accessing the web site simultaneously may also affect the site's availability.
8. Performance Testing: evaluates system performance under normal and heavy usage, and is crucial to the success of any web application. A system that takes too long to respond may frustrate the user, who can then quickly move to a competitor's site. Given enough time, every page request will eventually be delivered; performance testing seeks to ensure that the website server responds to browser requests within defined parameters.
9. Load Testing: the purpose of load testing is to model real-world experience, typically by generating many simultaneous users accessing the website. We use automation tools to increase the ability to conduct a valid load test, because they emulate thousands of users by sending simultaneous requests to the application or the server.
10. Stress Testing: consists of subjecting the system to varying and maximum loads to evaluate the resulting performance. We use automated test tools to simulate loads on the website and execute the tests continuously for several hours or days.
11. Security Testing: security is a primary concern when communicating and conducting business, especially sensitive and business-critical transactions, over the Internet. The user wants assurance that personal and financial information is secure. Finding the vulnerabilities in an application that would grant an unauthorized user access to the system is important.

18. What are the main key components in web applications and in client/server applications (differences)? Answer: a web application can be implemented using any kind of technology, such as Java, .NET, VB, ASP, CGI, or Perl, and the components are derived from the technology. Take a Java web application: it can be implemented in a 3-tier architecture with a presentation tier (JSP, HTML, DHTML, servlets, Struts), a business tier (JavaBeans, EJB, JMS), and a data tier (databases like Oracle, SQL Server, etc.). For a .NET application: a presentation tier (ASP, HTML, DHTML), a business tier (DLLs), and a data tier (databases like Oracle, SQL Server, etc.). Client/server applications have only 2 tiers: a presentation tier (e.g. Java, Swing) and a data tier (Oracle, SQL Server). In a client/server architecture the entire application has to be installed on the client machine, so whenever you change the code, it has to be reinstalled on all the client machines. In web applications, the core application resides on the server and the client can be a thin client (browser); whatever changes you make, you install the application only on the server and need not worry about the clients, because nothing is installed on the client machines.

19.

Smoke test? Do you use any automation tool for smoke testing?

Smoke testing checks whether the application performs its basic functionality properly or not, so that the test team can go ahead with testing the application. Automation tools can definitely be used for it.

20.

What are the differences between these words Error, Defect and Bug?

Error: the deviation from the required logic, syntax, or standards is called an error. There are three types of errors: syntax errors (deviation from the syntax of the language being used), logical errors (deviation from the logic the program is supposed to follow), and execution errors (which generally appear while executing the program). Defect: when an error is found by the test engineer (testing department), it is called a defect. Bug: if the defect is accepted by the developer, it becomes a bug, which has to be fixed by the developer or postponed to the next version.

21. A password field accepts 6 alphanumeric characters; what are the possible input conditions? Including special characters also, possible input conditions are: 1) Input password as 6abcde (i.e. number first) 2) Input password as abcde8 (i.e. character first) 3) Input password as 123456 (all numbers) 4) Input password as abcdef (all characters) 5) Input password with fewer than 6 characters 6) Input password with more than 6 characters 7) Input password with special characters 8) Input password in CAPITALS, i.e. uppercase 9) Input password including a space 10) A space followed by alphabetic/numeric/alphanumeric characters

21.a Define Brainstorming and Cause-Effect Graphing.

Brainstorming: a learning technique involving open group discussion intended to expand the range of available ideas; a meeting to generate creative ideas. For example, at Pepsi Advertising, daily, weekly, and bimonthly brainstorming sessions are held by various work groups within the firm, and a monthly I-Power brainstorming meeting is attended by the entire agency staff. Brainstorming is a highly structured process to help generate ideas, based on the principle that you cannot generate and evaluate ideas at the same time. To use brainstorming, you must first gain agreement from the group to try brainstorming for a fixed interval (e.g. six minutes).
Cause-Effect Graphing: a testing technique that aids in selecting, in a systematic way, a high-yield set of test cases by logically relating causes to effects. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.

21.b

Explain Software metrics?

Measurement is fundamental to any engineering discipline. Why metrics? We cannot control what we cannot measure; metrics help to measure quality and serve as a dashboard. The main metrics are size, schedule, and defects, each with sub-metrics:
Test coverage = number of units (KLOC/FP) tested / total size of the system
Test cost (in %) = cost of testing / total cost * 100
Cost to locate a defect = cost of testing / number of defects located
Defects detected in testing (in %) = defects detected in testing / total system defects * 100
Acceptance criteria tested = acceptance criteria tested / total acceptance criteria

21.c What is the testing environment in your company, i.e. how does the testing process start? The testing process flows as follows: quality assurance unit, quality assurance manager, test lead, test engineer.

21.d When a bug is found, what is the first action?

Report it in the bug tracking tool.

21.e

What is Bug life cycle?

New: when the tester reports a defect. Open: when the developer accepts that it is a bug; if the developer rejects the defect, the status is changed to "Rejected". Fixed: when the developer makes changes to the code to rectify the bug. Closed/Reopen: when the tester tests it again; if the expected result shows up, it is changed to "Closed", and if the problem persists, it is "Reopened".

For a valid defect: 1. New 2. Open 3. Fixed/Ready for Retest 4. If the issue is fixed, change the status to Closed; else reopen the defect and inform the dev team to rework on it. 5. Final status is Closed. For an invalid defect: 1. New 2. Open 3. Rejected.
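The status flow described above amounts to a small state machine. A sketch of it, with the transition table inferred from the answers above (real defect trackers each define their own state set):

```python
# Allowed moves between defect statuses, inferred from the life cycle above.
TRANSITIONS = {
    "New": {"Open", "Rejected"},
    "Open": {"Fixed", "Deferred"},
    "Fixed": {"Closed", "Reopen"},
    "Reopen": {"Fixed"},
    "Deferred": {"Open"},
    "Rejected": set(),
    "Closed": set(),
}

def move(status, new_status):
    """Advance a defect's status, refusing illegal jumps."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"cannot go from {status} to {new_status}")
    return new_status

s = move("New", "Open")   # tester logs the defect, developer accepts it
s = move(s, "Fixed")      # code change made, ready for retest
s = move(s, "Closed")     # retest passes
print(s)  # Closed
```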

21.f Give an example of high priority and low severity, and low priority and high severity. Severity level: the degree of impact the issue or problem has on the project. Severity 1 usually means the highest level, requiring immediate attention; severity 5 usually represents a documentation defect of minimal impact. Severity levels:
* Critical: the software will not run
* High: unexpected fatal errors (including crashes and data corruption)
* Medium: a feature is malfunctioning
* Low: a cosmetic issue
Alternatively:
1. Bug causes system crash or data loss.
2. Bug causes major functionality or other severe problems; product crashes in obscure cases.
3. Bug causes minor functionality problems; may affect "fit and finish".
4. Bug contains typos, unclear wording, or error messages in low-visibility fields.
Or:
* High: a major issue where a large piece of functionality or a major system component is completely broken. There is no workaround and testing cannot continue.
* Medium: a major issue where a large piece of functionality or a major system component is not working properly. There is a workaround, however, and testing can continue.
* Low: a minor issue that imposes some loss of functionality, but for which there is an acceptable and easily reproducible workaround. Testing can proceed without interruption.
Severity and priority: priority is relative; it might change over time. Perhaps a bug initially deemed P1 becomes a P2 or even a P3 as the schedule draws closer to the release and the test team finds even more heinous errors. Priority is a subjective evaluation of how important an issue is, given the other tasks in the queue and the current schedule. It is relative, it shifts over time, and it is a business decision. Severity is absolute: it is an assessment of the impact of the bug without regard to other work in the queue or the current schedule. The only reason severity should change is if we have new information that causes us to re-evaluate our assessment. If it was a high-severity issue when entered, it is still a high-severity issue when it is deferred to the next release; the severity hasn't changed just because we've run out of time, only the priority changed. Severity levels can also be defined as follows:
S1 - Urgent/Showstopper, like a system crash or an error message forcing the window to close. The tester's ability to operate the system is either totally (system down) or almost totally affected. A major area of the user's system is affected by the incident, and it is significant to business processes.
S2 - Medium/Workaround. Exists when a problem conflicts with the specs but the tester can go on with testing. The incident affects an area of functionality, but there is a workaround which negates the impact on the business process. This is a problem that: a) affects a more isolated piece of functionality; b) occurs only at certain boundary conditions; c) has a workaround (where "don't do that" might be an acceptable answer to the user); d) occurs only at one or two customers, or is intermittent.
S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way; they are cosmetic in nature and of no or very low impact to business processes.

21.g

How do you break down the project among team members?

It can depend on the following factors: 1) Number of modules 2) Number of team members 3) Complexity of the project 4) Time duration of the project 5) Team members' experience, etc.

21.h

What is test plan and explain its contents?

A test plan is a document that contains the scope for testing the application: what is to be tested, when it is to be tested, and who will test it.

The test plan contents are: 1) Introduction 2) Coverage of testing 3) Test strategy 4) Base criteria 5) Test deliverables 6) Test environment 7) Scheduling 8) Staffing and training 9) Risks and contingency plan 10) Assumptions 11) Approval information

21.i

What is the difference between the STLC and the SDLC?

The STLC is the software test life cycle. It starts with: preparing the test strategy; preparing the test plan; creating the test environment; writing the test cases; creating test scripts; executing the test scripts; analyzing the results and reporting the bugs; doing regression testing; and test exiting. The SDLC is the software (or system) development life cycle; its phases are: project initiation; requirement gathering and documenting; designing; coding and unit testing; integration testing; system testing; installation and acceptance testing; and support or maintenance.

21.j

What is the main use of preparing a traceability matrix?

A traceability matrix is prepared in order to cross-check the test cases designed against each requirement, giving an opportunity to verify that all the requirements are covered in testing the application. (Or) To cross-verify the prepared test cases and test scripts against the user requirements, and to monitor the changes and enhancements that occur during the development of the project.

It is a document containing a table of linking information, used for tracing back and for reference in case of any confusion or questions.
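A traceability matrix can be kept as a simple mapping from requirements to test cases, which makes coverage gaps easy to spot. A minimal sketch, with invented requirement and test case IDs:

```python
# Requirement-to-test-case mapping; IDs are invented for illustration.
matrix = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-03"],
    "REQ-3": [],          # no test case yet: a coverage gap
}

# Any requirement with no linked test cases is not covered by testing.
uncovered = [req for req, cases in matrix.items() if not cases]
print(uncovered)  # ['REQ-3']
```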

21.k

Advantages of automation over manual testing?

If we test the application manually, the reliability is not 100%: if we need to run a test 1000 times, we may manage only 500 to 600 runs, whereas automation gives you exact output every time. Through automation we get faster, repeatable, comprehensive testing.

21.l What is deferred status in the defect life cycle?

Deferred status means the developer accepted the bug, but it is scheduled to be rectified in a later build.

21.m

What is the maximum length of the test case we can write?

We can't state an exact test case length; it depends on the functionality.

21.n There are two sand clocks (timers): one empties completely in 7 minutes and the other in 9 minutes. Using these timers, we have to ring the bell after exactly 11 minutes. Solution: 1. Start both clocks. 2. When the 7-minute clock empties, turn it over so that it restarts. 3. When the 9-minute clock empties (at 9 minutes), turn the 7-minute clock over; only 2 minutes of sand have fallen since it restarted, so it now holds exactly 2 minutes. 4. When the 7-minute clock empties again, 11 minutes are complete.
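The four steps can be replayed arithmetically to confirm the total:

```python
def bell_time():
    """Replay the sand-clock steps and return the elapsed minutes at the bell."""
    t = 0
    t += 7   # 0 -> 7: the 7-minute clock empties; flip it (step 2)
    t += 2   # 7 -> 9: the 9-minute clock empties; the restarted 7-minute
             # clock has dropped 2 minutes of sand, so flipping it leaves
             # exactly 2 minutes on top (step 3)
    t += 2   # 9 -> 11: those 2 minutes run out; ring the bell (step 4)
    return t

print(bell_time())  # 11
```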

21.o What are the main bugs which were identified by you and in that how many are considered as real bugs?

If you take one screen, let's say it has 50 test conditions, out of which I identified 5 defects that failed. I should give the defect description, severity, and defect classification. All the defects will be considered. Defect classifications are: GRP: graphical representation; LOG: logical error; DSN: design error; STD: standard error; TST: wrong test case; TYP: typographical error (cosmetic error).

21.p

How many positive and negative test cases will you write for a module?

That depends on the module and the complexity of its logic. For every test case we can identify positive and negative points, and based on those criteria we write the test cases. If it is a crucial process or screen, we should check the screen against all the boundary conditions.

21.q

What are cookies? Tell me the advantage and disadvantage of cookies?

Cookies are messages that web servers pass to your web browser when you visit Internet sites. Your browser stores each message in a small file. When you request another page from the server, your browser sends the cookie back to the server. These files typically contain information about your visit to the web page, as well as any information you've volunteered, such as your name and interests. Cookies are most commonly used to track web site activity. When you visit some sites, the server gives you a cookie that acts as your identification card. Upon each return visit to that site, your browser passes that cookie back to the server. In this way, a web server can gather information about which web pages are used the most, and which pages are gathering the most repeat hits. Only the web site that creates the cookie can read it. Additionally, web servers can only use information that you provide or choices that you make while visiting the web site as content in cookies. Accepting a cookie does not give a server access to your computer or any of your personal information. Servers can only read cookies that they have set, so other servers do not have access to your information. Also, it is not possible to execute code from a cookie, and not possible to use a cookie to deliver a virus.
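As an illustration of the round trip described above, Python's standard http.cookies module can build a Set-Cookie header value on the server side and parse it on the browser side (the visitor_id name and value are made up for the example):

```python
from http.cookies import SimpleCookie

# "Server" side: build a cookie that acts as the visitor's identification card.
cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"       # hypothetical identifier
cookie["visitor_id"]["path"] = "/"
header = cookie["visitor_id"].OutputString()
print(header)

# "Browser" side: parse the header it received, then echo the cookie
# back to the same server on the next request.
received = SimpleCookie()
received.load(header)
print(received["visitor_id"].value)
```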

21.r

What is mean by release notes?

It is a document released along with the product that explains the product. It also describes the bugs that are in deferred status.

21.s

What is Test Data Collection?

Test data is the collection of input data used for testing the application. Input data of various types and sizes is used to test the application. In critical applications, the test data collection is sometimes provided by the client as well.

21.t

What is bidirectional traceability?

Bidirectional traceability needs to be implemented both forward and backward (i.e., from requirements to end products and from end products back to requirements). When the requirements are managed well, traceability can be established from the source requirement to its lower-level requirements and from the lower-level requirements back to their source. Such bidirectional traceability helps determine that all source requirements have been completely addressed and that all lower-level requirements can be traced to a valid source.
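A minimal sketch of this idea (the IDs are invented): keep the forward trace as a map, derive the backward trace by inverting it, then flag requirements with no lower-level items as well as lower-level items with no valid source:

```python
# Forward trace: source requirement -> lower-level requirements derived from it.
forward = {"REQ-1": ["LLR-1", "LLR-2"], "REQ-2": ["LLR-3"]}
valid_requirements = set(forward)

# Backward trace: invert the forward map (lower-level item -> its sources).
backward = {}
for req, items in forward.items():
    for item in items:
        backward.setdefault(item, []).append(req)

# Forward check: every source requirement is addressed by something.
unaddressed = [req for req, items in forward.items() if not items]
# Backward check: every lower-level item traces to a valid source.
orphans = [item for item, reqs in backward.items()
           if not set(reqs) & valid_requirements]
print(unaddressed, orphans)
```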

21.a.a

How many types object repository in QTP?

There are two types of object repository in QTP: 1. Local (per-action) object repository. 2. Shared (global) object repository.

21.a.b

How can we generate a script for an exception handler in QTP?

We can handle exceptions using the Recovery Scenario Manager, or within the script itself. 1) Using a recovery scenario: capture the properties of the exception object and create the recovery scenario (named after the exception) from the options, then add it to the test's resources via the File menu. In the script, add a Wait() with some value (for example 10) at the point where the exception can occur, so the run pauses, lets the recovery scenario fire, and then continues with the script. 2) Without the Recovery Scenario Manager: handle the error in the script itself, jumping to the appropriate step when the event occurs, and add the recovery logic in whatever framework you are using.

21.a.c What is action split, and what is the purpose of using it in QTP? Also, please explain calling an action. How is this done and why is it done? Action split is mainly used for re-usability: it splits one action into two separate actions. To call an action, go to the call action properties and select "Call to existing action". This is useful, for example, when you have a saved reusable action and want to invoke it from another test.

21.a.d

What should we do when QTP reports an error because it fails to identify an object during a run?

If you get that error, click on the Details option in the pop-up message; it tells you where the error is. Then go to the Object Repository and highlight the particular object. If it highlights in the application, there may be a problem in your script; otherwise, add the object to the repository by clicking the Add Objects button.

21.a.e

How to pass parameters from one test to another test in QTP?

Here is one example of passing values between two actions. Step 1: Select Action1's properties and define its input and output parameters.

'Action1 script
Dim sum
sum = Parameter("t") 'Input parameter
MsgBox(sum)
Parameter("a") = 89 'Output parameter

Step 2: In Action2, call Action1, passing the input value and receiving the output value.

'Action2 script
RunAction "Action1", oneIteration, "20", x
MsgBox(x) 'Action1's output parameter comes to Action2 automatically
ExitRun()

21.a.f

What does it mean when a checkpoint is in red color? What do you do?

A red color indicates failure. We then analyze the cause of the failure: whether it is a script issue, an environment issue, or an application issue.

21.a.g

How do we connect to an Oracle database from QTP?

Either we can use the Microsoft Query wizard, or we can connect by hard-coding the database connection string in Expert view. We have to create a new DSN, or use an existing DSN, to connect to the database.
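The two options above boil down to two styles of ADO/ODBC connection string: one that names an existing DSN, and a DSN-less one that names the driver and server directly. A small Python sketch of how such strings are typically composed (the driver name, DSN, host and credentials here are all illustrative, not values from the original text):

```python
def oracle_conn_string(user, password, dsn=None, host=None, service=None):
    """Compose an ODBC-style connection string for Oracle (illustrative)."""
    if dsn:
        # DSN-based: reuse a data source already configured in ODBC.
        return f"DSN={dsn};UID={user};PWD={password};"
    # DSN-less: name the ODBC driver and the server directly.
    return (f"Driver={{Microsoft ODBC for Oracle}};"
            f"Server={host}/{service};UID={user};PWD={password};")

print(oracle_conn_string("scott", "tiger", dsn="OraDSN"))
print(oracle_conn_string("scott", "tiger", host="dbhost", service="ORCL"))
```

In the QTP script itself, a string like this would be passed to an ADO connection, e.g. Set conn = CreateObject("ADODB.Connection") followed by conn.Open connStr.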
