
1. What are automated tests?

Tests are simple routines that check the operation of your code. Testing operates at different levels.
Some tests might apply to a tiny detail (does a particular model method return values as expected?)
while others examine the overall operation of the software (does a sequence of user inputs on the
site produce the desired result?). That's no different from using the shell to examine the behavior of
a method, or running the application and entering data to check how it behaves. What's different in
automated tests is that the testing work is done for you by the system. You create a set of tests once,
and then as you make changes to your app, you can check that your code still works as you
originally intended, without having to perform time-consuming manual testing.
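As a minimal sketch of such an automated test (the Invoice class and its method are made up for illustration, JUnit 4 syntax assumed), a test checking one tiny detail of a model might look like this:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical model class used only for illustration.
class Invoice {
    private final double net;
    Invoice(double net) { this.net = net; }
    double grossWithVat(double vatRate) { return net * (1 + vatRate); }
}

public class InvoiceTest {

    // Checks one tiny detail of the model: the gross price calculation.
    @Test
    public void grossWithVat_addsVatToNetPrice() {
        Invoice invoice = new Invoice(100.0);
        assertEquals(127.0, invoice.grossWithVat(0.27), 0.0001);
    }
}

Once written, this test can be re-run after every change instead of checking the calculation by hand.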

2. Why do you need to create tests?


Tests will save you time. In a more sophisticated application, you might have dozens of complex
interactions between components. A change in any of those components could have unexpected
consequences on the application's behavior. Checking that it still seems to work could mean
running through your code's functionality with twenty different variations of your test data just to
make sure you haven't broken something - not a good use of your time. That's especially true when
automated tests could do this for you in seconds. If something's gone wrong, tests will also assist in
identifying the code that's causing the unexpected behavior. Tests also make your code more
attractive to other developers, as "code without tests is broken by design".

3. Define testing methods!


Black-box testing: testing the functionality of the software without knowing its internal code
structure. White-box testing: testing the software with knowledge of its internal structure.
Basically, you know the code and write the tests to exercise that code.

4. What are unit tests, integration tests, system tests, regression tests and acceptance tests? What is
the major difference between them?
Unit test: a level of testing where the smallest parts of individual components (called units) are
tested to determine whether they are fit for use. Unit test cases are written and executed by the
developer (not the tester) to make sure that individual units are working as expected. The smallest
parts of individual components are things like functions, procedures, classes and interfaces. Taking a
function as an example: we pass input parameters to the function and check that it returns the
expected values. The main intention of this activity is to check whether units work as designed and
handle errors and exceptions neatly. Both positive and negative conditions should be handled
properly. This is the very first level of testing and is performed before integration testing. Since this
testing requires knowledge of the code, it is a form of white-box testing.
Integration test: tests the correct inter-operation of multiple subsystems. There is a whole spectrum
here, from testing the integration between two classes to testing the integration with the production
environment.
System test: testing that checks the behaviour of a complete and fully integrated software
product against the software requirements specification (SRS) document. The main focus of this
testing is to evaluate business, functional and end-user requirements.
Regression test: re-running previously passing tests after a change to make sure that existing
functionality has not been broken by that change.
Acceptance test: tests that a feature or use case is correctly implemented. It is similar to an
integration test, but with a focus on the use case it provides rather than on the components involved.
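A minimal sketch of a unit test covering both a positive and a negative condition (the Calculator class is a made-up example, JUnit 4 syntax assumed):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical unit under test, used only for illustration.
class Calculator {
    int divide(int dividend, int divisor) {
        if (divisor == 0) {
            throw new IllegalArgumentException("divisor must not be zero");
        }
        return dividend / divisor;
    }
}

public class CalculatorTest {

    // Positive condition: valid input produces the expected value.
    @Test
    public void divide_returnsQuotientForValidInput() {
        assertEquals(4, new Calculator().divide(12, 3));
    }

    // Negative condition: invalid input is rejected with an exception.
    @Test(expected = IllegalArgumentException.class)
    public void divide_rejectsZeroDivisor() {
        new Calculator().divide(12, 0);
    }
}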

5. What is the difference between functional and non-functional testing? Give an example.
Functional testing involves testing the application against the business requirements. The goal of
functional testing is to verify that the application is behaving the way it was designed to. Functional
testing ensures that your software is ready for release to the public. It also verifies that all the
specified requirements have been incorporated. There are two major categories of functional
testing: positive and negative functional testing. Positive functional testing involves feeding valid
inputs to the application, checking how it responds and whether the outputs are correct. Negative
functional testing involves using invalid inputs, unanticipated operating conditions and other invalid
operations. While functional testing is concerned with business requirements, non-functional testing
is designed to figure out whether your product will provide a good user experience. For example,
non-functional tests are used to determine how fast the product responds to a request or how long it
takes to perform an action. The major difference between functional and non-functional testing is
this: functional testing ensures that your product meets customer and business requirements and
doesn't have any major bugs, while non-functional testing checks whether the product lives up to
customer expectations. Basically, functional testing is designed to determine that the application's
features and operations perform the way they should; non-functional testing is designed to determine
how well the product behaves, for example its performance, usability or reliability.
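As a rough sketch of the two kinds of test side by side (the SearchService class is made up for illustration; JUnit 4's timeout attribute stands in for a very simple non-functional performance check):

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

import org.junit.Test;

// Hypothetical service used only for illustration.
class SearchService {
    List<String> search(String query) {
        // Imagine a real lookup here; this is a trivial stand-in for the sketch.
        if ("books".equals(query)) {
            return Arrays.asList("Clean Code");
        }
        return Collections.emptyList();
    }
}

public class SearchServiceTest {

    // Functional: the feature returns the expected result for a valid query.
    @Test
    public void search_returnsMatchingItems() {
        assertEquals(Arrays.asList("Clean Code"), new SearchService().search("books"));
    }

    // Non-functional (performance): the request must finish within 500 ms.
    @Test(timeout = 500)
    public void search_respondsWithinHalfASecond() {
        new SearchService().search("books");
    }
}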

6. What is code coverage? Why is it used? How can you measure it?


Code coverage measures how much of the application's code is exercised when the tests are
run. This makes sure that the tests being run are actually testing the code of the application. Code
coverage is a technique to measure how much of the software the tests cover and which parts of the
software are not covered by any test. The tester is able to find out which parts of the code are
exercised by the tests. Using the code coverage figures and the number of bugs found in the application, we
can build confidence in the quality and functioning of the system. Code coverage also gives an idea of the
minimum set of test cases that needs to be executed to gain confidence in the system.
Code coverage is collected by using a specialized tool to instrument the binaries with tracing calls
and then running the full set of automated tests against the instrumented product. A good tool will give you not
only the percentage of the code that is executed, but will also allow you to drill into the data and see
exactly which lines of code were executed during a particular test. While code coverage is a good
metric of how much testing you are doing, it is not necessarily a good metric of how well you are
testing your product. There are other metrics you should use along with code coverage to ensure
quality. Coverage = (number of coverage items exercised / total number of coverage items) × 100%.
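As an illustration of the formula with made-up numbers: if the automated tests execute 450 of the 600 statements in a code base, statement coverage is 450 / 600 × 100% = 75%.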

7. What are JUnit and Mockito?


JUnit is a test framework for Java. It is used for both unit and integration tests.
Mockito is a mocking framework that lets you write beautiful tests with a clean and simple API.
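A small sketch of how the two are typically combined (the UserRepository and GreetingService names are made up for illustration): JUnit runs the test, while Mockito creates and verifies the mock of the dependency.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

// Hypothetical collaborator and unit under test, for illustration only.
interface UserRepository {
    String findNameById(long id);
}

class GreetingService {
    private final UserRepository repository;
    GreetingService(UserRepository repository) { this.repository = repository; }
    String greet(long id) { return "Hello, " + repository.findNameById(id) + "!"; }
}

public class GreetingServiceTest {

    @Test
    public void greet_usesTheRepositoryToBuildTheGreeting() {
        // Mockito creates a mock of the dependency and stubs its answer...
        UserRepository repository = mock(UserRepository.class);
        when(repository.findNameById(42L)).thenReturn("Ada");

        // ...so the unit can be tested in isolation by JUnit.
        GreetingService service = new GreetingService(repository);
        assertEquals("Hello, Ada!", service.greet(42L));

        // Verify that the collaborator was called as expected.
        verify(repository).findNameById(42L);
    }
}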

8. What does mocking mean? How would you do it 'manually' (i.e. without using any fancy framework)?
If you look up the noun "mock" in the dictionary, you will find that one of the definitions of the word
is "something made as an imitation". Mocking is primarily used in unit testing. A unit under test
may have dependencies on other (complex) units. To isolate the behavior of the unit you want to
test, you replace the other units with mocks that simulate the behavior of the real units. This is useful
if the real units are impractical to incorporate into the unit test. In short, mocking is creating units
that simulate the behaviour of real units.
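Done manually, a mock is simply a class that implements the same interface as the real unit, returns canned values and records how it was called, so the test can assert on that record afterwards. A sketch, with the MailSender and RegistrationService names made up for illustration:

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

// Hypothetical dependency of the unit under test (illustration only).
interface MailSender {
    void send(String address, String body);
}

// Unit under test: notifies a user by mail on registration.
class RegistrationService {
    private final MailSender mailSender;
    RegistrationService(MailSender mailSender) { this.mailSender = mailSender; }
    void register(String address) { mailSender.send(address, "Welcome!"); }
}

public class RegistrationServiceTest {

    // A hand-written mock: implements the same interface and records every call
    // so the test can verify how the unit under test used it.
    static class RecordingMailSender implements MailSender {
        final List<String> sentTo = new ArrayList<>();
        @Override
        public void send(String address, String body) { sentTo.add(address); }
    }

    @Test
    public void register_sendsAWelcomeMail() {
        RecordingMailSender mailSender = new RecordingMailSender();
        new RegistrationService(mailSender).register("ada@example.com");

        assertEquals(1, mailSender.sentTo.size());
        assertEquals("ada@example.com", mailSender.sentTo.get(0));
    }
}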
9. What is the difference between stub and mock?
A stub is a "minimal" simulated unit. The stub implements just enough behavior to allow the unit
under test to execute the test. A mock is like a stub but the test will also verify that the unit under
test calls the mock as expected. Part of the test is verifying that the mock was used correctly. To
give an example: You can stub a database by implementing a simple in-memory structure for storing
records. The unit under test can then read and write records to the database stub to allow it to
execute the test. This could test some behavior of the unit not related to the database and the
database stub would be included just to let the test run. If you instead want to verify that the unit
under test writes some specific data to the database you will have to mock the database. Your test
would then incorporate assertions about what was written to the database mock. A stub object's
functionality is to return predefined values to different calls; it tests a certain state. A mock object,
on the other hand, is used to test behavior (for example, whether certain methods are called).
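To make the distinction concrete, here is a sketch using Mockito for both styles (the RecordStore and Shouter names are made up for illustration): the first test only stubs data and asserts on the returned state, the second verifies the interaction with the dependency.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

// Hypothetical "database" dependency and unit under test, for illustration only.
interface RecordStore {
    String read(long id);
    void write(long id, String value);
}

class Shouter {
    private final RecordStore store;
    Shouter(RecordStore store) { this.store = store; }
    String shout(long id) {
        String upper = store.read(id).toUpperCase();
        store.write(id, upper);
        return upper;
    }
}

public class StubVsMockTest {

    // Stub style: the fake only supplies canned data; the assertion checks state
    // (the value returned by the unit under test), not how the store was used.
    @Test
    public void stub_suppliesDataForAStateBasedAssertion() {
        RecordStore store = mock(RecordStore.class);
        when(store.read(1L)).thenReturn("hello");

        assertEquals("HELLO", new Shouter(store).shout(1L));
    }

    // Mock style: the assertion is about behaviour - that the unit under test
    // wrote the expected data to the store.
    @Test
    public void mock_verifiesTheInteraction() {
        RecordStore store = mock(RecordStore.class);
        when(store.read(1L)).thenReturn("hello");

        new Shouter(store).shout(1L);
        verify(store).write(1L, "HELLO");
    }
}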

10. What is a test case? What is an assertion? Give examples!


A test case is a document that has a set of test data, preconditions, expected results and
postconditions, developed for a particular test scenario in order to verify compliance with a
specific requirement. A test case acts as the starting point for test execution; after applying a
set of input values, the application has a definitive outcome and leaves the system at some end point,
also known as the execution postcondition.
An assertion is a boolean expression at a specific point in a program which will be true unless there
is a bug in the program. A test assertion is defined as an expression that encapsulates some
testable logic specified about a target under test.
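As a sketch of the assertion part in JUnit (the palindrome scenario is a made-up example), a test case's expected results are expressed with assertions such as assertEquals, assertTrue and assertFalse:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class AssertionExamplesTest {

    // Test case, informally: precondition - none; test data - "level";
    // expected result - the word reads the same forwards and backwards.
    @Test
    public void reversingAPalindromeGivesTheSameWord() {
        String word = "level";
        String reversed = new StringBuilder(word).reverse().toString();

        // Assertions: boolean expressions that must hold unless there is a bug.
        assertEquals(word, reversed);
        assertTrue(word.length() > 0);
        assertFalse(word.contains(" "));
    }
}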

11. What is TDD? What are the benefits?


Test-driven development (TDD) is a software development process that relies on the repetition of a
very short development cycle: requirements are turned into very specific test cases, then the
software is improved just enough to pass the new tests. Test-driven development is related to the test-first
programming concepts of extreme programming. First, write the test; then run it (red); then make it
work (green); then make it right (refactor).

Faster feedback: the team has almost immediate feedback on the components they develop
and test. Higher acceptance: the team only accepts an implementation when the predefined tests (based
on the user stories) are all green. Avoiding scope creep: the test cases or unit test drivers define the
exact set of required features, which makes it easy to identify redundant code and to detect and terminate
unnecessary engineering tasks. Customer-centric: an iteration can be defined as the implementation
of the functionality needed to pass a pre-defined set of test cases. Modularity: the team is forced to
think in small units that can be written and tested independently.

(Add new tests. In test-driven development, each new feature begins with writing a test. Write a test
that defines a function or improvements of a function, which should be very succinct. To write a
test, the developer must clearly understand the feature's specification and requirements. The
developer can accomplish this through use cases and user stories to cover the requirements and
exception conditions, and can write the test in whatever testing framework is appropriate to the
software environment. It could be a modified version of an existing test. This is a differentiating
feature of test-driven development versus writing unit tests after the code is written: it makes the
developer focus on the requirements before writing the code, a subtle but important difference. The
next step is to write some code that causes the test to pass. At this point, the only purpose of the
written code is to pass the test. The programmer must not write code that is beyond the functionality
that the test checks. Run tests. If all test cases now pass, the programmer can be confident that the
new code meets the test requirements, and does not break or degrade any existing features. If they
do not, the new code must be adjusted until they do. Refactor code. The growing code base must be
cleaned up regularly during test-driven development. New code can be moved from where it was
convenient for passing a test to where it more logically belongs. Duplication must be removed.
Object, class, module, variable and method names should clearly represent their current purpose and
use, as extra functionality is added. As features are added, method bodies can get longer and other
objects larger. They benefit from being split and their parts carefully named to improve readability
and maintainability, which will be increasingly valuable later in the software lifecycle. Inheritance
hierarchies may be rearranged to be more logical and helpful, and perhaps to benefit from
recognized design patterns. There are specific and general guidelines for refactoring and for
creating clean code. By continually re-running the test cases throughout each refactoring phase, the
developer can be confident that the process is not altering any existing functionality. Repeat.)
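As a sketch of one red/green/refactor cycle (the FizzBuzz names are made up for illustration): the tests below are written first and fail because FizzBuzz.render does not exist yet (red); the minimal implementation underneath is added just to make them pass (green) and can then be refactored while the tests stay green.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class FizzBuzzTest {

    // Step 1 (red): these tests are written before the production code.
    @Test
    public void multiplesOfThreeAreReplacedByFizz() {
        assertEquals("Fizz", FizzBuzz.render(9));
    }

    @Test
    public void otherNumbersAreRenderedAsIs() {
        assertEquals("7", FizzBuzz.render(7));
    }
}

// Step 2 (green): just enough production code to make the tests pass.
// Step 3 (refactor): clean up while keeping the tests green.
class FizzBuzz {
    static String render(int number) {
        return number % 3 == 0 ? "Fizz" : String.valueOf(number);
    }
}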

12. What is BDD? What are the benefits?


Behavior-driven development (BDD) is a software development methodology in which an
application is specified and designed by describing how its behavior should appear to an outside
observer. Behavior-driven development should be focused on the business behaviors your code is
implementing: the "why" behind the code. Instead of coding the functions directly, you tell the code exactly
what you want it to do by using a style that is closer to the way we write sentences. You get a
clearer understanding of what the system should do from the perspective of both the developer and the
customer, while TDD only gives the developer an understanding of what the system should do. That
means that BDD enables the developers and the customers to work together on the requirements
analysis that is contained within the source code of the system.

Strong collaboration: with BDD, all the involved parties have a strong understanding of the project,
and they can all have a role in communication and take part in constructive discussions. High
visibility: by using a language understood by all, everyone gets strong visibility into the project's
progression. Software development meets the users' needs, and the software design follows business value.
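BDD tools such as Cucumber or JBehave let scenarios be written in near-natural language; as a rough sketch of the same given/when/then style expressed directly in a JUnit test (the Cart class is made up for illustration):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical domain class, for illustration only.
class Cart {
    private int items;
    void add(int count) { items += count; }
    int itemCount() { return items; }
}

public class CartBehaviourTest {

    // The test name and structure describe behaviour as an outside observer sees it.
    @Test
    public void givenAnEmptyCart_whenTwoItemsAreAdded_thenTheCartContainsTwoItems() {
        // given
        Cart cart = new Cart();
        // when
        cart.add(2);
        // then
        assertEquals(2, cart.itemCount());
    }
}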

13. What are the unit testing best practices? (E.g. how many assertions should a test case contain?)
Keep in mind: whichever testing method you choose, it won't be useful if you are using it
wrong. For example, if you write a test only to gain coverage but do not check the results properly, the
test is worthless. Here are some points, without claiming completeness (a short sketch illustrating
several of them follows the list):
- Never push a failing test to the repository. (Use @Ignore if really necessary.)
- Use a separate folder for tests, as you might not want to deliver (i.e. give to the user) your tests
along with the production code.
- Give descriptive names to test methods so that you can see more easily what fails (note:
underscores are allowed in JUnit test method names).
- Write tests for the error cases and corner cases.
- Check only a single thing in one test method (practically: use one assert per test method).
- Use an assert (or an expected exception) in every test method.
- Use the expected result as the first argument of assertEquals(expected, actual).
- There is no sense in an assert if you expect an exception.
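A short sketch illustrating several of these points at once (the Account class is made up for illustration): descriptive test names, one assertion per test, the expected value first, an expected exception for the error case, and @Ignore instead of a failing test.

import static org.junit.Assert.assertEquals;

import org.junit.Ignore;
import org.junit.Test;

public class AccountTest {

    // Hypothetical unit under test, kept inline for the sketch.
    static class Account {
        private int balance;
        void deposit(int amount) {
            if (amount <= 0) {
                throw new IllegalArgumentException("amount must be positive");
            }
            balance += amount;
        }
        int balance() { return balance; }
    }

    // Descriptive name, a single assertion, expected value passed first.
    @Test
    public void deposit_increasesTheBalanceByTheDepositedAmount() {
        Account account = new Account();
        account.deposit(100);
        assertEquals(100, account.balance());
    }

    // Error case covered with an expected exception instead of an assert.
    @Test(expected = IllegalArgumentException.class)
    public void deposit_rejectsNonPositiveAmounts() {
        new Account().deposit(0);
    }

    // A not-yet-working test is ignored rather than pushed failing.
    @Ignore("pending: overdraft rules not implemented yet")
    @Test
    public void deposit_appliesOverdraftRules() {
    }
}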
