Tests are simple routines that check the operation of your code. Testing operates at different levels.
Some tests might apply to a tiny detail (does a particular model method return values as expected?)
while others examine the overall operation of the software (does a sequence of user inputs on the
site produce the desired result?). That's no different from using the shell to examine the behavior of
a method, or running the application and entering data to check how it behaves. What's different in
automated tests is that the testing work is done for you by the system. You create a set of tests once,
and then as you make changes to your app, you can check that your code still works as you
originally intended, without having to perform time-consuming manual testing.
4. What are unit test, integration test, system test, regression test, acceptance test? What is the
major difference between these?
Unit test: a level of testing where the smallest testable parts of individual components (called units)
are tested to determine whether they are fit for use. Unit test cases are written and executed by the
developer (not the tester) to make sure that the individual units work as expected. Units are the
smallest parts of individual components, such as functions, procedures, classes, and interfaces.
Taking functions as an example: we pass input parameters to a function and check whether the
function returns the expected values. The main intention of this activity is to check whether units
work as designed and handle errors and exceptions cleanly; both positive and negative conditions
should be handled properly. This is the very first level of testing and is performed before integration
testing. Since this testing requires knowledge of the code, it is also known as white-box testing.
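As a minimal sketch of such a test (the `Calculator` class and its `add` method are hypothetical, and a plain assertion is used instead of a framework such as JUnit), passing known inputs to a function and checking the returned value might look like this:

```java
// A tiny unit under test: a hypothetical Calculator class.
class Calculator {
    static int add(int a, int b) {
        return a + b;
    }
}

// A framework-free unit test: feed known inputs, check the output.
class CalculatorTest {
    static void testAddReturnsSum() {
        int actual = Calculator.add(2, 3);
        if (actual != 5) {
            throw new AssertionError("expected 5 but got " + actual);
        }
    }

    public static void main(String[] args) {
        testAddReturnsSum();
        System.out.println("testAddReturnsSum passed");
    }
}
```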
Integration test: tests the correct inter-operation of multiple subsystems. There is a whole spectrum
here, from testing the integration between two classes to testing the integration with the production
environment.
System test: the type of testing to check the behaviour of a complete and fully integrated software
product based on the software requirements specification (SRS) document. The main focus of this
testing is to evaluate Business / Functional / End-user requirements.
Regression test: re-runs previously passing tests after a change to verify that existing functionality
still works, i.e. that the change has not introduced new defects.
Acceptance test: tests that a feature or use case is correctly implemented. It is similar to an
integration test, but with a focus on the use case it provides rather than on the components involved.
5. What is the difference between functional and non-functional testing? Give an example.
Functional testing involves testing the application against the business requirements. The goal of
functional testing is to verify that the application is behaving the way it was designed to. Functional
testing ensures that your software is ready for release to the public. It also verifies that all the
specified requirements have been incorporated. There are two major categories of functional
testing: positive and negative functional testing. Positive functional testing involves inputting valid
inputs to see how the application responds to these and also testing to determine if outputs are
correct. Negative functional testing involves using different invalid inputs, unanticipated operating
conditions and other invalid operations. While functional testing is concerned about business
requirements, non-functional testing is designed to figure out if your product will provide a good
user experience. For example, non-functional tests are used to determine how fast the product
responds to a request or how long it takes to perform an action. The major difference between
functional and non-functional testing is: functional testing ensures that your product meets customer
and business requirements and doesn't have any major bugs, while non-functional testing checks
whether the product lives up to customer expectations. Basically, functional testing determines
whether the application's features and operations perform the way they should; non-functional
testing determines how well the product behaves, for example in terms of performance, usability, or
reliability.
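To make the contrast concrete, here is a sketch (the `Greeter` class, its method, and the timing budget are all hypothetical): the first test checks what the method does, the second checks how fast it does it.

```java
// Hypothetical unit under test.
class Greeter {
    static String greet(String name) {
        return "Hello, " + name + "!";
    }
}

class GreeterTests {
    // Functional test: verifies WHAT the method does (a business requirement).
    static void testGreetProducesExpectedText() {
        if (!Greeter.greet("Ada").equals("Hello, Ada!")) {
            throw new AssertionError("wrong greeting text");
        }
    }

    // Non-functional test: verifies HOW WELL it does it (a performance budget).
    static void testGreetIsFastEnough() {
        long start = System.nanoTime();
        for (int i = 0; i < 1_000; i++) {
            Greeter.greet("Ada");
        }
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMillis > 1_000) { // deliberately generous, illustrative budget
            throw new AssertionError("too slow: " + elapsedMillis + " ms");
        }
    }

    public static void main(String[] args) {
        testGreetProducesExpectedText();
        testGreetIsFastEnough();
        System.out.println("both tests passed");
    }
}
```

Real non-functional tests would typically use a dedicated load- or performance-testing tool rather than a hand-rolled timer, but the division of concerns is the same.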
8. What does mocking mean? How would you do it 'manually' (i. e. without using any fancy
framework)?
If you look up the noun mock in a dictionary, you will find that one of its definitions is "something
made as an imitation". Mocking is primarily used in unit testing. A unit under test
may have dependencies on other (complex) units. To isolate the behavior of the unit you want to
test you replace the other units by mocks that simulate the behavior of the real units. This is useful
if the real units are impractical to incorporate into the unit test. In short, mocking is creating units
that simulate the behaviour of real units.
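Doing this manually just means writing the imitation yourself: implement the dependency's interface by hand and record how it was called. A sketch, with all names (`MailService`, `UserNotifier`, and so on) hypothetical:

```java
// The dependency the unit under test relies on.
interface MailService {
    void send(String recipient, String body);
}

// Unit under test: notifies a user through the MailService dependency.
class UserNotifier {
    private final MailService mail;
    UserNotifier(MailService mail) { this.mail = mail; }
    void notifyUser(String user) {
        mail.send(user, "You have a new message");
    }
}

// A hand-written mock: implements the interface and records how it was
// called, so the test can verify the interaction afterwards.
class MockMailService implements MailService {
    int sendCount = 0;
    String lastRecipient;
    public void send(String recipient, String body) {
        sendCount++;
        lastRecipient = recipient;
    }
}

class UserNotifierTest {
    public static void main(String[] args) {
        MockMailService mock = new MockMailService();
        new UserNotifier(mock).notifyUser("alice");
        if (mock.sendCount != 1 || !"alice".equals(mock.lastRecipient)) {
            throw new AssertionError("UserNotifier did not use MailService as expected");
        }
        System.out.println("mock verified");
    }
}
```

Frameworks such as Mockito automate exactly this pattern: generating the imitation and verifying the recorded calls.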
9. What is the difference between stub and mock?
A stub is a "minimal" simulated unit. The stub implements just enough behavior to allow the unit
under test to execute the test. A mock is like a stub but the test will also verify that the unit under
test calls the mock as expected. Part of the test is verifying that the mock was used correctly. To
give an example: You can stub a database by implementing a simple in-memory structure for storing
records. The unit under test can then read and write records to the database stub to allow it to
execute the test. This could test some behavior of the unit not related to the database and the
database stub would be included just to let the test run. If you instead want to verify that the unit
under test writes some specific data to the database you will have to mock the database. Your test
would then incorporate assertions about what was written to the database mock. A stub object's
functionality is to return predefined values to different calls; it tests a certain state. A mock object,
on the other hand, is used to test behavior (for example, whether certain methods are called).
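The in-memory database example above can be sketched as follows (the `RecordStore` interface and both implementations are hypothetical): the stub only makes the test runnable, while the mock additionally records writes so the test can assert on them.

```java
// Dependency: a simple key-value record store.
interface RecordStore {
    void put(String key, String value);
    String get(String key);
}

// A stub: a minimal in-memory implementation that just lets tests run.
// The test makes no assertions about how the stub itself was used.
class InMemoryRecordStore implements RecordStore {
    private final java.util.Map<String, String> data = new java.util.HashMap<>();
    public void put(String key, String value) { data.put(key, value); }
    public String get(String key) { return data.get(key); }
}

// A mock: additionally records interactions so the test can verify them.
class MockRecordStore extends InMemoryRecordStore {
    final java.util.List<String> writtenKeys = new java.util.ArrayList<>();
    public void put(String key, String value) {
        writtenKeys.add(key);
        super.put(key, value);
    }
}

class StubVsMockDemo {
    public static void main(String[] args) {
        // With the stub we only care that the unit under test can run.
        RecordStore stub = new InMemoryRecordStore();
        stub.put("user:1", "Ada");
        System.out.println("stub read back: " + stub.get("user:1"));

        // With the mock we additionally assert on the interaction itself.
        MockRecordStore mock = new MockRecordStore();
        mock.put("user:2", "Grace");
        if (!mock.writtenKeys.contains("user:2")) {
            throw new AssertionError("expected a write for key user:2");
        }
        System.out.println("mock interaction verified");
    }
}
```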
Faster feedback: the team gets almost immediate feedback on the components they develop and test.
Higher acceptance: the team only accepts an implementation when the predefined tests (based on the
user stories) are all green.
Avoid scope creep: the test cases or unit test drivers define the exact set of required features. TDD
makes it easy to identify redundant code and to detect and terminate unnecessary engineering tasks.
Customer-centric: an iteration can be defined as the implementation of functionality that executes
against a pre-defined set of test cases.
Modularity: the team is forced to think in small units that can be written and tested independently.
Add a test: in test-driven development, each new feature begins with writing a test that defines a
function or an improvement of a function, which should be very succinct. To write the test, the
developer must clearly understand the feature's specification and requirements; use cases and user
stories can cover the requirements and exception conditions. The test can be written in whatever
testing framework is appropriate to the software environment, and it could be a modified version of
an existing test. This is a differentiating feature of test-driven development versus writing unit tests
after the code is written: it makes the developer focus on the requirements before writing the code, a
subtle but important difference.
Write code: the next step is to write some code that causes the test to pass. At this point, the only
purpose of the written code is to pass the test; the programmer must not write code beyond the
functionality that the test checks.
Run tests: if all test cases now pass, the programmer can be confident that the new code meets the
test requirements and does not break or degrade any existing features. If they do not pass, the new
code must be adjusted until they do.
Refactor: the growing code base must be cleaned up regularly during test-driven development. New
code can be moved from where it was convenient for passing a test to where it more logically
belongs, and duplication must be removed. Object, class, module, variable, and method names
should clearly represent their current purpose and use as extra functionality is added. As features are
added, method bodies can get longer and objects larger; they benefit from being split up and their
parts carefully named to improve readability and maintainability, which will be increasingly
valuable later in the software lifecycle. Inheritance hierarchies may be rearranged to be more logical
and helpful, and perhaps to benefit from recognized design patterns. By continually re-running the
test cases throughout each refactoring phase, the developer can be confident that the process is not
altering any existing functionality.
Repeat.
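A compressed sketch of one red-green cycle (the `fizz` function and all class names are hypothetical): the expectations are written first, and only just enough implementation is added to satisfy them.

```java
// Step 2 (green): just enough implementation to satisfy the pre-written
// test below, and nothing more.
class FizzBuzz {
    static String fizz(int n) {
        if (n % 3 == 0) return "Fizz";
        return Integer.toString(n);
    }
}

class FizzBuzzTest {
    public static void main(String[] args) {
        // Step 1 (red): these expectations were written first and initially
        // failed, driving the implementation above.
        if (!FizzBuzz.fizz(3).equals("Fizz")) throw new AssertionError();
        if (!FizzBuzz.fizz(4).equals("4")) throw new AssertionError();
        System.out.println("green: all tests pass");
        // Step 3: refactor while keeping these tests green, then repeat
        // with the next expectation.
    }
}
```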
Strong collaboration: with BDD, all the involved parties have a strong understanding of the project,
they can all take part in communication, and they can have genuinely constructive discussions.
High visibility: by using a language understood by all, everyone gets strong visibility into the
project's progression. Software development meets the user's need, and the software design follows
the business value.
13. What are the unit testing best practices? (Eg. how many assertion should a test case
contain?)
Keep in mind: whatever testing method you choose, it won't be useful if you use it wrong. For
example, if you write a test for code coverage but do not properly check the results in the test, the
test is worthless. Here are some points, without claiming completeness:
Never push a failing test to the repository. (Use @Ignore if really necessary.)
Use a separate folder for tests, as you might not want to deliver (i.e. ship to the user) your tests
along with the production code.
Give descriptive names to test methods so that you can see more easily what fails (note:
underscores are allowed in JUnit test method names).
Write tests for the error cases and corner cases.
Check only a single thing in one test method (practically: use one assert per test method).
Use an assert (or an expected exception) in every test method.
Use the expected result as the first argument of assertEquals(expected, actual).
An assert makes no sense if you expect an exception: execution never reaches the assert once the
exception is thrown.
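A sketch of the argument-order and expected-exception points (the `assertEquals` here is a hand-rolled stand-in for JUnit's, and `divide` and both test names are hypothetical):

```java
class MiniAssert {
    // Hand-rolled stand-in for JUnit's assertEquals(expected, actual):
    // the expected value comes first, so failure messages read correctly.
    static void assertEquals(Object expected, Object actual) {
        if (expected == null ? actual != null : !expected.equals(actual)) {
            throw new AssertionError("expected <" + expected + "> but was <" + actual + ">");
        }
    }
}

class BestPracticeDemo {
    static int divide(int a, int b) { return a / b; }

    // One assert per test method, descriptive name, expected value first.
    static void divide_returnsQuotient_forExactDivision() {
        MiniAssert.assertEquals(4, divide(8, 2));
    }

    // Expected-exception style: no assert needed, because the thrown
    // exception itself is the verification (execution never reaches any
    // code placed after the failing call).
    static void divide_throws_onZeroDivisor() {
        try {
            divide(1, 0);
            throw new AssertionError("expected ArithmeticException");
        } catch (ArithmeticException expected) {
            // test passes
        }
    }

    public static void main(String[] args) {
        divide_returnsQuotient_forExactDivision();
        divide_throws_onZeroDivisor();
        System.out.println("all demo tests passed");
    }
}
```

In JUnit itself the second style would be written with `@Test(expected = ArithmeticException.class)` or `assertThrows`, but the try/catch form shows what the framework does under the hood.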