Unit testing

From Knowledge Kitchen


Kent Beck, originator of unit testing

Unit testing is an automated testing technique developed in 1992 by Kent Beck to solve the annoyance of manually testing code for obvious errors while working with the Smalltalk language.

  • this was then developed into JUnit by Beck and Erich Gamma, a fellow passenger, during an airplane flight from Zurich to the 1997 OOPSLA conference, since Java had become a hot language by then and Beck wanted to learn it
  • he gave the new tool to Martin Fowler, who then shared the project with others, and the rest is history

JUnit ports and the xUnit ecosystem

  • most other unit testing frameworks in other languages today are based on the concepts in JUnit
  • the collection of unit testing frameworks based on JUnit, implemented in various languages, is collectively known as xUnit
  • these are primarily open source, since JUnit is open source

Thus, unit testing frameworks in all languages share a common history and common features:

  • the tests are written in the same language as the code being tested
  • rather than printing out values to make sure they are correct as the program executes, unit testing frameworks automatically detect correctness by comparing actual values to expected values
  • they differ from integration tests in that they test one unit at a time
  • the same frameworks can be used for integration tests by combining multiple units in a single test

Advantages of automated testing

  • less time consuming and tedious
  • provides immediate feedback of failures
  • allows developers to focus more on writing code
  • requires fewer human resources
  • more reliable than manual testing

Test-driven development

Test-driven development (TDD), originally proposed by Kent Beck, is the idea of writing the tests before writing the software that will be tested.

  • This has proven to be a very popular development workflow
  • It forces developers to think about what they want their code to do before writing it
  • By writing unit tests before coding, developers are essentially defining an API of the code to be developed
  • A spin-off concept, called behavior-driven development (BDD) aims to make unit testing more human-focused by renaming some unit testing tasks, but is technically the same process
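To make the test-first workflow concrete, here is a minimal sketch in plain Java (with no JUnit dependency, so it runs standalone). The Calculator class, its add() method, and the hand-rolled assertEquals helper are all hypothetical names used for illustration, not part of any real library:

```java
public class TddSketch {
    // the production code -- conceptually written only AFTER the test below
    // pinned down its API: a Calculator class with an int add(int, int) method
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    // a hand-rolled stand-in for JUnit's assertEquals, to keep this self-contained
    static void assertEquals(int expected, int actual) {
        if (expected != actual)
            throw new AssertionError("expected " + expected + " but got " + actual);
    }

    public static void main(String[] args) {
        // the "test" written first: it fails to compile until Calculator exists,
        // and fails at runtime until add() behaves as specified
        Calculator calc = new Calculator();
        assertEquals(5, calc.add(2, 3));
        assertEquals(0, calc.add(0, 0));
        System.out.println("all tests passed");
    }
}
```

In a real TDD cycle the test would be written in a framework like JUnit, run once to watch it fail, and only then would Calculator be implemented to make it pass.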

Beck himself is not dogmatic, and proposes the following (don't take this as dogma either):

  • not writing tests first if you're trying out a new idea you want to bang out but you're not sure will end up working. Once the idea becomes established, he recommends switching to TDD.
  • not worrying about labels like TDD, BDD, or UAT and simply test in a way and at a scale that makes sense for the problem you're trying to solve
  • be distrustful of dogmatic people - focus on people who tell stories


A unit is the smallest testable part of an application

  • e.g., a single function, object, or class

Unit tests guarantee that each unit is working as expected. This can be valuable when creating code in a large project, since it provides some sort of security in knowing that:

  • each unit of code has known inputs and outputs, which for all intents and purposes means that there is an API to the code
  • refactoring of existing code to make it better organized and efficient will not accidentally introduce errors, since any such errors that change the behavior of the code will be immediately detected by the tests (called regression tests in this context)

Test cases

A formal unit test case consists of a known input and an expected output.

  • The known input is sometimes referred to as the pre-condition of a test case
  • The expected output is sometimes referred to as the post-condition of a test case

There should be at least two test cases for each unit

  • one that tests the expected output when given valid input
  • one that tests the expected output when given invalid input
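The valid/invalid pair might look like the following sketch in plain Java; the divide() unit and its division-by-zero rule are hypothetical examples, and the checks are hand-rolled so the snippet runs without JUnit:

```java
public class DivideTests {
    // the hypothetical unit under test
    static int divide(int a, int b) {
        if (b == 0) throw new IllegalArgumentException("division by zero");
        return a / b;
    }

    public static void main(String[] args) {
        // test case 1 -- valid input: the expected output is the quotient
        if (divide(10, 2) != 5) throw new AssertionError("valid-input test failed");

        // test case 2 -- invalid input: the expected "output" is a thrown exception
        boolean threw = false;
        try { divide(10, 0); } catch (IllegalArgumentException e) { threw = true; }
        if (!threw) throw new AssertionError("invalid-input test failed");

        System.out.println("2 tests passed");
    }
}
```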

Features of a good unit test


Good unit tests are TRIP-y:

  • Thorough
    • Although bugs tend to cluster around certain regions of the code, ensure that you test all key paths and scenarios your code might be used in
    • Use code coverage tools, if you must, to discover untested regions of the codebase
    • Tests should be testing something useful that allows you to differentiate good code from bad
  • Repeatable
    • Tests should produce the same results each time
    • Tests should not rely on uncontrollable parameters
  • Independent
    • Tests should test only one thing at a time. Multiple assertions are okay as long as they are all testing one feature/behavior. When a test fails, it should pinpoint the location of the problem.
    • Tests should be isolated and not rely on each other. No assumptions about order of test execution. Ensure a 'clean slate' before each test by performing setup/teardown appropriately
    • If you find that any given test is too large to thoroughly test, you should probably refactor code to create smaller units.
  • Professional
    • Follow the same standards of good code design for your test code.
    • In the long run you'll have as much test code as production code, if not more
    • Simple projects might have a 1-to-1 ratio of test code to production code; very complicated systems may have as much as 5-to-1 ratio of testing code to production code.
    • Well factored methods and classes with intuitive descriptive names, no redundancy, etc.

Fast and automatic

And in continuous integration, it is also important for unit tests to be:

  • Automatic
    • Invoking of tests as well as checking results for PASS/FAIL should be automatic
    • This saves time, money, and effort
  • Fast
    • Most tests complete very quickly, but there may be a few that take a very long time to complete
    • Any test that takes more than a second to complete is an indicator that the code may need to be refactored, although this isn't a guarantee if there's a good reason for the duration
    • The longer the test suite takes to run, the less frequently it will be run, and the more changes developers will try to sneak in between runs; when something then breaks, it takes longer to figure out which change was the culprit


Test only code unique to the system, and not code from external dependencies like databases or 3rd party frameworks or libraries.

  • unit testing frameworks offer tools to simulate dependencies, rather than actually testing them, since they are outside your control
  • do not test the platform code itself. For example, don't test code from Node.js itself when testing a Node.js app
  • unit tests should always create their own test data to execute against. That way, you can be confident that your tests aren’t dependent upon the state of a particular environment and will be repeatable even if they are executed in a different environment from which they were written.


  • Most unit testing frameworks provide a mechanism for creating Mock objects that simulate real dependencies that are external to the system being tested.
  • These mock objects can simply hard-code return values within their methods, rather than actually performing any complicated or time-consuming processes that may distract from the test goals
  • These mocks, and any other changes to the code that you do just for testing, should be set up within the testing code, and do not require changes to the production code.
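A mock can be as simple as a hand-written class or lambda that hard-codes its return value. In the following sketch, WeatherService stands in for an external dependency (imagine its real implementation makes an HTTP call), and ReportBuilder is the unit under test; all names here are hypothetical:

```java
public class MockSketch {
    // the interface of the external dependency (e.g., a remote weather API)
    interface WeatherService {
        int currentTempCelsius(String city);
    }

    // the unit under test, which depends on the service via its interface
    static class ReportBuilder {
        private final WeatherService service;
        ReportBuilder(WeatherService service) { this.service = service; }
        String report(String city) {
            return city + ": " + service.currentTempCelsius(city) + "C";
        }
    }

    public static void main(String[] args) {
        // the mock simply hard-codes its answer -- no real service is contacted,
        // so the test is fast, repeatable, and independent of the network
        WeatherService mock = city -> 21;
        ReportBuilder builder = new ReportBuilder(mock);
        String result = builder.report("Oslo");
        if (!result.equals("Oslo: 21C")) throw new AssertionError(result);
        System.out.println(result);
    }
}
```

Frameworks like Mockito automate the creation of such objects, but the principle is the same: substitute a controllable fake for the uncontrollable dependency.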

Setup and teardown

Most unit testing frameworks provide the ability to set up the scene for tests before they run, as well as cleanup, or teardown, of those setups after the test completes.

  • the term "test fixture" describes a fixed state in the code that is set up before a test is run
  • code to perform setup usually goes into a method called setUp()
  • code to tear down that setup goes into a method named tearDown()

Setup and teardown may include tasks like the following:

  • instantiating and destroying any objects required to run each test

You usually have tests that require objects to exist in a particular state prior to running the tests. In these cases, you set up these objects prior to running tests. This might include:

  • instantiating objects
  • initializing databases or Mock database
  • setting up any Mock objects to replace external dependencies
  • preparing any additional variables or values

And teardown will consist of undoing all of the setup from the setup method.
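The setup/teardown cycle that frameworks run around each test can be sketched in plain Java as follows; the fixture, the two tests, and the tiny runner loop are illustrative stand-ins for what JUnit's @BeforeEach/@AfterEach (or setUp()/tearDown()) machinery does automatically:

```java
import java.util.ArrayList;
import java.util.List;

public class FixtureSketch {
    static List<String> fixture;   // the shared test fixture

    static void setUp()    { fixture = new ArrayList<>(List.of("seed")); }
    static void tearDown() { fixture = null; }

    // each test may freely mutate the fixture, because it is rebuilt every time
    static void testAdd() {
        fixture.add("x");
        if (fixture.size() != 2) throw new AssertionError("testAdd failed");
    }
    static void testRemove() {
        fixture.remove("seed");
        if (!fixture.isEmpty()) throw new AssertionError("testRemove failed");
    }

    public static void main(String[] args) {
        Runnable[] tests = { FixtureSketch::testAdd, FixtureSketch::testRemove };
        for (Runnable test : tests) {
            setUp();      // fresh fixture before each test: a clean slate
            test.run();
            tearDown();   // cleanup so no state leaks into the next test
        }
        System.out.println(tests.length + " tests passed");
    }
}
```

Note that testRemove passes even though testAdd mutated the fixture first, because setUp() rebuilds it before every test; this is exactly the independence property described above.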


Tests should tell a story that is easily understood by others

  • document unit tests as you would any code
  • keep the same variable conventions as you would any code


Unit test code should be kept separate from production code

  • usually placed into a separate directory at least, or a totally separate project at most


Assertions

Unit tests are made up of assertions. Unit testing frameworks, which set up and perform the tests, often work in tandem with assertion libraries, which define which assertion types are available.

  • Assertions verify how correct your programs are
  • An assertion is a statement that a particular predicate holds at a given point in the program - it is either true or false
    • If false, the assertion fails and throws an error indicating what went wrong and where in the code it happened

Assertion methods

The following assertion methods are included in the assertion library within JUnit.

  • Since JUnit is the standard of reference for most other unit testing frameworks, similar methods will be found in most other unit testing frameworks' assertion libraries.
  • The following are some of the most commonly used assertion methods. Assertion libraries typically contain more assertions than those listed here.

Verifying shallow equality:

  • assertEquals(primitive/object expected, primitive/object actual) - verifies that the two arguments are indeed equal
  • assertNotEquals(primitive/object expected, primitive/object actual) - verifies that the two arguments are indeed not equal
  • for primitives these compare values, as == does; for objects they compare using the objects' equals() method - for example, testing whether a given function call returns a particular value or not

Verifying deeper equality:

  • assertArrayEquals(expectedArray, actualArray) - verifies that the two arrays contain the same values in the same order
  • assertIterableEquals(expected, actual) - in JUnit 5, verifies that two iterables contain equal elements in the same order

Verifying veracity:

  • assertTrue(boolean condition) - verifies that the boolean condition is indeed true
  • assertFalse(boolean condition) - verifies that the boolean condition is indeed false

Verifying existence:

  • assertNull(Object object) - verifies that the object is indeed null, meaning there is no actual object being referenced
  • assertNotNull(Object object) - verifies that the object is not null, meaning there is an actual object being referenced

Verifying sameness:

  • assertSame(object1, object2) - verifies that the two object references refer to the same object
  • assertNotSame(object1, object2) - verifies that the two object references refer to different objects

Verifying exceptions:

  • JUnit 5's assertThrows(expectedType, executable) method verifies that executing a given piece of code throws the expected exception type; in JUnit 4, the @Test(expected = ...) annotation served the same purpose
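What an exception assertion verifies can be sketched in plain Java without any JUnit dependency; the parsePositive() unit and the hand-rolled assertThrows helper below are illustrative stand-ins, not the real JUnit API:

```java
public class ThrowsSketch {
    // hypothetical unit under test: rejects negative numbers
    static int parsePositive(String s) {
        int n = Integer.parseInt(s);
        if (n < 0) throw new IllegalArgumentException("negative: " + n);
        return n;
    }

    // stand-in for assertThrows: the code MUST throw the expected type to pass
    static void assertThrows(Class<? extends Exception> expected, Runnable code) {
        try {
            code.run();
        } catch (Exception e) {
            if (expected.isInstance(e)) return;  // expected exception: pass
            throw new AssertionError("wrong exception type: " + e);
        }
        throw new AssertionError("expected " + expected.getSimpleName()
                + " but nothing was thrown");
    }

    public static void main(String[] args) {
        // the test passes precisely because parsePositive("-5") throws
        assertThrows(IllegalArgumentException.class, () -> parsePositive("-5"));
        System.out.println("exception test passed");
    }
}
```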

JUnit example

The following shows a simple example of a single test case that outputs failure messages if the assertions about the behavior of a MyClass object fail.

  • Note the @Test annotation that informs JUnit that this method is a test method
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

public class MyFirstTest {

    @Test
    public void multiplicationOfZeroIntegersShouldReturnZero() {
        MyClass tester = new MyClass(); // MyClass is tested

        // assert statements
        assertEquals(0, tester.multiply(10, 0), "10 x 0 must be 0");
        assertEquals(0, tester.multiply(0, 10), "0 x 10 must be 0");
        assertEquals(0, tester.multiply(0, 0), "0 x 0 must be 0");
    }
}


Test suites

xUnit testing frameworks allow developers to bundle up multiple test scripts into a Suite

  • This allows you to run a bunch of different test scripts at once with a single command

JUnit example

The following example shows a Suite consisting of two test classes.

  • Note the use of two annotations, @RunWith and @SuiteClasses, to indicate how the Suite should be run and what classes are included in the Suite.
  • Adding more test classes to the suite would be as simple as including them into the array argument to the @SuiteClasses annotation.
package edu.nyu.cs.fb1258.junit.first;

import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Suite.class)
@SuiteClasses({ MyFirstClassTest.class,
        MySecondClassTest.class })

public class AllTests {

}
