Time and again, I see test reports that omit important information. Therefore, in this blog post, I want to shed some light on the matter and offer some essential tips on test reports. This article is intended to provide assistance to anyone who creates or reviews test reports and may be unsure of what to look for.
General content in the test report
In the following table I give you a small checklist of the general content that should be included in a test report.
| Content for test report | Explanation of the content |
| --- | --- |
| Purpose of the test | The test report should include a description of the purpose of this test. For example, whether it is a preliminary test for a component or a system, and which specification is being tested against. Specifying the specification document and version is highly recommended. If the specification changes, it is immediately apparent which specification was tested against in this test. If a test plan exists, it should be referenced. |
| Test object | The test report must state which system or component is being tested. This can be a system, hardware, software, or other component. The goal is a unique designation that clearly identifies the test object. Ideally, a serial number is available. |
| Version | A system usually has a unique version. Please ensure that information about software versions is listed. Software is usually a central component, and a test is not meaningful without specifying the versions. Version information is equally essential for hardware, mechanics, and complete systems. |
| Configuration | Systems, components, and software can often be configured. The configuration must be known so that the test can be repeated under the same conditions. Examples of configuration include the language setting or other system settings. For hardware, any patches must be specified so that it is clear which modifications have been made. |
| Test method | The test report should specify which test method was used. Typical examples are inspection, analysis, demonstration, and manual or automated testing. |
| Test environment | This specifies the environment in which a test is to be performed. This could be, for example, a normal workplace at room temperature. However, there may also be special requirements regarding lighting conditions, climate conditions, ESD environment, or other requirements. These should then be defined in the test environment. |
| Test equipment | The test may require measuring instruments, simulators, software, or other tools. The tools used must be listed in the test report, clearly and traceably identified by test equipment number or serial number and, for software, by version. For example: a multimeter with its test equipment number and calibration date. For measuring equipment, ensure that it can deliver the required measurement accuracy and that its calibration is present and still valid. |
| Sample size with justification | Some tests can be performed using a single sample. Other tests require either multiple runs or multiple samples. It is important to state in the test report how many samples were used for testing. This information must be justified so that an auditor can understand it. For example, if a test checks whether a certain software behavior has been implemented, a sample size of 1 is usually sufficient, since the system is deterministic and the behavior has either been implemented or not. The situation is different if, for example, you are testing when a component breaks under mechanical pressure. In this case, multiple samples must be used to account for statistical fluctuations. |
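The general information above can also be captured in a structured form, which makes it harder to forget a field. The following sketch models the checklist as a Python dataclass; all field names and example values are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class TestReportHeader:
    """General information a test report should carry (hypothetical field names)."""
    purpose: str          # what is tested, against which specification (incl. version)
    test_object: str      # unique designation, ideally with serial number
    version: str          # software/hardware versions of the test object
    configuration: str    # settings needed to repeat the test under identical conditions
    test_method: str      # e.g. inspection, analysis, demonstration, manual/automated test
    test_environment: str # e.g. room temperature, ESD-protected workplace
    test_equipment: list = field(default_factory=list)  # instruments with ID and calibration date
    sample_size: int = 1
    sample_size_rationale: str = "deterministic software behavior"

# Illustrative example values (invented for this sketch):
header = TestReportHeader(
    purpose="Verify power-on behavior against spec SRS-042 v1.3",
    test_object="Controller board, S/N 000815",
    version="FW 2.4.1",
    configuration="Language: EN, default settings",
    test_method="Manual functional test",
    test_environment="Room temperature, standard workplace",
    test_equipment=["Multimeter #ME-17, calibrated 2024-03-01"],
)
```

Defaulting the sample size to 1 with a rationale string mirrors the rule above: one sample suffices for deterministic behavior, but the justification must still be recorded explicitly.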
Content of the test cases
In addition to the general information, the focus is on the actual test cases. The following table also shows the required content here.
| Content for test report | Explanation of the content |
| --- | --- |
| Test Case ID | Each test case should have a unique ID that allows it to be uniquely identified. Tools can simplify this process by automatically managing IDs for you. The IDs can then be linked to the requirement, and you automatically receive traceability. |
| Precondition | Every test requires certain preconditions that must be met before the test can be executed. For example, you might want to test a system with a full battery. Or you might want the system to be in a specific device state with a specific configuration. This information must be defined in the preconditions. |
| Test steps | It's best to break a test down into test steps. Test steps allow you to describe a sequence of procedures that are then executed one after the other. Each test step should be given a name. You can either simply number it (1, 2, 3, ...) or assign it a meaningful name (e.g., power on, measure time, power off). |
| Procedure or description | Each test step is given a clear procedure or description that specifies what the tester or the automated test environment is to do in that test step. This description should be clear, understandable, and complete, and should not contain any implicit assumptions. After all, the test must be understandable and executable by people other than the test writer. |
| Acceptance criterion | The acceptance criterion is by far the most important element of a test plan or test report. It defines the criteria for accepting a test as passed. The acceptance criterion must match the underlying requirement. An example of an acceptance criterion: "The test is passed if the system displays the measured values on the display within 3 seconds of pressing the power button." |
| Result | The result contains the information about what the tester observed. It is a neutral description of how the system behaved during the test. Pass or fail is not the result, but the evaluation of the result! It must be possible to reproduce the result and compare it with the acceptance criterion; a simple "pass" does not allow this. Please also avoid simply copying the acceptance criterion. It is highly recommended that you provide evidence of the results, such as photos, videos, or log data, and file these with the results and reference them. Objective evidence is very helpful both in audits and when you want to reproduce or repeat a test later. In my opinion, the effort required to document the results is limited, and the benefits clearly outweigh it. |
| Comments and deviations | If acceptance criteria are not met during a test, or if other deviations or anomalies arise, these should be documented for each test step. It is recommended to use a bug tracking tool to manage anomalies and errors. Then, you can enter the error ID in the test report and use it to reference the bug tracking tool. |
| Date and signature | Unless the tests are performed with tools such as Polarion, it is recommended to date and sign each test. After all, especially with larger specifications, the tests may not all be performed on the same day or by the same person. However, it is imperative to know who performed the tests and when. A tool can record this for you; otherwise, enter it manually. |
| Summary or Assessment | At the very end of a test report, there should be a section summarizing the tests. This can be in the form of a prose description or a table that summarizes the test cases. It's important to provide a simple overview of whether there are any deviations, and if so, which ones, or whether all tests have been passed. |
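The per-test-case entries feed directly into the summary at the end of the report. As a minimal sketch (hypothetical field names, invented example data), the following shows how individual results, each with a neutral observation, an evaluation, and an optional bug-tracker reference, can be rolled up into the summary section:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestCaseResult:
    test_case_id: str
    result: str                        # neutral observation, not just "pass"
    passed: bool                       # evaluation against the acceptance criterion
    deviation_id: Optional[str] = None # bug-tracker reference, if any

def summarize(results):
    """Build the summary section: totals plus the list of referenced deviations."""
    failed = [r for r in results if not r.passed]
    return {
        "total": len(results),
        "passed": len(results) - len(failed),
        "failed": len(failed),
        "deviations": [r.deviation_id for r in failed if r.deviation_id],
    }

# Invented example data for illustration:
results = [
    TestCaseResult("TC-001", "Display showed measured values after 2.1 s", True),
    TestCaseResult("TC-002", "No values displayed within 3 s", False, "BUG-123"),
]
summary = summarize(results)
# summary["failed"] == 1, summary["deviations"] == ["BUG-123"]
```

Note that the `result` field stays a description of observed behavior, while `passed` carries the evaluation; keeping the two separate is exactly the distinction made in the Result row above.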
I hope this checklist provides you with some guidance to improve the quality of your test reports. Test reports, in particular, require very accurate documentation.
Best regards
Goran Madzar
