Test results and data are often difficult to interpret and troubleshoot -- and sometimes even difficult to find. Additionally, output can vary depending on the language layers or levels of abstraction. For example, server logs display low-level, detailed information that is helpful for debugging but cumbersome to read through, especially for users with limited technical skills. On the other hand, reporting interfaces that show high-level information, such as executed steps, their outcomes and error messages, are much more accessible to all user types. Additionally, many teams synchronize their test output with external tools such as project management databases.
Log Output

Log output, like in the example below, does contain an error message about incompatible step details when ensuring call forwarding. However, it might not be obvious how to read through the log or what to look for. This is because such server logs include all log messages, including protocol logs and debug information, in addition to the test's steps and their outcomes.
Therefore, it's important to know:
- What type of logs are generated by the test automation software
- If logs are available in a centralized location or only on the user's machine
- Who can access the logs
- If log data is easily parseable
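When log data is parseable, the test steps and their outcomes can be extracted programmatically from the surrounding protocol and debug noise. The sketch below illustrates the idea in Python; the log line format, level names and step phrasing are hypothetical assumptions for illustration, not intaQt's actual log format.

```python
import re

# Hypothetical log format: "<timestamp> <LEVEL> Step '<name>': <PASSED|FAILED> [message]"
# (illustrative only -- a real tool's server logs may look quite different)
STEP_PATTERN = re.compile(
    r"^(?P<ts>\S+)\s+(?P<level>INFO|ERROR)\s+"
    r"Step '(?P<step>[^']+)': (?P<outcome>PASSED|FAILED)(?: (?P<msg>.*))?$"
)

def parse_step_outcomes(lines):
    """Extract (step, outcome, message) tuples, skipping debug/protocol noise."""
    results = []
    for line in lines:
        m = STEP_PATTERN.match(line)
        if m:
            results.append((m.group("step"), m.group("outcome"), m.group("msg") or ""))
    return results

sample = [
    "2024-01-15T10:00:01 DEBUG AT command sent to device",
    "2024-01-15T10:00:02 INFO Step 'ensure call forwarding': FAILED incompatible step details",
    "2024-01-15T10:00:03 INFO Step 'hang up': PASSED",
]
print(parse_step_outcomes(sample))
# → [('ensure call forwarding', 'FAILED', 'incompatible step details'), ('hang up', 'PASSED', '')]
```

A script like this only works if the logs follow a stable, documented format -- which is exactly why parseability is worth checking up front.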
Learn More: Documentation: intaQt Logs and Reports
Reporting and Summaries

While having access to logs is critical for troubleshooting and debugging, a quick overview of results across test cases and whole projects makes it much easier to grasp pass/fail rates, common errors and bug tracking at a glance.
Because they are user-friendly and present results at a higher level of abstraction, reporting services are extremely useful for communicating results to non-technical audiences who are interested in a project's outcome. For example, QiTASC's conQlude reporting service would depict the same log output that we showed above in a report like the one below:
Having the option to further define and group error categories is useful for several reasons:
- They give a general overview of common errors
- Categories that reflect user errors provide insight about training or re-training needs
- Error categories can be further grouped by additional criteria such as severity level, device types or test phase
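The kind of grouping described above amounts to simple aggregation over failure records. The following sketch shows one way to do it in Python; the record fields and category names are made-up examples, not conQlude's actual data model.

```python
from collections import Counter, defaultdict

# Hypothetical failure records -- field names and values are illustrative only.
failures = [
    {"category": "user error",      "severity": "low",    "device": "Android"},
    {"category": "network timeout", "severity": "high",   "device": "iOS"},
    {"category": "user error",      "severity": "low",    "device": "iOS"},
    {"category": "network timeout", "severity": "high",   "device": "Android"},
    {"category": "script defect",   "severity": "medium", "device": "Android"},
]

# General overview of common errors
by_category = Counter(f["category"] for f in failures)

# Further grouping by an additional criterion, e.g. severity level
by_severity = defaultdict(Counter)
for f in failures:
    by_severity[f["severity"]][f["category"]] += 1

print(by_category.most_common())
print({sev: dict(cats) for sev, cats in by_severity.items()})
```

Swapping `"severity"` for `"device"` or a test-phase field gives the other groupings mentioned above with no change to the aggregation logic.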
Learn More: conQlude Reporting Service Tutorial
Synchronization with Other Tools and Data Export

In addition to accessing test output within a test automation tool, teams often need to synchronize results internally or with external customers. For example, a team may internally use JIRA to track their test executions and defects, while the customer uses HP ALM. Certain reporting tools, such as QiTASC's conQlude, simplify this by automatically synchronizing to both systems. Being able to download data as CSVs or other parseable formats may also be important for data analysts and project managers who wish to track certain metrics or KPIs.
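As a rough illustration of the CSV-export idea, the snippet below serializes a few result rows with Python's standard `csv` module. The field names and result records are hypothetical, not the schema of any particular reporting tool.

```python
import csv
import io

# Hypothetical test results -- names and fields are illustrative only.
results = [
    {"test_case": "CallForwarding_01", "outcome": "FAILED", "error_category": "incompatible step"},
    {"test_case": "CallForwarding_02", "outcome": "PASSED", "error_category": ""},
]

def export_csv(rows):
    """Serialize result rows to CSV text that analysts can load into other tools."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["test_case", "outcome", "error_category"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(export_csv(results))
```

Because CSV is universally readable, an export like this can feed spreadsheets, BI dashboards or ad-hoc KPI scripts without any tool-specific integration.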
Learn More: Transparent Project Reporting with QiTASC conQlude