# Test Reporting & Dashboards

Create meaningful test reports and dashboards for stakeholders
## Introduction
Test reports aren't just about showing pass/fail numbers. Great test reports tell a story, provide actionable insights, and build confidence in your releases. This guide will teach you how to create reports that stakeholders actually want to read.
## The Purpose of Test Reporting
Good test reports serve multiple audiences:

- **Developers**: need details on what failed and why
- **Product Managers**: want to know if features are ready
- **QE Team**: tracks flakiness, trends, and coverage
- **Leadership**: needs high-level confidence metrics
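One practical consequence: generate several views from the same result set rather than one report for everyone. A minimal sketch of that idea follows; the shape of the `results` object here is illustrative, not taken from any specific framework:

```javascript
// Build audience-specific summaries from one set of test results.
// The `results` shape (total/passed/failed/failures) is an assumption.
function summarizeForAudience(results, audience) {
  const passRate = results.total > 0 ? (results.passed / results.total) * 100 : 0;
  switch (audience) {
    case 'developer':
      // Developers need the failing tests and their errors
      return results.failures.map(f => `${f.name}: ${f.error}`);
    case 'product':
      // Product managers care about release readiness
      return results.failed === 0 ? 'Ready for release' : `${results.failed} blocking failures`;
    case 'leadership':
      // Leadership wants one confidence number
      return `${passRate.toFixed(1)}% passing`;
    default:
      throw new Error(`Unknown audience: ${audience}`);
  }
}
```

The same data then feeds an HTML report, a Slack summary, and an executive dashboard without re-running anything.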
## What Makes a Good Test Report?

### 1. Actionable Insights
**Bad Report:**

```text
Tests Run: 1,247
Passed: 1,198
Failed: 49
Success Rate: 96.07%
```

**Good Report:**

```text
🔴 BLOCKING: 3 critical payment tests failing
🟡 WARNING: 12 tests flaky (50–99% pass rate over the last week)
🟢 PASSED: All authentication tests (156/156)

📊 Trends:
- Success rate down 2.3% from yesterday (98.4% → 96.1%)
- New failures in: Payment Service (3), Checkout Flow (2)
- Environment: Database latency +150ms (investigate)

🎯 Action Items:
1. Fix payment gateway timeout (TICKET-123)
2. Investigate checkout race condition (TICKET-124)
3. Scale up staging DB to reduce latency
```

### 2. Visual Clarity
Use colors, charts, and formatting to make data scannable:
```javascript
// HTML report with visual indicators
function getStatusClass(passRate) {
  // Map the pass rate onto a CSS class for color-coding
  return passRate >= 95 ? 'passing' : passRate >= 80 ? 'warning' : 'failing';
}

function generateTestSummary(results) {
  const passRate = (results.passed / results.total) * 100;
  const status = passRate >= 95 ? '🟢 PASSING' :
                 passRate >= 80 ? '🟡 UNSTABLE' :
                 '🔴 FAILING';

  return `
    <div class="summary ${getStatusClass(passRate)}">
      <h2>${status}</h2>
      <div class="metrics">
        <div class="metric">
          <span class="value">${results.passed}</span>
          <span class="label">Passed</span>
        </div>
        <div class="metric ${results.failed > 0 ? 'failed' : ''}">
          <span class="value">${results.failed}</span>
          <span class="label">Failed</span>
        </div>
        <div class="metric">
          <span class="value">${Math.round(passRate)}%</span>
          <span class="label">Success Rate</span>
        </div>
      </div>
    </div>
  `;
}
```

### 3. Historical Context
Show trends over time:
```javascript
// Track metrics over time (run inside an async function)
const testMetrics = {
  date: '2026-01-30',
  total: 1247,
  passed: 1198,
  failed: 49,
  skipped: 0,
  duration: 1847,   // seconds
  coverage: 78.5,   // percent
  flaky_tests: ['test_checkout_race', 'test_payment_timeout'],
  new_failures: ['test_new_feature_x'],
  environment: 'staging'
};

// Store in a database or time-series DB
await saveMetrics(testMetrics);

// Generate trend chart
const last7Days = await getMetrics({ days: 7 });
generateTrendChart(last7Days, ['passed', 'failed', 'duration']);
```

## Essential Metrics to Track
### Test Execution Metrics

```javascript
const executionMetrics = {
  // Volume metrics
  total_tests: 1247,
  total_test_cases: 3521,   // including data-driven variations

  // Outcome metrics
  passed: 1198,
  failed: 49,
  skipped: 0,
  blocked: 0,

  // Performance metrics
  total_duration: 1847,     // seconds
  avg_test_duration: 1.48,  // seconds
  slowest_test: 45.2,       // seconds

  // Reliability metrics
  flaky_count: 12,
  flaky_rate: 0.96,         // percent
  retry_count: 15,

  // Coverage metrics
  code_coverage: 78.5,      // percent
  api_coverage: 92.3,       // percent
  feature_coverage: 85.0    // percent
};
```

### Quality Metrics
```javascript
const qualityMetrics = {
  // Defect metrics
  bugs_found: 5,
  critical_bugs: 1,
  bugs_escaped_to_prod: 0,

  // Test effectiveness
  defect_detection_rate: 95.2,  // percent caught before prod
  false_positive_rate: 2.1,     // percent

  // Build health
  build_success_rate: 87.5,     // percent
  mean_time_to_recovery: 25,    // minutes

  // Automation metrics
  automated_test_count: 1247,
  manual_test_count: 53,
  automation_rate: 95.9         // percent
};
```

## Building Test Reports
### 1. JUnit XML Reports
Most test frameworks generate JUnit XML:
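A file in this format (a sample follows below) is also easy to consume programmatically. As a sketch, the suite-level counts can be pulled off the `<testsuite>` attributes with a couple of regexes; for anything beyond a quick dashboard feed, a real XML parser is the safer choice:

```javascript
// Sum tests/failures across all <testsuite> elements in a JUnit XML string.
// A regex is enough for these two well-known attributes; use a proper XML
// parser for nested or escaped content.
function summarizeJUnitXml(xml) {
  const totals = { tests: 0, failures: 0 };
  // \s after "testsuite" deliberately excludes the outer <testsuites> element
  const suiteRe = /<testsuite\s[^>]*>/g;
  for (const [tag] of xml.matchAll(suiteRe)) {
    const tests = /tests="(\d+)"/.exec(tag);
    const failures = /failures="(\d+)"/.exec(tag);
    if (tests) totals.tests += Number(tests[1]);
    if (failures) totals.failures += Number(failures[1]);
  }
  return totals;
}
```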
```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="API Test Suite" tests="10" failures="1" errors="0" time="12.345">
  <testsuite name="Authentication Tests" tests="5" failures="0" time="5.123">
    <testcase name="test_valid_login" classname="AuthTests" time="1.234"/>
    <testcase name="test_invalid_password" classname="AuthTests" time="0.987"/>
  </testsuite>
  <testsuite name="Payment Tests" tests="5" failures="1" time="7.222">
    <testcase name="test_successful_payment" classname="PaymentTests" time="2.345"/>
    <testcase name="test_payment_timeout" classname="PaymentTests" time="4.567">
      <failure message="Gateway timeout after 30s" type="TimeoutError">
        Stack trace here...
      </failure>
    </testcase>
  </testsuite>
</testsuites>
```

### 2. HTML Reports with Mochawesome
Add the mochawesome reporter to your `package.json` scripts:

```json
{
  "scripts": {
    "test:report": "mocha tests/**/*.spec.js --reporter mochawesome --reporter-options reportDir=reports,reportFilename=test-report"
  }
}
```

Reporter options can also be configured in code:

```javascript
// Configure mochawesome
const mochawesomeConfig = {
  reportDir: 'reports',
  reportFilename: 'test-report',
  html: true,
  json: true,
  overwrite: false,
  timestamp: 'isoDateTime',
  showPassed: true,
  showFailed: true,
  showPending: true,
  showSkipped: false,
  charts: true,
  inline: false
};
```

### 3. Custom Report Generator
```javascript
// custom-reporter.js
const fs = require('fs');
const path = require('path');

class CustomReporter {
  constructor(runner) {
    this.currentSuite = null;
    this.results = {
      suites: [],
      stats: {
        tests: 0,
        passes: 0,
        failures: 0,
        duration: 0,
        start: new Date(),
        end: null
      },
      failures: [],
      flaky: []
    };
    this.setupListeners(runner);
  }

  setupListeners(runner) {
    runner.on('start', () => {
      this.results.stats.start = new Date();
    });

    runner.on('suite', (suite) => {
      if (suite.title) {
        this.currentSuite = {
          title: suite.title,
          tests: [],
          duration: 0
        };
      }
    });

    runner.on('test end', (test) => {
      this.results.stats.tests++;
      const testResult = {
        title: test.title,
        duration: test.duration,
        state: test.state,
        error: test.err ? {
          message: test.err.message,
          stack: test.err.stack
        } : null
      };

      if (test.state === 'passed') {
        this.results.stats.passes++;
      } else if (test.state === 'failed') {
        this.results.stats.failures++;
        this.results.failures.push(testResult);
      }

      // Track flaky tests (retry succeeded)
      if (test._currentRetry > 0 && test.state === 'passed') {
        this.results.flaky.push(testResult);
      }

      // Guard: a test declared outside any titled suite has no currentSuite
      if (this.currentSuite) {
        this.currentSuite.tests.push(testResult);
      }
    });

    runner.on('suite end', (suite) => {
      if (suite.title) {
        this.results.suites.push(this.currentSuite);
      }
    });

    runner.on('end', () => {
      this.results.stats.end = new Date();
      this.results.stats.duration = this.results.stats.end - this.results.stats.start;
      this.generateReport();
    });
  }

  generateReport() {
    // Make sure the output directory exists before writing
    fs.mkdirSync('reports', { recursive: true });

    // Generate JSON report
    const jsonPath = path.join('reports', 'test-results.json');
    fs.writeFileSync(jsonPath, JSON.stringify(this.results, null, 2));

    // Generate HTML report
    const html = this.generateHTML();
    const htmlPath = path.join('reports', 'test-report.html');
    fs.writeFileSync(htmlPath, html);

    console.log(`\n📊 Reports generated:`);
    console.log(`   JSON: ${jsonPath}`);
    console.log(`   HTML: ${htmlPath}`);
  }

  generateHTML() {
    const { stats, failures, flaky } = this.results;
    // Compare numerically, then format for display
    const passRateNum = stats.tests > 0 ? (stats.passes / stats.tests) * 100 : 0;
    const passRate = passRateNum.toFixed(1);
    const status = passRateNum >= 95 ? 'passing' : passRateNum >= 80 ? 'warning' : 'failing';

    return `
<!DOCTYPE html>
<html>
<head>
  <title>Test Report - ${new Date().toISOString()}</title>
  <style>
    body { font-family: system-ui, -apple-system, sans-serif; margin: 0; padding: 20px; background: #f5f5f5; }
    .container { max-width: 1200px; margin: 0 auto; }
    .header { background: white; padding: 30px; border-radius: 8px; margin-bottom: 20px; }
    .status { font-size: 32px; font-weight: bold; margin-bottom: 20px; }
    .status.passing { color: #22c55e; }
    .status.warning { color: #eab308; }
    .status.failing { color: #ef4444; }
    .metrics { display: grid; grid-template-columns: repeat(4, 1fr); gap: 20px; }
    .metric { background: #f9fafb; padding: 20px; border-radius: 6px; text-align: center; }
    .metric .value { font-size: 36px; font-weight: bold; display: block; }
    .metric .label { color: #6b7280; font-size: 14px; margin-top: 8px; display: block; }
    .metric.failed .value { color: #ef4444; }
    .metric.flaky .value { color: #eab308; }
    .section { background: white; padding: 30px; border-radius: 8px; margin-bottom: 20px; }
    .test-item { border-left: 4px solid #22c55e; padding: 12px; margin: 8px 0; background: #f9fafb; }
    .test-item.failed { border-left-color: #ef4444; }
    .test-item.flaky { border-left-color: #eab308; }
    .error { color: #ef4444; margin-top: 8px; font-family: monospace; font-size: 12px; }
    .stack { color: #6b7280; margin-top: 4px; font-size: 11px; overflow-x: auto; }
  </style>
</head>
<body>
  <div class="container">
    <div class="header">
      <div class="status ${status}">${status.toUpperCase()}</div>
      <div class="metrics">
        <div class="metric">
          <span class="value">${stats.passes}</span>
          <span class="label">Passed</span>
        </div>
        <div class="metric ${stats.failures > 0 ? 'failed' : ''}">
          <span class="value">${stats.failures}</span>
          <span class="label">Failed</span>
        </div>
        <div class="metric ${flaky.length > 0 ? 'flaky' : ''}">
          <span class="value">${flaky.length}</span>
          <span class="label">Flaky</span>
        </div>
        <div class="metric">
          <span class="value">${passRate}%</span>
          <span class="label">Pass Rate</span>
        </div>
      </div>
    </div>
    ${failures.length > 0 ? `
    <div class="section">
      <h2>❌ Failed Tests (${failures.length})</h2>
      ${failures.map(test => `
      <div class="test-item failed">
        <strong>${test.title}</strong> <span style="color: #6b7280;">(${test.duration}ms)</span>
        ${test.error ? `
        <div class="error">${test.error.message}</div>
        <pre class="stack">${test.error.stack}</pre>
        ` : ''}
      </div>
      `).join('')}
    </div>
    ` : ''}
    ${flaky.length > 0 ? `
    <div class="section">
      <h2>⚠️ Flaky Tests (${flaky.length})</h2>
      <p>These tests failed initially but passed on retry. Investigate for reliability.</p>
      ${flaky.map(test => `
      <div class="test-item flaky">
        <strong>${test.title}</strong> <span style="color: #6b7280;">(${test.duration}ms)</span>
      </div>
      `).join('')}
    </div>
    ` : ''}
  </div>
</body>
</html>
`;
  }
}

module.exports = CustomReporter;
```

## Dashboard Tools
### 1. Allure Report
Beautiful test reports with history:
```shell
# Install Allure
npm install --save-dev allure-commandline

# Generate and open the report (npx runs the locally installed CLI)
npx allure generate ./allure-results --clean -o ./allure-report
npx allure open ./allure-report
```

```javascript
// Configure with Mocha
const allure = require('allure-mocha/runtime');

describe('Payment Tests', () => {
  it('should process payment successfully', async function() {
    allure.epic('E-commerce');
    allure.feature('Payments');
    allure.story('Credit Card Payment');
    allure.severity('critical');

    // Add attachment (payload is defined elsewhere in the suite)
    allure.attachment('Request Payload', JSON.stringify(payload), 'application/json');

    const result = await processPayment(payload);
    allure.attachment('Response', JSON.stringify(result), 'application/json');

    expect(result.status).to.equal('success');
  });
});
```

### 2. ReportPortal
Enterprise test reporting platform:
```yaml
# reportportal.yml
rp.endpoint: https://reportportal.example.com
rp.uuid: your-uuid-here
rp.launch: API Test Suite
rp.project: ecommerce
rp.attributes:
  - environment:staging
  - suite:api
```

```javascript
// reportportal.config.js
const RPClient = require('@reportportal/client-javascript');

const rpConfig = {
  token: process.env.RP_TOKEN,
  endpoint: 'https://reportportal.example.com/api/v1',
  project: 'ecommerce',
  launch: 'API Test Suite',
  description: 'Automated API tests for e-commerce platform',
  attributes: [
    { key: 'environment', value: 'staging' },
    { key: 'build', value: process.env.BUILD_NUMBER }
  ]
};
```

### 3. Grafana Dashboards
Visualize test metrics over time:
```javascript
// Send test metrics to Prometheus
const client = require('prom-client');

// Define metrics
const testCounter = new client.Counter({
  name: 'test_runs_total',
  help: 'Total number of test runs',
  labelNames: ['suite', 'status']
});

const testDuration = new client.Histogram({
  name: 'test_duration_seconds',
  help: 'Test execution duration',
  labelNames: ['suite'],
  buckets: [1, 5, 10, 30, 60, 120, 300]
});

const coverageGauge = new client.Gauge({
  name: 'code_coverage_percent',
  help: 'Code coverage percentage',
  labelNames: ['suite']
});

// After test run
testCounter.inc({ suite: 'api', status: 'passed' }, passedCount);
testCounter.inc({ suite: 'api', status: 'failed' }, failedCount);
testDuration.observe({ suite: 'api' }, durationInSeconds);
coverageGauge.set({ suite: 'api' }, coveragePercent);

// Expose metrics endpoint for Prometheus
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});
```

Grafana Dashboard JSON:
```json
{
  "dashboard": {
    "title": "Test Execution Dashboard",
    "panels": [
      {
        "title": "Test Success Rate",
        "targets": [{
          "expr": "rate(test_runs_total{status=\"passed\"}[5m]) / rate(test_runs_total[5m]) * 100"
        }],
        "type": "graph"
      },
      {
        "title": "Test Duration Trends",
        "targets": [{
          "expr": "histogram_quantile(0.95, test_duration_seconds_bucket)"
        }],
        "type": "graph"
      },
      {
        "title": "Code Coverage",
        "targets": [{
          "expr": "code_coverage_percent"
        }],
        "type": "gauge"
      }
    ]
  }
}
```

### 4. Custom Slack Reports
```javascript
// Send test summary to Slack
async function sendSlackReport(results) {
  // Compare numerically, then format for display
  const passRateNum = results.total > 0 ? (results.passed / results.total) * 100 : 0;
  const passRate = passRateNum.toFixed(1);
  const status = passRateNum >= 95 ? '🟢' : passRateNum >= 80 ? '🟡' : '🔴';

  const blocks = [
    {
      type: 'header',
      text: {
        type: 'plain_text',
        text: `${status} Test Run Complete - ${passRate}% Passed`
      }
    },
    {
      type: 'section',
      fields: [
        { type: 'mrkdwn', text: `*Total:*\n${results.total}` },
        { type: 'mrkdwn', text: `*Passed:*\n${results.passed}` },
        { type: 'mrkdwn', text: `*Failed:*\n${results.failed}` },
        { type: 'mrkdwn', text: `*Duration:*\n${results.duration}s` }
      ]
    }
  ];

  // Add failure details
  if (results.failed > 0) {
    blocks.push({
      type: 'section',
      text: {
        type: 'mrkdwn',
        text: `*Failed Tests:*\n${results.failures.map(f => `• ${f.name}`).join('\n')}`
      }
    });
  }

  // Add action buttons
  blocks.push({
    type: 'actions',
    elements: [
      {
        type: 'button',
        text: { type: 'plain_text', text: 'View Full Report' },
        url: results.reportUrl
      },
      {
        type: 'button',
        text: { type: 'plain_text', text: 'View Build' },
        url: results.buildUrl
      }
    ]
  });

  await fetch(process.env.SLACK_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ blocks })
  });
}
```

## CI/CD Integration
### GitHub Actions
```yaml
name: Test & Report
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Run Tests
        run: npm test

      - name: Generate Report
        if: always()
        run: npm run test:report

      - name: Publish Test Results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: always()
        with:
          files: 'reports/junit.xml'

      - name: Upload HTML Report
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: test-report
          path: reports/

      - name: Comment PR with Results
        uses: actions/github-script@v6
        if: github.event_name == 'pull_request'
        with:
          script: |
            const fs = require('fs');
            const results = JSON.parse(fs.readFileSync('reports/test-results.json'));
            const passRate = (results.passed / results.total * 100).toFixed(1);
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `## Test Results\n\n` +
                `✅ Passed: ${results.passed}\n` +
                `❌ Failed: ${results.failed}\n` +
                `📊 Pass Rate: ${passRate}%\n\n` +
                `[View Full Report](${process.env.REPORT_URL})`
            });
```

### Jenkins
```groovy
pipeline {
  agent any

  stages {
    stage('Test') {
      steps {
        sh 'npm test'
      }
    }

    stage('Report') {
      steps {
        // Publish JUnit results
        junit 'reports/junit.xml'

        // Publish HTML report
        publishHTML([
          reportDir: 'reports',
          reportFiles: 'test-report.html',
          reportName: 'Test Report'
        ])

        // Send Slack notification
        slackSend(
          color: currentBuild.result == 'SUCCESS' ? 'good' : 'danger',
          message: "Test Results: ${env.JOB_NAME} #${env.BUILD_NUMBER}\n" +
                   "Status: ${currentBuild.result}\n" +
                   "Report: ${env.BUILD_URL}Test_20Report"
        )
      }
    }
  }

  post {
    always {
      // Archive test results
      archiveArtifacts artifacts: 'reports/**/*', fingerprint: true

      // Send email summary
      emailext(
        subject: "Test Results: ${env.JOB_NAME} - ${currentBuild.result}",
        body: readFile('reports/email-summary.html'),
        mimeType: 'text/html',
        to: '${DEFAULT_RECIPIENTS}'
      )
    }
  }
}
```

## Best Practices
### 1. Make Reports Scannable
Use visual hierarchy:
- **Critical info first**: pass/fail status at top
- **Use color**: red for failures, yellow for warnings, green for success
- **Group related info**: suites, tags, categories
- **Highlight actionable items**: what needs fixing now
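The "critical info first" rule can be enforced in code rather than by convention, so report sections always render in severity order. A small sketch; the severity names and section shape are illustrative:

```javascript
// Sort report sections so the most actionable ones render first.
// The severity vocabulary here is an assumption, not a standard.
const SEVERITY_ORDER = { blocking: 0, failing: 1, flaky: 2, passing: 3 };

function orderSections(sections) {
  // Unknown severities sink to the bottom rather than throwing
  return [...sections].sort(
    (a, b) => (SEVERITY_ORDER[a.severity] ?? 99) - (SEVERITY_ORDER[b.severity] ?? 99)
  );
}
```

The renderer then just walks the ordered list, and a reader scanning top-down always hits blockers first.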
### 2. Provide Context
Include environmental details:
```javascript
const reportContext = {
  timestamp: new Date().toISOString(),
  environment: process.env.TEST_ENV,
  branch: process.env.GIT_BRANCH,
  commit: process.env.GIT_COMMIT,
  build_number: process.env.BUILD_NUMBER,
  triggered_by: process.env.BUILD_USER,
  test_type: 'API Integration',
  parallel_workers: 4
};
```

### 3. Track Trends
Store historical data:
```sql
CREATE TABLE test_runs (
  id SERIAL PRIMARY KEY,
  timestamp TIMESTAMP DEFAULT NOW(),
  total INTEGER,
  passed INTEGER,
  failed INTEGER,
  skipped INTEGER,
  duration INTEGER,
  coverage DECIMAL,
  environment VARCHAR(50),
  branch VARCHAR(100),
  commit VARCHAR(40)
);

-- Query for trends
SELECT
  DATE(timestamp) AS date,
  AVG(passed::FLOAT / total * 100) AS avg_pass_rate,
  AVG(duration) AS avg_duration
FROM test_runs
WHERE timestamp > NOW() - INTERVAL '30 days'
GROUP BY DATE(timestamp)
ORDER BY date;
```

### 4. Categorize Failures
Group by failure type:
```javascript
function categorizeFailures(failures) {
  const categories = {
    assertions: [],
    timeouts: [],
    network: [],
    data: [],
    unknown: []
  };

  failures.forEach(failure => {
    // Guard: a failure record may arrive without an error object
    const message = failure.error?.message ?? '';
    if (message.includes('expected')) {
      categories.assertions.push(failure);
    } else if (message.includes('timeout')) {
      categories.timeouts.push(failure);
    } else if (message.includes('ECONNREFUSED')) {
      categories.network.push(failure);
    } else if (message.includes('not found')) {
      categories.data.push(failure);
    } else {
      categories.unknown.push(failure);
    }
  });

  return categories;
}
```

## Stakeholder-Specific Views
### For Developers
Show technical details:
````markdown
## Failed Tests - Developer View

### test_payment_processing
**Error**: AssertionError: expected 201 to equal 200
**Location**: tests/api/payment.spec.js:45
**Duration**: 2.3s
**Retries**: 2/3

**Request:**
```json
POST /api/payments
{
  "amount": 99.99,
  "currency": "USD"
}
```

**Response:**
```json
{
  "error": "Invalid merchant ID"
}
```

**Stack Trace:**
```text
at Context.<anonymous> (tests/api/payment.spec.js:45:28)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
```

**Recent Changes:**
- Commit abc123: "Update payment validation" (John Doe, 2 hours ago)
````
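A view like this doesn't require a reporting framework; it can be rendered straight from a failure record. A minimal sketch, where the `failure` field names (`title`, `error`, `location`, `duration`) are assumptions about your reporter's output:

```javascript
// Render one failed test as a developer-facing markdown snippet.
// The `failure` shape is an assumption for illustration.
function renderDeveloperView(failure) {
  return [
    `### ${failure.title}`,
    `**Error**: ${failure.error}`,
    `**Location**: ${failure.location}`,
    // duration is assumed to be in milliseconds
    `**Duration**: ${(failure.duration / 1000).toFixed(1)}s`
  ].join('\n');
}
```

Joining these snippets per suite gives the whole developer view from the same JSON that feeds every other report.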
### For Product Managers
Show feature readiness:
```markdown
## Feature Readiness Dashboard

### ✅ Ready for Release
- User Authentication (100% passing, 45 tests)
- Product Search (100% passing, 32 tests)
- Shopping Cart (100% passing, 28 tests)

### ⚠️ Needs Attention
- Checkout Flow (92% passing, 2 failures)
  - Payment timeout issue (investigating)
  - Promo code validation (fix in progress)

### 🔴 Blocked
- New Rewards Program (50% passing, 15 failures)
  - API integration incomplete
  - Database schema missing
```

### For Leadership
Show high-level metrics:
```markdown
## Quality Metrics - January 2026

📊 **Overall Health: 96.5%** (+1.2% from last month)

### Key Metrics
- Total Tests: 1,247 (+52 new)
- Automation Rate: 95.9% (+2.1%)
- Avg Build Time: 8.5 min (-1.2 min)
- Flaky Test Rate: 0.96% (-0.5%)

### Risk Assessment
🟢 **LOW RISK** for upcoming release
- All critical paths covered
- No blocking issues
- Performance within SLA
```

## Next Steps
- Choose a reporting tool that fits your stack
- Define key metrics for your team
- Set up automated reports in CI/CD
- Create dashboards for different stakeholders
- Establish baselines for trends
- Schedule regular report reviews with team
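For the "establish baselines" step, one simple approach is to compare each run's pass rate against a rolling average of recent runs and flag drops beyond a threshold. A sketch; the run shape and the 2-point default threshold are illustrative choices, not a standard:

```javascript
// Flag a run whose pass rate drops more than `threshold` percentage
// points below the average of the previous runs in `history`.
function isRegression(history, current, threshold = 2) {
  if (history.length === 0) return false; // no baseline yet
  const rate = run => (run.passed / run.total) * 100;
  const baseline = history.reduce((sum, run) => sum + rate(run), 0) / history.length;
  return baseline - rate(current) > threshold;
}
```

Fed from the `test_runs` table above, a check like this can gate a release or just annotate the daily report with "regression vs. 7-day baseline".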
## Related Articles
- "Monitoring & Observability for QE" - Track production metrics
- "Test Automation Frameworks" - Generate reportable tests
- "CI/CD Pipeline Testing" - Integrate reporting
- "Metrics That Matter" - Choose the right KPIs
- "Communicating Quality" - Present results effectively
## Conclusion
Effective test reporting is about communication, not just data collection. Great reports:
- Tell a clear story
- Provide actionable insights
- Show trends over time
- Serve multiple audiences
- Drive quality improvements
Start with simple reports showing pass/fail. Gradually add trends, categorization, and stakeholder-specific views. The goal is to make quality visible and quality decisions easy.
Remember: A report no one reads is worse than no report at all. Make yours count!