
GitHub Actions for Quality Engineering

Practical guide to using GitHub Actions for test automation and quality checks


Introduction

GitHub Actions provides powerful CI/CD capabilities built directly into GitHub. This guide covers how to leverage GitHub Actions for test automation, quality gates, and continuous testing workflows.

GitHub Actions Basics

Core Concepts

1. Workflows - Automated processes defined in YAML
2. Events - Triggers that start workflows (push, pull_request, schedule)
3. Jobs - Sets of steps that execute on a runner
4. Steps - Individual tasks (run commands, use actions)
5. Runners - Servers that run your workflows

Workflow Structure

name: Test Automation Pipeline
 
on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]
 
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run tests
        run: npm test

Setting Up Test Automation

Basic Test Workflow

name: Run Tests
 
on: [push, pull_request]
 
jobs:
  test:
    runs-on: ubuntu-latest
    
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'
      
      - name: Install dependencies
        run: npm ci
      
      - name: Run unit tests
        run: npm run test:unit
      
      - name: Run integration tests
        run: npm run test:integration

Multi-Language Testing

name: Multi-Language Tests
 
on: [push]
 
jobs:
  node-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '18'
      - run: npm ci
      - run: npm test
  
  java-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          java-version: '17'
          distribution: 'temurin'
      - run: mvn clean test
  
  python-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - run: pip install -r requirements.txt
      - run: pytest

E2E Testing with Playwright

name: E2E Tests
 
on:
  push:
    branches: [ main ]
  pull_request:
 
jobs:
  test:
    timeout-minutes: 60
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v3
      
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      
      - name: Install dependencies
        run: npm ci
      
      - name: Install Playwright Browsers
        run: npx playwright install --with-deps
      
      - name: Run Playwright tests
        run: npx playwright test
      
      - name: Upload test results
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 30

API Testing Workflows

name: API Tests
 
on:
  push:
    branches: [ main, develop ]
  schedule:
    - cron: '0 */6 * * *'  # Every 6 hours
 
jobs:
  api-tests:
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v3
      
      - name: Run Postman Collection
        uses: matt-ball/newman-action@master
        with:
          collection: tests/api/collection.json
          environment: tests/api/environment.json
          reporters: cli,htmlextra
      
      - name: Upload Newman Report
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: newman-report
          path: newman/

Parallel Test Execution

name: Parallel Tests
 
on: [push]
 
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4]
    
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm ci
      
      - name: Run tests (shard ${{ matrix.shard }}/4)
        run: npx playwright test --shard=${{ matrix.shard }}/4

Browser Matrix Testing

name: Cross-Browser Tests
 
on: [push, pull_request]
 
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        browser: [chromium, firefox, webkit]
    
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      
      - name: Install dependencies
        run: npm ci
      
      - name: Install Playwright
        run: npx playwright install --with-deps ${{ matrix.browser }}
      
      - name: Run tests on ${{ matrix.browser }}
        run: npx playwright test --project=${{ matrix.browser }}

Quality Gates

Test Coverage Enforcement

name: Test Coverage
 
on: [push, pull_request]
 
jobs:
  coverage:
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm ci
      
      - name: Run tests with coverage
        run: npm run test:coverage
      
      - name: Check coverage threshold
        run: |
          COVERAGE=$(cat coverage/coverage-summary.json | jq '.total.lines.pct')
          if (( $(echo "$COVERAGE < 80" | bc -l) )); then
            echo "Coverage $COVERAGE% is below 80% threshold"
            exit 1
          fi
      
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3

Linting and Code Quality

name: Code Quality
 
on: [push, pull_request]
 
jobs:
  lint:
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm ci
      
      - name: Run ESLint
        run: npm run lint
      
      - name: Run Prettier check
        run: npm run format:check
      
      - name: TypeScript check
        run: npm run type-check

Advanced Patterns

Conditional Test Execution

name: Smart Test Execution
 
on: [pull_request]
 
jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      backend: ${{ steps.filter.outputs.backend }}
      frontend: ${{ steps.filter.outputs.frontend }}
    
    steps:
      - uses: actions/checkout@v3
      - uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            backend:
              - 'api/**'
              - 'server/**'
            frontend:
              - 'src/**'
              - 'components/**'
  
  backend-tests:
    needs: detect-changes
    if: needs.detect-changes.outputs.backend == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm run test:backend
  
  frontend-tests:
    needs: detect-changes
    if: needs.detect-changes.outputs.frontend == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm run test:frontend

Reusable Workflows

# .github/workflows/reusable-test.yml
name: Reusable Test Workflow
 
on:
  workflow_call:
    inputs:
      node-version:
        required: true
        type: string
      test-command:
        required: true
        type: string
 
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: ${{ inputs.node-version }}
      - run: npm ci
      - run: ${{ inputs.test-command }}
# .github/workflows/main.yml
name: Main Workflow
 
on: [push]
 
jobs:
  unit-tests:
    uses: ./.github/workflows/reusable-test.yml
    with:
      node-version: '18'
      test-command: 'npm run test:unit'
  
  e2e-tests:
    uses: ./.github/workflows/reusable-test.yml
    with:
      node-version: '18'
      test-command: 'npm run test:e2e'

Test with Docker Services

name: Integration Tests with Database
 
on: [push]
 
jobs:
  test:
    runs-on: ubuntu-latest
    
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: test
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
      
      redis:
        image: redis:7
        ports:
          - 6379:6379
    
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      
      - name: Run integration tests
        run: npm run test:integration
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test
          REDIS_URL: redis://localhost:6379

Notifications and Reporting

Slack Notifications

name: Tests with Slack Notification
 
on: [push]
 
jobs:
  test:
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      - run: npm test
      
      - name: Slack notification on failure
        if: failure()
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "text": "Tests failed on ${{ github.repository }}",
              "blocks": [
                {
                  "type": "section",
                  "text": {
                    "type": "mrkdwn",
                    "text": "❌ Tests failed\n*Repository:* ${{ github.repository }}\n*Branch:* ${{ github.ref }}\n*Commit:* ${{ github.sha }}"
                  }
                }
              ]
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}

Test Results Publishing

name: Publish Test Results
 
on: [push]
 
jobs:
  test:
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      - run: npm test -- --reporter=junit --reporter-options=output=test-results.xml
      
      - name: Publish Test Results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: always()
        with:
          files: test-results.xml

Security and Secrets

Managing Test Credentials

name: Tests with Secrets
 
on: [push]
 
jobs:
  test:
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      
      - name: Run tests
        run: npm test
        env:
          API_KEY: ${{ secrets.TEST_API_KEY }}
          DATABASE_URL: ${{ secrets.TEST_DATABASE_URL }}
          JWT_SECRET: ${{ secrets.TEST_JWT_SECRET }}

Environment Protection

name: Production Tests
 
on:
  workflow_dispatch:
 
jobs:
  prod-smoke-tests:
    runs-on: ubuntu-latest
    environment: production  # Requires approval
    
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      - run: npm run test:smoke
        env:
          BASE_URL: ${{ secrets.PROD_BASE_URL }}

Best Practices

1. Optimize Workflow Performance

  • Cache dependencies (npm, maven, pip)
  • Use matrix strategy for parallel execution
  • Only run necessary tests based on changes
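The caching bullet above can be sketched with actions/cache. setup-node's cache: 'npm' option (shown earlier) already handles npm; for Maven, a step like the following works, where the path and cache key are assumptions for a typical Maven layout:

```yaml
# Sketch: cache the Maven local repository between workflow runs.
- uses: actions/cache@v3
  with:
    path: ~/.m2/repository
    # Key changes whenever any pom.xml changes, invalidating the cache.
    key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
    restore-keys: |
      ${{ runner.os }}-maven-
```

The restore-keys fallback lets a run reuse the most recent cache for the same OS even when no exact key match exists.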

2. Fail Fast Strategy

strategy:
  fail-fast: true  # Stop all jobs if one fails
  matrix:
    browser: [chromium, firefox, webkit]

3. Timeout Protection

jobs:
  test:
    timeout-minutes: 30  # Kill job after 30 minutes

4. Artifact Management

  • Upload test reports for analysis
  • Store screenshots/videos on failure
  • Set appropriate retention periods
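As a sketch of the failure-evidence bullet, a step that uploads screenshots and videos only when an earlier step failed; the test-results/ path assumes Playwright's default output directory:

```yaml
# Sketch: keep failure evidence without paying storage for green runs.
- name: Upload failure artifacts
  uses: actions/upload-artifact@v3
  if: failure()              # runs only when a previous step failed
  with:
    name: failure-evidence
    path: test-results/      # Playwright's default outputDir (assumed)
    retention-days: 7        # short retention keeps storage costs down
```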

Common Workflows for QE

1. PR Quality Gate

name: PR Quality Gate
 
on:
  pull_request:
 
jobs:
  quality-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      - run: npm run lint
      - run: npm run test:unit
      - run: npm run test:coverage

2. Nightly Full Test Suite

name: Nightly Full Tests
 
on:
  schedule:
    - cron: '0 2 * * *'  # 2 AM daily
 
jobs:
  full-suite:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      - run: npm run test:all

3. Release Smoke Tests

name: Release Smoke Tests
 
on:
  release:
    types: [published]
 
jobs:
  smoke-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      - run: npm run test:smoke

Conclusion

GitHub Actions provides a robust platform for implementing comprehensive test automation and quality gates. Start with basic workflows and gradually add complexity as needed.

Next Steps

  1. Create your first test workflow in .github/workflows/
  2. Set up matrix testing for multiple browsers
  3. Implement quality gates (coverage, linting)
  4. Configure notifications for test failures
  5. Optimize with caching and parallel execution
