📚 Series Navigation:
← Previous: Part 7 - Real-World Case Study
👉 You are here: Part 8 - Modern QA Workflow
Next: Part 9 - Bug Reports That Get Fixed →


Introduction: QA in the Age of Agile and DevOps

Welcome to Part 8! We've learned powerful testing techniques and seen them applied in a real case study. But here's a question that keeps QA engineers up at night:

"How do I actually DO all this in a 2-week sprint?"

The reality of modern software development:

  • ⚡ Deploys happen daily (or hourly!)
  • 🔄 Requirements change mid-sprint
  • 🤝 Testing happens in parallel with development
  • 🤖 CI/CD pipelines run on every commit
  • 📱 Multiple platforms, browsers, devices
  • ⏰ "We need this tested by tomorrow"

Gone are the days when QA was a separate phase at the end. Today, quality is everyone's responsibility, and testing is continuous.

In this article, you'll learn:

  • ✅ How testing fits into modern Agile workflows
  • ✅ Shift-left testing in practice (not just theory)
  • ✅ Building effective CI/CD pipelines
  • ✅ Risk-based test prioritization for tight deadlines
  • ✅ Collaboration patterns that actually work
  • ✅ The Three Amigos and other ceremonies

Let's make modern QA actually work! 🚀


🔄 The Modern Agile QA Workflow

The Traditional Waterfall Approach (What We Left Behind)

Requirements → Design → Development → QA Testing → Release
                                      ↑
                            [QA enters here, finds 100 bugs,
                             everyone blames QA, project delayed]

Problems:

  • โŒ Testing happens too late
  • โŒ Bugs are expensive to fix
  • โŒ QA becomes a bottleneck
  • โŒ No collaboration during development
  • โŒ Requirements already stale by testing time

The Modern Agile Approach (Where We Are Now)

graph LR
    A[Sprint Planning<br>QA Present] --> B[Three Amigos<br>Early Clarification]
    B --> C[Dev + QA<br>Parallel Work]
    C --> D[Continuous Testing<br>Every Commit]
    D --> E[Sprint Review<br>Demo + Retro]
    E --> F{Done?}
    F -->|Yes| G[Deploy to Prod]
    F -->|No| C
    style A fill:#dbeafe
    style B fill:#fef3c7
    style C fill:#ddd6fe
    style D fill:#bbf7d0
    style E fill:#fecaca
    style G fill:#86efac

What's Different:

  • ✅ QA involved from day 1
  • ✅ Testing starts before code is written
  • ✅ Continuous feedback loops
  • ✅ Everyone owns quality
  • ✅ Fast iterations

A Week in the Life of a Modern QA Engineer

Monday (Sprint Planning Day)

9:00 AM - Sprint Planning Meeting
├─ QA reviews user stories
├─ Asks clarifying questions
├─ Estimates testing effort
├─ Identifies risks
└─ Commits to sprint capacity

11:00 AM - Story Refinement
├─ Deep dive on 2-3 complex stories
├─ QA identifies testability issues
├─ Team discusses acceptance criteria
└─ QA flags dependencies

2:00 PM - Test Planning
├─ Create high-level test scenarios
├─ Identify automation candidates
├─ Plan test data needs
└─ Update risk matrix

Tuesday-Wednesday (Early Sprint)

9:00 AM - Three Amigos Sessions
├─ Developer + PM + QA
├─ Walk through user story
├─ Clarify edge cases
└─ Define acceptance criteria

10:00 AM - Test Case Design
├─ Write detailed test cases for the story starting tomorrow
├─ Prepare test automation scripts
└─ Set up test environments

2:00 PM - Early Testing
├─ Test stories marked "ready for testing"
├─ Pair with developers on unit tests
├─ Review code for testability
└─ Provide quick feedback

Thursday-Friday (Mid-Late Sprint)

9:00 AM - Test Execution
├─ Execute test cases on completed stories
├─ Run automated regression suite
├─ Exploratory testing on new features
└─ Log bugs, verify fixes

1:00 PM - Bug Triage
├─ Review bugs with dev team
├─ Prioritize fixes
├─ Verify bug fixes
└─ Update test cases

4:00 PM - Sprint Preparation
├─ Update documentation
├─ Prepare demo scenarios
├─ Sign off completed stories
└─ Identify technical debt

Key Differences from Traditional:

  • QA isn't waiting for "QA phase"
  • Testing happens in parallel with dev
  • Continuous communication
  • Faster feedback loops

โฌ…๏ธ Shift-Left Testing: Moving Quality Earlier

What is Shift-Left?

Traditional Approach:

Dev → Dev → Dev → Testing → Production
                    ↑
              [Find bugs here]
              [Expensive to fix!]

Shift-Left Approach:

Testing → Dev + Testing → Testing → Production
   ↑            ↑            ↑
[Design]    [Implementation] [Verification]
[Cheap]       [Moderate]      [Expensive]

The principle: The earlier you find defects, the cheaper they are to fix.

Cost of defects:

  • 💰 Found during requirements: $1
  • 💰💰 Found during development: $10
  • 💰💰💰 Found during QA: $100
  • 💰💰💰💰 Found in production: $1,000+

Shift-Left in Practice

1. Requirements Review (Day 0)

Traditional QA: "We'll test it when it's done."

Shift-Left QA:

Story Received: "User can export tasks"

QA Questions (Before Any Code):
โ“ What formats? (CSV, Excel, PDF?)
โ“ All tasks or filtered tasks?
โ“ Include completed tasks?
โ“ File size limits?
โ“ Email or download?
โ“ What if export fails?

Result: 6 ambiguities caught before a single line of code was written!

Action Items:

  • ✅ Review every user story before sprint starts
  • ✅ Add testability acceptance criteria
  • ✅ Identify missing scenarios
  • ✅ Flag technical risks

2. Unit Test Collaboration (Day 1-3)

Traditional QA: "Unit tests are the developer's job."

Shift-Left QA:

QA + Developer Pair Programming Session:

Developer: "I'm writing validateEmail() function"
QA: "Great! Let's think about test cases:
     - Valid emails (with +, with subdomain)
     - Invalid emails (missing @, missing domain)
     - Null, empty string
     - SQL injection attempts
     - 320-character email (max length)"

Result: Developer writes comprehensive unit tests,
        QA provides edge cases developers might miss
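Those suggestions translate directly into unit tests. Here's a minimal sketch; `is_valid_email` is a hypothetical stand-in for the team's `validateEmail()`, and its rules (320-character cap, exactly one @, dotted domain, no spaces) are deliberate simplifications, not a full RFC 5322 validator:

```python
def is_valid_email(value):
    """Hypothetical stand-in for the validateEmail() discussed above."""
    if not value or len(value) > 320:        # null/empty and the length cap
        return False
    local, sep, domain = value.partition("@")
    if not sep or not local or not domain:   # missing @, local part, or domain
        return False
    if " " in value or "." not in domain:    # no spaces; domain needs a dot
        return False
    return True

# The edge cases from the pairing session, as a table-driven test
cases = [
    ("user+tag@mail.example.com", True),     # valid: plus sign, subdomain
    ("user@example.com", True),
    ("missing-at-sign.com", False),
    ("user@", False),                        # missing domain
    ("", False),
    (None, False),
    ("' OR 1=1 --@example.com", False),      # SQL injection attempt
    ("a" * 310 + "@example.com", False),     # 322 chars, over the 320 limit
]
for value, expected in cases:
    assert is_valid_email(value) is expected, value
```

The table-driven shape is the point: when QA suggests a new edge case, it's one more tuple, not a new test function.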

Action Items:

  • ✅ Pair with developers on unit tests
  • ✅ Review test coverage reports
  • ✅ Suggest missing test cases
  • ✅ Share testing expertise

3. API Testing (Day 2-4)

Traditional QA: "Wait for UI to be done, then test through UI."

Shift-Left QA:

API Ready → QA Tests API Directly
├─ Faster feedback
├─ Backend bugs found immediately
├─ UI can be built in parallel
└─ API tests become regression suite

Example:
POST /api/tasks
{
  "title": "Test Task",
  "description": "<script>alert('xss')</script>"
}

Response: 400 Bad Request
{"error": "Invalid characters in description"}

✅ XSS protection verified before UI exists!
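Checks like this are easy to automate as a table of payloads and expected status codes. A sketch, with a fake `post` function standing in for a real HTTP client (the `/api/tasks` endpoint and error behavior mirror the example above but are assumptions about the API):

```python
def check_api_contract(post, cases):
    """Send each payload via post(path, payload) and compare status codes.

    Returns a list of (payload, expected, actual) for any mismatches.
    """
    failures = []
    for payload, expected_status in cases:
        status, _body = post("/api/tasks", payload)
        if status != expected_status:
            failures.append((payload, expected_status, status))
    return failures

# Fake client simulating the XSS rejection above; in CI you would pass
# a thin wrapper around requests/httpx instead.
def fake_post(path, payload):
    if "<script" in payload.get("description", "").lower():
        return 400, {"error": "Invalid characters in description"}
    return 201, {"id": 1}

cases = [
    ({"title": "Test Task", "description": "plain text"}, 201),
    ({"title": "Test Task", "description": "<script>alert('xss')</script>"}, 400),
]
assert check_api_contract(fake_post, cases) == []
```

Injecting the client keeps the contract table reusable: the same cases run against a stub locally and the real service in the pipeline.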

Action Items:

  • ✅ Test APIs as soon as endpoints exist
  • ✅ Use Postman/Insomnia/REST Assured
  • ✅ Automate API tests early
  • ✅ Don't wait for UI

4. Test Automation (Day 1-5)

Traditional QA: "Automate after manual testing proves it works."

Shift-Left QA:

Write automation scripts WHILE features are being developed:

Day 1: Feature branch created → Create test skeleton
Day 2: API ready → Automate API tests
Day 3: UI component ready → Automate happy path
Day 4: Feature complete → Add negative tests
Day 5: Ready for merge → Full automation suite ready

Result: Automation available immediately for regression!

Action Items:

  • ✅ Automate in parallel with development
  • ✅ Start with API-level tests
  • ✅ Add UI tests incrementally
  • ✅ Make automation part of Definition of Done

🤖 CI/CD Integration: Testing at the Speed of DevOps

The CI/CD Pipeline for QA

graph LR
    A[Code Commit] --> B[Build]
    B --> C[Unit Tests<br>10 sec]
    C --> D{Pass?}
    D -->|No| E[❌ Notify Developer]
    D -->|Yes| F[Integration Tests<br>3 min]
    F --> G{Pass?}
    G -->|No| E
    G -->|Yes| H[Deploy to Dev]
    H --> I[Smoke Tests<br>5 min]
    I --> J{Pass?}
    J -->|No| E
    J -->|Yes| K[E2E Tests<br>20 min]
    K --> L{Pass?}
    L -->|No| E
    L -->|Yes| M[Deploy to Staging]
    M --> N[Manual Acceptance<br>As needed]
    N --> O[✅ Deploy to Production]
    style A fill:#dbeafe
    style C fill:#86efac
    style F fill:#fbbf24
    style I fill:#fbbf24
    style K fill:#f87171
    style O fill:#4ade80

Pipeline Design Principles

1. Fast Feedback

Goal: Developer knows within 5 minutes if they broke something

Pipeline Strategy:
├─ Unit tests: Must run in < 1 minute
├─ Integration tests: Must run in < 5 minutes
├─ E2E tests: Can run in background (20-30 min)
└─ Full suite: Nightly or pre-release only

Example - TaskMaster 3000:
├─ Commit → Unit tests (15 sec) ✅
├─ → Integration tests (3 min) ✅
├─ → Deploy to dev (30 sec) ✅
├─ → Smoke tests (2 min) ✅
└─ Total time to dev environment: < 7 minutes

2. Fail Fast

Run cheapest/fastest tests first:
1. Linting & static analysis (seconds)
2. Unit tests (seconds-minutes)
3. Integration tests (minutes)
4. E2E tests (minutes-hours)

Don't run expensive tests if cheap ones fail!
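The fail-fast ordering is simple enough to sketch in a few lines. The stages and their pass/fail results below are illustrative stubs; a real runner would shell out to the linter, test frameworks, and deploy scripts:

```python
def run_pipeline(stages):
    """Run (name, check) stages cheapest-first; stop at the first failure."""
    results = []
    for name, check in stages:
        ok = check()
        results.append((name, ok))
        if not ok:
            break  # fail fast: don't spend 20 min on E2E after a 15 sec failure
    return results

# Illustrative stages, ordered cheap to expensive
stages = [
    ("lint", lambda: True),
    ("unit", lambda: False),        # a unit test fails here...
    ("integration", lambda: True),  # ...so these two never execute
    ("e2e", lambda: True),
]
assert run_pipeline(stages) == [("lint", True), ("unit", False)]
```

The developer gets the "unit failed" signal in seconds, and the expensive stages never burn CI minutes on a build that's already broken.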

3. Parallel Execution

Instead of: Test 1 → Test 2 → Test 3 (30 min total)
Do: Test 1 ║ Test 2 ║ Test 3 (10 min total)

Tools:
- Selenium Grid (parallel browser tests)
- Jenkins: Parallel stages
- GitHub Actions: Matrix builds
- CircleCI: Parallelism option
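In practice the parallelism usually lives at the CI-runner level (Selenium Grid nodes, matrix jobs), but the shape is the same as this toy thread-based sketch, where the suites are just sleep stubs standing in for real test runs:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_suites(suites, parallel=True):
    """Run independent suites; in parallel, wall time is roughly the
    slowest single suite instead of the sum of all of them."""
    def run(item):
        name, fn = item
        return name, fn()  # a real runner would shell out to pytest, Selenium, etc.
    if parallel:
        with ThreadPoolExecutor(max_workers=len(suites)) as pool:
            return dict(pool.map(run, suites.items()))
    return dict(run(item) for item in suites.items())

# Stub suites that each sleep 0.2 s to make the timing visible
suites = {
    "api": lambda: (time.sleep(0.2), "pass")[1],
    "ui-chrome": lambda: (time.sleep(0.2), "pass")[1],
    "ui-firefox": lambda: (time.sleep(0.2), "pass")[1],
}
```

Running these sequentially takes about 0.6 s; in parallel, about 0.2 s. The caveat from real pipelines applies here too: suites must be truly independent (no shared test data, ports, or user accounts) before you can fan them out.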

4. Environment Management

Problem: "Works on my machine!" 🤷

Solution: Containerization
├─ Docker for consistent environments
├─ Docker Compose for multi-service setups
├─ Kubernetes for production-like staging
└─ Infrastructure as Code (Terraform)

Example docker-compose.yml:
version: '3'
services:
  app:
    build: .
    environment:
      - NODE_ENV=test
  db:
    image: postgres:14
  redis:
    image: redis:7

Test Types in CI/CD

Commit Stage (Every commit, < 5 min)

✅ Linting (ESLint, Pylint)
✅ Unit tests (900 tests, 15 sec)
✅ Code coverage check (> 70%)
✅ Security scan (npm audit, Snyk)

Acceptance Stage (Every PR, < 15 min)

✅ Integration tests (450 tests, 8 min)
✅ API contract tests (Pact)
✅ Component tests
✅ Build Docker image

Deployment Stage (After merge, < 30 min)

✅ Deploy to dev environment
✅ Smoke tests (20 critical paths)
✅ E2E tests (120 tests, 20 min)
✅ Performance tests (basic)

Release Stage (Scheduled/on-demand)

✅ Full regression suite
✅ Load testing
✅ Security penetration tests
✅ Cross-browser tests (BrowserStack)
✅ Accessibility tests

โš–๏ธ Risk-Based Test Prioritization

The Harsh Reality

Manager: "We deploy in 2 hours. Can you test everything?"
QA: "No, but I can test what matters!"

This is where risk-based testing saves you.

Risk Assessment Matrix

Feature        Biz   User   Comp   Change   Risk       Test
Auth           5C    5A     3M     1S       🔴 14/20   1 - Full
Task Remind    4H    4M     4C     5N       🔴 17/20   1 - Full
Task Export    2L    3S     2S     1S       🟢  8/20   3 - Smoke
Theme Select   1M    2P     1S     1S       🟢  5/20   4 - Skip

Legend

Each factor is scored 1-5; the letter after the number abbreviates the rating. Risk is the sum of the four scores (max 20).
Biz (Business Impact): 5C=Critical 🔴, 4H=High 🔴, 2L=Low 🟢, 1M=Minimal 🟢
User (Users Affected): 5A=All, 4M=Many, 3S=Some, 2P=Preference only
Comp (Complexity): 4C=Complex, 3M=Moderate, 2S/1S=Simple
Change (Change Freq): 5N=New code, 1S=Stable
Risk: 🔴=High, 🟢=Low
Test (Testing Priority): 1=Full, 3=Smoke, 4=Skip

The 2-Hour Emergency Test Plan

Scenario: Critical hotfix needs to deploy in 2 hours. What do you test?

EMERGENCY TESTING PROTOCOL - 2 HOUR LIMIT

Hour 1: Critical Paths (80% of value)
├─ [15 min] Authentication flow
│   └─ Login, logout, session management
├─ [15 min] Core task operations
│   └─ Create, edit, complete, delete tasks
├─ [15 min] Data integrity
│   └─ No data loss, corruption, or leaks
├─ [10 min] Payment (if applicable)
│   └─ Checkout, payment processing
└─ [5 min] Smoke test in production-like environment

Hour 2: Risk Areas (15% of value)
├─ [20 min] Areas changed by hotfix
│   └─ Thorough testing of modified code
├─ [15 min] Integration points
│   └─ External APIs, database, email
├─ [10 min] Security basics
│   └─ SQL injection, XSS, auth bypass
├─ [10 min] Error handling
│   └─ Graceful degradation
└─ [5 min] Final sanity check

Skipped (5% of value):
❌ Nice-to-have features
❌ Cosmetic UI elements
❌ Rarely-used functionality
❌ Comprehensive browser testing

Document what was NOT tested!

Risk-Based Test Selection Algorithm

from dataclasses import dataclass

@dataclass
class TestCase:
    """Illustrative test-case model; rate each risk factor 1-5."""
    name: str
    business_impact: int
    user_impact: int
    defect_history: int    # how buggy this area has been
    code_complexity: int
    change_frequency: int  # how recently/often the code changed
    execution_time: int    # minutes
    score: int = 0

def prioritize_tests(tests, time_available_minutes):
    """
    Rank tests by weighted risk score, then greedily fill the time budget.
    Returns (selected_tests, skipped_tests).
    """
    for test in tests:
        # Weighted risk score: business and user impact count the most
        test.score = (
            test.business_impact * 5 +
            test.user_impact * 4 +
            test.defect_history * 3 +
            test.code_complexity * 2 +
            test.change_frequency * 3
        )

    # Highest-risk tests first
    tests.sort(key=lambda t: t.score, reverse=True)

    selected_tests = []
    time_used = 0

    for test in tests:
        if time_used + test.execution_time <= time_available_minutes:
            selected_tests.append(test)
            time_used += test.execution_time
        else:
            break  # budget exhausted; everything below the cut is skipped

    return selected_tests, tests[len(selected_tests):]

๐Ÿค Effective Collaboration Patterns

The Three Amigos Meeting

Who: Developer + Product Owner + QA
When: Before development starts
Duration: 30-60 minutes per story
Goal: Shared understanding

Agenda:

1. PO Explains the "Why" (5 min)
   ├─ Business value
   ├─ User problem being solved
   └─ Success criteria

2. Developer Explains the "How" (10 min)
   ├─ Technical approach
   ├─ Dependencies
   ├─ Risks
   └─ Time estimate

3. QA Explains the "What If" (15 min)
   ├─ Edge cases
   ├─ Error scenarios
   ├─ Testability concerns
   ├─ Non-functional requirements
   └─ Test strategy

4. Together: Refine Acceptance Criteria (15 min)
   ├─ What does "done" look like?
   ├─ What are we NOT building?
   ├─ What can break?
   └─ How will we test it?

5. Agreements & Actions (5 min)
   ├─ Final acceptance criteria
   ├─ Definition of Done
   ├─ When testing can start
   └─ Who does what

Example Three Amigos Output:

STORY: Export Tasks to CSV

BEFORE Three Amigos:
"User can export tasks to CSV"

AFTER Three Amigos:
✅ Export filtered tasks (respects current filters)
✅ Include: title, description, status, priority, due date
✅ Format: CSV with UTF-8 encoding
✅ File naming: tasks_export_YYYY-MM-DD_HH-MM.csv
✅ Max 10,000 tasks per export
✅ Download in browser (not email)
✅ Error handling: Show error if > 10,000 tasks
✅ Tested: Chrome, Firefox, Safari
✅ Performance: Should complete in < 3 seconds

Questions Resolved:
Q: Include completed tasks? A: Yes, if they're in current filter
Q: Excel support? A: Future story, CSV only for now
Q: Email option? A: Future story
Q: Column order? A: Title, Status, Priority, Due Date, Description

Definition of Done:
☐ Feature implemented
☐ Unit tests written (>80% coverage)
☐ API tests automated
☐ Manual testing completed
☐ Works in Chrome, Firefox, Safari
☐ Documentation updated
☐ PO sign-off received
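Notice how the refined criteria map almost line-for-line onto code. A sketch of the export (function and field names are illustrative, not TaskMaster's actual implementation):

```python
import csv
import io
from datetime import datetime

MAX_EXPORT_ROWS = 10_000  # limit from the acceptance criteria

def export_tasks_csv(tasks, now=None):
    """Build the CSV export described above; returns (filename, utf8_bytes)."""
    if len(tasks) > MAX_EXPORT_ROWS:
        raise ValueError("Cannot export more than 10,000 tasks")
    now = now or datetime.now()
    # File naming: tasks_export_YYYY-MM-DD_HH-MM.csv
    filename = now.strftime("tasks_export_%Y-%m-%d_%H-%M.csv")
    buf = io.StringIO()
    writer = csv.writer(buf)
    # Column order agreed in the Three Amigos session
    writer.writerow(["Title", "Status", "Priority", "Due Date", "Description"])
    for t in tasks:
        writer.writerow([t["title"], t["status"], t["priority"],
                         t["due_date"], t["description"]])
    return filename, buf.getvalue().encode("utf-8")  # UTF-8 per the criteria
```

Each criterion (row limit, filename format, column order, encoding) is now directly assertable, which is exactly what makes the refined story so much easier to test than the original one-liner.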

Daily Stand-ups (QA Perspective)

Bad Stand-up:

QA: "Yesterday I tested stuff. Today I'll test more stuff. No blockers."

Good Stand-up:

QA: "Yesterday I tested the reminder feature - found 3 bugs,
     2 are high priority (shared in Slack #bugs channel).
     
     Today I'm finishing reminder testing and starting on
     the export feature once the API is ready.
     
     Blocker: I need the staging environment fixed - it's been
     down since yesterday afternoon. Mike, can we sync after
     standup?"

QA-Specific Updates to Share:

  • Test coverage status
  • Critical bugs found
  • Blocked test scenarios
  • Release readiness status

Bug Triage Sessions

When: 2-3 times per week, 30 minutes
Who: Dev Lead + QA Lead + PM

Process:

For each bug:

1. Verify reproducibility (2 min)
   ├─ Can we reproduce it?
   └─ Is it actually a bug?

2. Assess severity (2 min)
   ├─ How many users affected?
   ├─ Workaround available?
   └─ Data loss risk?

3. Decide priority (1 min)
   ├─ Fix now (critical)
   ├─ Fix this sprint (high)
   ├─ Backlog (medium/low)
   └─ Won't fix (not a bug, by design)

4. Assign owner (1 min)

Total: ~6 min per bug
10 bugs = 60 minutes session

Priority Framework:

🔴 P0 - Critical (Fix immediately, all hands on deck)
├─ Production down
├─ Data loss/corruption
├─ Security vulnerability
└─ Payment processing broken

🟡 P1 - High (Fix this sprint)
├─ Major feature broken
├─ Affects many users
├─ No workaround
└─ Blocks other work

🟢 P2 - Medium (Next sprint)
├─ Minor feature broken
├─ Affects some users
├─ Workaround exists
└─ Cosmetic issues

🔵 P3 - Low (Backlog)
├─ Edge cases
├─ Rare scenarios
├─ Polish items
└─ Nice-to-have

⚪ P4 - Won't Fix
├─ By design
├─ Out of scope
├─ Not reproducible
└─ Obsolete

📊 Modern QA Metrics & Dashboards

Metrics That Matter in Agile

Sprint Health Dashboard:

📊 Sprint 24 - Week 2

VELOCITY & CAPACITY
├─ Story Points Committed: 45
├─ Story Points Tested: 38 (84%)
├─ Story Points Done: 35 (78%)
└─ At Risk: 2 stories (not tested yet)

TEST EXECUTION
├─ Manual Tests: 85% complete (120/141)
├─ Automated Tests: Running in CI (234/234 passing)
└─ Exploratory Sessions: 2/3 complete

DEFECTS
├─ Opened This Sprint: 12
├─ Fixed This Sprint: 10
├─ Still Open: 5 (2 critical, 3 medium)
└─ Escaped from Last Sprint: 1

AUTOMATION
├─ New Tests Automated: 8
├─ Flaky Tests Fixed: 2
└─ Coverage Trend: 72% → 75% ↗️

RELEASE READINESS: 🟡 YELLOW
✅ All critical bugs fixed
⚠️ 2 stories still in testing
⚠️ 1 high-priority bug open

Leading vs Lagging Indicators

Lagging Indicators (Rearview Mirror):

  • Bugs found in production
  • Test coverage percentage
  • Number of test cases

Leading Indicators (Windshield):

  • Shift-left activities (requirements review, three amigos)
  • Automated test growth rate
  • Time to detect bugs (MTTD)
  • % of stories with acceptance tests before coding

Focus on leading indicators to prevent problems!


🎓 Conclusion: QA in the Fast Lane

Modern QA isn't about being a gatekeeper at the end of development. It's about being a quality advocate throughout the entire process.

Key Takeaways

  1. Shift left aggressively - Get involved early, catch issues when they're cheap to fix
  2. Automate strategically - Fast feedback loops in CI/CD, pyramid-shaped test suite
  3. Prioritize ruthlessly - You can't test everything, so test what matters most
  4. Collaborate continuously - Three Amigos, pairing, daily communication
  5. Measure what helps - Leading indicators predict quality, lagging indicators confirm it

Your Modern QA Checklist

Sprint Planning:

☐ Review all stories before sprint starts
☐ Estimate testing effort honestly
☐ Identify risks and dependencies
☐ Plan Three Amigos sessions
☐ Block time for automation

During Sprint:

☐ Three Amigos for each story
☐ Start testing as soon as possible
☐ Provide fast feedback to developers
☐ Automate while developing
☐ Update automation in CI/CD

Sprint Review:

☐ Demo tested features
☐ Report on quality metrics
☐ Discuss escaped defects
☐ Share lessons learned
☐ Plan next sprint improvements

The Modern QA Mindset

Old mindset: "Find all the bugs before release"
New mindset: "Help the team build quality in from the start"

Old mindset: "QA is responsible for quality"
New mindset: "Everyone is responsible for quality, QA enables it"

Old mindset: "Manual testing is QA's job"
New mindset: "Automation enables strategic manual testing"

Old mindset: "We're a bottleneck, development waits for us"
New mindset: "We work in parallel, enabling faster delivery"

What's Next?

In Part 9, we'll tackle one of the most important QA skills: Writing Bug Reports That Actually Get Fixed.

We'll cover:

  • The anatomy of a great bug report
  • How to communicate with developers effectively
  • Prioritizing and triaging bugs
  • Following up without being annoying
  • Building trust with the development team

Coming Next Week:
Part 9: Bug Reports That Get Fixed - The Art of Communication
๐Ÿ›


📚 Series Progress

✅ Part 1: Requirement Analysis
✅ Part 2: Equivalence Partitioning & BVA
✅ Part 3: Decision Tables & State Transitions
✅ Part 4: Pairwise Testing
✅ Part 5: Error Guessing & Exploratory Testing
✅ Part 6: Test Coverage Metrics
✅ Part 7: Real-World Case Study
✅ Part 8: Modern QA Workflow ← You just finished this!
⬜ Part 9: Bug Reports That Get Fixed
⬜ Part 10: The QA Survival Kit


🧮 Quick Reference Card

Daily QA Workflow

MORNING:
☐ Check CI/CD pipeline status
☐ Review overnight test results
☐ Attend daily standup
☐ Respond to bug assignments

MIDDAY:
☐ Test completed stories
☐ Pair with developers on new stories
☐ Write/update test automation
☐ Three Amigos sessions

AFTERNOON:
☐ Exploratory testing
☐ Update test documentation
☐ Bug triage/verification
☐ Plan tomorrow's work

BEFORE LEAVING:
☐ Update story status in Jira
☐ Document blockers
☐ Check CI/CD still green
☐ Tomorrow's prep

Three Amigos Template

STORY: [Story title]
DATE: [Date]
ATTENDEES: [Dev, PO, QA]

BUSINESS VALUE:
[Why are we building this?]

TECHNICAL APPROACH:
[How will we build it?]

EDGE CASES & RISKS:
[What could go wrong?]

ACCEPTANCE CRITERIA:
[What does done look like?]

DEFINITION OF DONE:
☐ [Checklist items]

QUESTIONS RESOLVED:
Q: [Question]
A: [Answer]

ACTIONS:
☐ [Who does what]

Remember: Quality is a team sport. Be the teammate that makes everyone better! 🎯

What's your biggest Agile/DevOps QA challenge? Share in the comments!