📍 Series Navigation:
⬅️ Previous: Part 7 - Real-World Case Study
📍 You are here: Part 8 - Modern QA Workflow
Next: Part 9 - Bug Reports That Get Fixed ➡️
Introduction: QA in the Age of Agile and DevOps
Welcome to Part 8! We've learned powerful testing techniques and seen them applied in a real case study. But here's a question that keeps QA engineers up at night:
"How do I actually DO all this in a 2-week sprint?"
The reality of modern software development:
- ⚡ Deploys happen daily (or hourly!)
- 🔄 Requirements change mid-sprint
- 🤝 Testing happens in parallel with development
- 🤖 CI/CD pipelines run on every commit
- 📱 Multiple platforms, browsers, devices
- ⏰ "We need this tested by tomorrow"
Gone are the days when QA was a separate phase at the end. Today, quality is everyone's responsibility, and testing is continuous.
In this article, you'll learn:
- ✅ How testing fits into modern Agile workflows
- ✅ Shift-left testing in practice (not just theory)
- ✅ Building effective CI/CD pipelines
- ✅ Risk-based test prioritization for tight deadlines
- ✅ Collaboration patterns that actually work
- ✅ The Three Amigos and other ceremonies
Let's make modern QA actually work! 🚀
🔄 The Modern Agile QA Workflow
The Traditional Waterfall Approach (What We Left Behind)
Requirements → Design → Development → QA Testing → Release
                                          ↑
                        [QA enters here, finds 100 bugs,
                        everyone blames QA, project delayed]

Problems:
- ❌ Testing happens too late
- ❌ Bugs are expensive to fix
- ❌ QA becomes a bottleneck
- ❌ No collaboration during development
- ❌ Requirements already stale by testing time
The Modern Agile Approach (Where We Are Now)
```mermaid
graph TD
    A[Sprint Planning<br/>QA Present] --> B[Three Amigos<br/>Early Clarification]
    B --> C[Dev + QA<br/>Parallel Work]
    C --> D[Continuous Testing<br/>Every Commit]
    D --> E[Sprint Review<br/>Demo + Retro]
    E --> F{Done?}
    F -->|Yes| G[Deploy to Prod]
    F -->|No| C
    style A fill:#dbeafe
    style B fill:#fef3c7
    style C fill:#ddd6fe
    style D fill:#bbf7d0
    style E fill:#fecaca
    style G fill:#86efac
```
What's Different:
- ✅ QA involved from day 1
- ✅ Testing starts before code is written
- ✅ Continuous feedback loops
- ✅ Everyone owns quality
- ✅ Fast iterations
A Week in the Life of a Modern QA Engineer
Monday (Sprint Planning Day)
9:00 AM - Sprint Planning Meeting
├─ QA reviews user stories
├─ Asks clarifying questions
├─ Estimates testing effort
├─ Identifies risks
└─ Commits to sprint capacity

11:00 AM - Story Refinement
├─ Deep dive on 2-3 complex stories
├─ QA identifies testability issues
├─ Team discusses acceptance criteria
└─ QA flags dependencies

2:00 PM - Test Planning
├─ Create high-level test scenarios
├─ Identify automation candidates
├─ Plan test data needs
└─ Update risk matrix

Tuesday-Wednesday (Early Sprint)
9:00 AM - Three Amigos Sessions
├─ Developer + PM + QA
├─ Walk through user story
├─ Clarify edge cases
└─ Define acceptance criteria

10:00 AM - Test Case Design
├─ Write detailed test cases for story starting tomorrow
├─ Prepare test automation scripts
└─ Set up test environments

2:00 PM - Early Testing
├─ Test stories marked "ready for testing"
├─ Pair with developers on unit tests
├─ Review code for testability
└─ Provide quick feedback

Thursday-Friday (Mid-Late Sprint)
9:00 AM - Test Execution
├─ Execute test cases on completed stories
├─ Run automated regression suite
├─ Exploratory testing on new features
└─ Log bugs, verify fixes

1:00 PM - Bug Triage
├─ Review bugs with dev team
├─ Prioritize fixes
├─ Verify bug fixes
└─ Update test cases

4:00 PM - Sprint Preparation
├─ Update documentation
├─ Prepare demo scenarios
├─ Sign off completed stories
└─ Identify technical debt

Key Differences from Traditional:
- QA isn't waiting for "QA phase"
- Testing happens in parallel with dev
- Continuous communication
- Faster feedback loops
⬅️ Shift-Left Testing: Moving Quality Earlier
What is Shift-Left?
Traditional Approach:
Dev → Dev → Dev → Testing → Production
                      ↑
               [Find bugs here]
               [Expensive to fix!]

Shift-Left Approach:

Testing → Dev + Testing → Testing → Production
   ↑             ↑               ↑
[Design]   [Implementation]  [Verification]
[Cheap]       [Moderate]      [Expensive]

The principle: The earlier you find defects, the cheaper they are to fix.
Cost of defects:
- 💰 Found during requirements: $1
- 💰💰 Found during development: $10
- 💰💰💰 Found during QA: $100
- 💰💰💰💰 Found in production: $1,000+
Shift-Left in Practice
1. Requirements Review (Day 0)
Traditional QA: "We'll test it when it's done."
Shift-Left QA:
Story Received: "User can export tasks"
QA Questions (Before Any Code):
❓ What formats? (CSV, Excel, PDF?)
❓ All tasks or filtered tasks?
❓ Include completed tasks?
❓ File size limits?
❓ Email or download?
❓ What if export fails?

Result: 6 ambiguities caught before a single line of code is written!

Action Items:
- ✅ Review every user story before sprint starts
- ✅ Add testability acceptance criteria
- ✅ Identify missing scenarios
- ✅ Flag technical risks
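Parts of this review can even be mechanized. The sketch below is illustrative only (the red-flag word list and the story fields are assumptions, not a standard tool), showing how a pre-sprint testability lint might look:

```python
# Illustrative testability linter for user stories. The vague-term list
# and the story structure are assumptions for this sketch.

VAGUE_TERMS = ["fast", "user-friendly", "easy", "intuitive", "etc"]

def review_story(story):
    """Return a list of questions to raise before development starts."""
    findings = []
    if not story.get("acceptance_criteria"):
        findings.append("No acceptance criteria defined")
    for criterion in story.get("acceptance_criteria", []):
        lowered = criterion.lower()
        for term in VAGUE_TERMS:
            if term in lowered:
                findings.append(f"Vague term '{term}' in: {criterion}")
    if not story.get("error_cases"):
        findings.append("No error/failure scenarios specified")
    return findings

story = {
    "title": "User can export tasks",
    "acceptance_criteria": ["Export should be fast"],
    "error_cases": [],
}
# review_story(story) flags the vague 'fast' and the missing error cases
```

A check like this never replaces the human review, but it catches the easy ambiguities before the sprint even starts.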
2. Unit Test Collaboration (Day 1-3)
Traditional QA: "Unit tests are the developer's job."
Shift-Left QA:
QA + Developer Pair Programming Session:
Developer: "I'm writing validateEmail() function"
QA: "Great! Let's think about test cases:
- Valid emails (with +, with subdomain)
- Invalid emails (missing @, missing domain)
- Null, empty string
- SQL injection attempts
- 320-character email (max length)"
Result: Developer writes comprehensive unit tests,
QA provides edge cases developers might miss

Action Items:
- ✅ Pair with developers on unit tests
- ✅ Review test coverage reports
- ✅ Suggest missing test cases
- ✅ Share testing expertise
3. API Testing (Day 2-4)
Traditional QA: "Wait for UI to be done, then test through UI."
Shift-Left QA:
API Ready → QA Tests API Directly
├─ Faster feedback
├─ Backend bugs found immediately
├─ UI can be built in parallel
└─ API tests become regression suite
Example:
POST /api/tasks
{
"title": "Test Task",
"description": "<script>alert('xss')</script>"
}
Response: 400 Bad Request
{"error": "Invalid characters in description"}
✅ XSS protection verified before UI exists!

Action Items:
- ✅ Test APIs as soon as endpoints exist
- ✅ Use Postman/Insomnia/REST Assured
- ✅ Automate API tests early
- ✅ Don't wait for UI
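That manual check is straightforward to automate. A hedged sketch: the endpoint, payload, and error shape come from the example above and are assumptions about the real API:

```python
# Sketch of the same XSS check as an automatable API assertion. The
# endpoint, payload, and error shape are assumptions -- adapt them
# to your real API.

XSS_PAYLOAD = "<script>alert('xss')</script>"

def check_xss_rejected(status_code, body):
    """True only if the API returned 400 with an error field and did
    not echo the raw script tag back in the response."""
    if status_code != 400:
        return False
    return "error" in body and XSS_PAYLOAD not in str(body)

# Against a live server this would be driven with requests, e.g.:
#   r = requests.post(f"{BASE_URL}/api/tasks",
#                     json={"title": "Test Task", "description": XSS_PAYLOAD})
#   assert check_xss_rejected(r.status_code, r.json())
```

Keeping the assertion logic separate from the HTTP call makes it trivial to unit-test the check itself and to reuse it across endpoints.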
4. Test Automation (Day 1-5)
Traditional QA: "Automate after manual testing proves it works."
Shift-Left QA:
Write automation scripts WHILE features are being developed:
Day 1: Feature branch created → Create test skeleton
Day 2: API ready → Automate API tests
Day 3: UI component ready → Automate happy path
Day 4: Feature complete → Add negative tests
Day 5: Ready for merge → Full automation suite ready

Result: Automation available immediately for regression!

Action Items:
- ✅ Automate in parallel with development
- ✅ Start with API-level tests
- ✅ Add UI tests incrementally
- ✅ Make automation part of Definition of Done
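One way to make the day-1 test skeleton concrete: every planned check exists from the start as an explicit skip, and each is unskipped as its slice of the feature lands. A sketch using Python's unittest (the test names and feature are illustrative):

```python
import unittest

class TestTaskExport(unittest.TestCase):
    """Skeleton written on day 1, before the export feature exists.
    Every test starts as an explicit skip so the suite stays green,
    then is filled in as the matching slice of the feature lands."""

    @unittest.skip("API not implemented yet -- unskip on day 2")
    def test_export_returns_csv_content_type(self):
        ...

    @unittest.skip("UI not implemented yet -- unskip on day 3")
    def test_export_button_triggers_download(self):
        ...

    @unittest.skip("negative paths land on day 4")
    def test_export_over_limit_shows_error(self):
        ...
```

The skips show up in every CI run, so the remaining automation work is visible to the whole team rather than hidden in a backlog.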
🤖 CI/CD Integration: Testing at the Speed of DevOps
The CI/CD Pipeline for QA
```mermaid
graph TD
    A[Code Commit] --> C[Unit Tests<br/>10 sec]
    C --> D{Pass?}
    D -->|No| E[❌ Notify Developer]
    D -->|Yes| F[Integration Tests<br/>3 min]
    F --> G{Pass?}
    G -->|No| E
    G -->|Yes| H[Deploy to Dev]
    H --> I[Smoke Tests<br/>5 min]
    I --> J{Pass?}
    J -->|No| E
    J -->|Yes| K[E2E Tests<br/>20 min]
    K --> L{Pass?}
    L -->|No| E
    L -->|Yes| M[Deploy to Staging]
    M --> N[Manual Acceptance<br/>As needed]
    N --> O[✅ Deploy to Production]
    style A fill:#dbeafe
    style C fill:#86efac
    style F fill:#fbbf24
    style I fill:#fbbf24
    style K fill:#f87171
    style O fill:#4ade80
```
Pipeline Design Principles
1. Fast Feedback
Goal: Developer knows within 5 minutes if they broke something
Pipeline Strategy:
├─ Unit tests: Must run in < 1 minute
├─ Integration tests: Must run in < 5 minutes
├─ E2E tests: Can run in background (20-30 min)
└─ Full suite: Nightly or pre-release only

Example - TaskMaster 3000:
├─ Commit → Unit tests (15 sec) ✅
├─ → Integration tests (3 min) ✅
├─ → Deploy to dev (30 sec) ✅
├─ → Smoke tests (2 min) ✅
└─ Total time to dev environment: < 7 minutes

2. Fail Fast
Run cheapest/fastest tests first:
1. Linting & static analysis (seconds)
2. Unit tests (seconds-minutes)
3. Integration tests (minutes)
4. E2E tests (minutes-hours)
Don't run expensive tests if cheap ones fail!

3. Parallel Execution
Instead of: Test 1 → Test 2 → Test 3 (30 min total)
Do: Test 1 ∥ Test 2 ∥ Test 3 in parallel (10 min total)
Tools:
- Selenium Grid (parallel browser tests)
- Jenkins: Parallel stages
- GitHub Actions: Matrix builds
- CircleCI: Parallelism option

4. Environment Management

Problem: "Works on my machine!" 🤷

Solution: Containerization
├─ Docker for consistent environments
├─ Docker Compose for multi-service setups
├─ Kubernetes for production-like staging
└─ Infrastructure as Code (Terraform)
Example docker-compose.yml:
```yaml
version: '3'
services:
  app:
    build: .
    environment:
      - NODE_ENV=test
  db:
    image: postgres:14
  redis:
    image: redis:7
```

Test Types in CI/CD
Commit Stage (Every commit, < 5 min)
✅ Linting (ESLint, Pylint)
✅ Unit tests (900 tests, 15 sec)
✅ Code coverage check (> 70%)
✅ Security scan (npm audit, Snyk)

Acceptance Stage (Every PR, < 15 min)
✅ Integration tests (450 tests, 8 min)
✅ API contract tests (Pact)
✅ Component tests
✅ Build Docker image

Deployment Stage (After merge, < 30 min)
✅ Deploy to dev environment
✅ Smoke tests (20 critical paths)
✅ E2E tests (120 tests, 20 min)
✅ Performance tests (basic)

Release Stage (Scheduled/on-demand)
✅ Full regression suite
✅ Load testing
✅ Security penetration tests
✅ Cross-browser tests (BrowserStack)
✅ Accessibility tests

⚖️ Risk-Based Test Prioritization
The Harsh Reality
Manager: "We deploy in 2 hours. Can you test everything?"
QA: "No, but I can test what matters!"
This is where risk-based testing saves you.
Risk Assessment Matrix
| Feature | Business Impact | User Impact | Complexity | Change Freq | Risk Score | Testing Priority |
|---|---|---|---|---|---|---|
| Authentication | Critical (5) | All users (5) | Moderate (3) | Stable (1) | 🔴 14/20 | 1 - Full |
| Task Reminders | High (4) | Many (4) | Complex (4) | New (5) | 🔴 17/20 | 1 - Full |
| Task Export | Low (2) | Some (3) | Simple (2) | Stable (1) | 🟢 8/20 | 3 - Smoke |
| Theme Selection | Minimal (1) | Preference (2) | Simple (1) | Stable (1) | 🟢 5/20 | 4 - Skip |

Scoring: each factor is rated 1-5 and the four ratings are summed (max 20). 🔴 = high risk (full testing), 🟢 = low risk (smoke test or skip).
The 2-Hour Emergency Test Plan
Scenario: Critical hotfix needs to deploy in 2 hours. What do you test?
EMERGENCY TESTING PROTOCOL - 2 HOUR LIMIT
Hour 1: Critical Paths (80% of value)
├─ [15 min] Authentication flow
│   └─ Login, logout, session management
├─ [15 min] Core task operations
│   └─ Create, edit, complete, delete tasks
├─ [15 min] Data integrity
│   └─ No data loss, corruption, or leaks
├─ [10 min] Payment (if applicable)
│   └─ Checkout, payment processing
└─ [5 min] Smoke test in production-like environment

Hour 2: Risk Areas (15% of value)
├─ [20 min] Areas changed by hotfix
│   └─ Thorough testing of modified code
├─ [15 min] Integration points
│   └─ External APIs, database, email
├─ [10 min] Security basics
│   └─ SQL injection, XSS, auth bypass
├─ [10 min] Error handling
│   └─ Graceful degradation
└─ [5 min] Final sanity check

Skipped (5% of value):
❌ Nice-to-have features
❌ Cosmetic UI elements
❌ Rarely-used functionality
❌ Comprehensive browser testing

Document what was NOT tested!

Risk-Based Test Selection Algorithm
```python
from dataclasses import dataclass

@dataclass
class Test:
    name: str
    business_impact: int    # 1-5
    user_impact: int        # 1-5
    defect_history: int     # 1-5
    code_complexity: int    # 1-5
    change_frequency: int   # 1-5
    execution_time: int     # minutes
    score: int = 0

def prioritize_tests(tests, time_available_minutes):
    """Score tests by weighted risk, then greedily select the
    highest-risk tests that fit in the available time."""
    for test in tests:
        test.score = (
            test.business_impact * 5 +
            test.user_impact * 4 +
            test.defect_history * 3 +
            test.code_complexity * 2 +
            test.change_frequency * 3
        )
    tests.sort(key=lambda t: t.score, reverse=True)
    selected_tests = []
    time_used = 0
    for test in tests:
        if time_used + test.execution_time <= time_available_minutes:
            selected_tests.append(test)
            time_used += test.execution_time
        else:
            break
    # Return both what we will run and what we are consciously skipping
    return selected_tests, tests[len(selected_tests):]
```

🤝 Effective Collaboration Patterns
The Three Amigos Meeting
Who: Developer + Product Owner + QA
When: Before development starts
Duration: 30-60 minutes per story
Goal: Shared understanding
Agenda:
1. PO Explains the "Why" (5 min)
├─ Business value
├─ User problem being solved
└─ Success criteria

2. Developer Explains the "How" (10 min)
├─ Technical approach
├─ Dependencies
├─ Risks
└─ Time estimate

3. QA Explains the "What If" (15 min)
├─ Edge cases
├─ Error scenarios
├─ Testability concerns
├─ Non-functional requirements
└─ Test strategy

4. Together: Refine Acceptance Criteria (15 min)
├─ What does "done" look like?
├─ What are we NOT building?
├─ What can break?
└─ How will we test it?

5. Agreements & Actions (5 min)
├─ Final acceptance criteria
├─ Definition of Done
├─ When testing can start
└─ Who does what

Example Three Amigos Output:
STORY: Export Tasks to CSV
BEFORE Three Amigos:
"User can export tasks to CSV"

AFTER Three Amigos:
✅ Export filtered tasks (respects current filters)
✅ Include: title, description, status, priority, due date
✅ Format: CSV with UTF-8 encoding
✅ File naming: tasks_export_YYYY-MM-DD_HH-MM.csv
✅ Max 10,000 tasks per export
✅ Download in browser (not email)
✅ Error handling: Show error if > 10,000 tasks
✅ Tested: Chrome, Firefox, Safari
✅ Performance: Should complete in < 3 seconds

Questions Resolved:
Q: Include completed tasks? A: Yes, if they're in current filter
Q: Excel support? A: Future story, CSV only for now
Q: Email option? A: Future story
Q: Column order? A: Title, Status, Priority, Due Date, Description

Definition of Done:
☐ Feature implemented
☐ Unit tests written (>80% coverage)
☐ API tests automated
☐ Manual testing completed
☐ Works in Chrome, Firefox, Safari
☐ Documentation updated
☐ PO sign-off received

Daily Stand-ups (QA Perspective)
Bad Stand-up:
QA: "Yesterday I tested stuff. Today I'll test more stuff. No blockers."Good Stand-up:
QA: "Yesterday I tested the reminder feature - found 3 bugs,
2 are high priority (shared in Slack #bugs channel).
Today I'm finishing reminder testing and starting on
the export feature once the API is ready.
Blocker: I need the staging environment fixed - it's been
down since yesterday afternoon. Mike, can we sync after
standup?"QA-Specific Updates to Share:
- Test coverage status
- Critical bugs found
- Blocked test scenarios
- Release readiness status
Bug Triage Sessions
When: 2-3 times per week, 30 minutes
Who: Dev Lead + QA Lead + PM
Process:
For each bug:
1. Verify reproducibility (2 min)
├─ Can we reproduce it?
└─ Is it actually a bug?
2. Assess severity (2 min)
├─ How many users affected?
├─ Workaround available?
└─ Data loss risk?
3. Decide priority (1 min)
├─ Fix now (critical)
├─ Fix this sprint (high)
├─ Backlog (medium/low)
└─ Won't fix (not a bug, by design)
4. Assign owner (1 min)

Total: ~6 min per bug
10 bugs = a 60-minute session

Priority Framework:
🔴 P0 - Critical (Fix immediately, all hands on deck)
├─ Production down
├─ Data loss/corruption
├─ Security vulnerability
└─ Payment processing broken

🟡 P1 - High (Fix this sprint)
├─ Major feature broken
├─ Affects many users
├─ No workaround
└─ Blocks other work

🟢 P2 - Medium (Next sprint)
├─ Minor feature broken
├─ Affects some users
├─ Workaround exists
└─ Cosmetic issues

🔵 P3 - Low (Backlog)
├─ Edge cases
├─ Rare scenarios
├─ Polish items
└─ Nice-to-have

⚪ P4 - Won't Fix
├─ By design
├─ Out of scope
├─ Not reproducible
└─ Obsolete

📊 Modern QA Metrics & Dashboards
Metrics That Matter in Agile
Sprint Health Dashboard:
📊 Sprint 24 - Week 2

VELOCITY & CAPACITY
├─ Story Points Committed: 45
├─ Story Points Tested: 38 (84%)
├─ Story Points Done: 35 (78%)
└─ At Risk: 2 stories (not tested yet)

TEST EXECUTION
├─ Manual Tests: 85% complete (120/141)
├─ Automated Tests: Running in CI (234/234 passing)
└─ Exploratory Sessions: 2/3 complete

DEFECTS
├─ Opened This Sprint: 12
├─ Fixed This Sprint: 10
├─ Still Open: 5 (2 critical, 3 medium)
└─ Escaped from Last Sprint: 1

AUTOMATION
├─ New Tests Automated: 8
├─ Flaky Tests Fixed: 2
└─ Coverage Trend: 72% → 75% ↗️

RELEASE READINESS: 🟡 YELLOW
✅ All critical bugs fixed
⚠️ 2 stories still in testing
⚠️ 1 high-priority bug open

Leading vs Lagging Indicators
Lagging Indicators (Rearview Mirror):
- Bugs found in production
- Test coverage percentage
- Number of test cases
Leading Indicators (Windshield):
- Shift-left activities (requirements review, three amigos)
- Automated test growth rate
- Time to detect bugs (MTTD)
- % of stories with acceptance tests before coding
Focus on leading indicators to prevent problems!
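Two of these indicators, sketched as simple calculations (the field names are illustrative):

```python
# One lagging and one leading indicator as plain functions.
# Story fields are illustrative assumptions.

def escape_rate(bugs_found_in_prod, bugs_found_total):
    """Lagging: share of known defects that escaped to production."""
    if bugs_found_total == 0:
        return 0.0
    return bugs_found_in_prod / bugs_found_total

def stories_with_tests_first(stories):
    """Leading: % of stories whose acceptance tests existed before coding."""
    if not stories:
        return 0.0
    early = sum(1 for s in stories if s["tests_written_before_code"])
    return 100.0 * early / len(stories)

sprint = [
    {"id": "TM-101", "tests_written_before_code": True},
    {"id": "TM-102", "tests_written_before_code": True},
    {"id": "TM-103", "tests_written_before_code": False},
    {"id": "TM-104", "tests_written_before_code": True},
]
# stories_with_tests_first(sprint) -> 75.0
```

The escape rate tells you how last sprint went; the tests-first percentage tells you how the next one is likely to go.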
🏁 Conclusion: QA in the Fast Lane
Modern QA isn't about being a gatekeeper at the end of development. It's about being a quality advocate throughout the entire process.
Key Takeaways
- Shift left aggressively - Get involved early, catch issues when they're cheap to fix
- Automate strategically - Fast feedback loops in CI/CD, pyramid-shaped test suite
- Prioritize ruthlessly - You can't test everything, so test what matters most
- Collaborate continuously - Three Amigos, pairing, daily communication
- Measure what helps - Leading indicators predict quality, lagging indicators confirm it
Your Modern QA Checklist
Sprint Planning:
☐ Review all stories before sprint starts
☐ Estimate testing effort honestly
☐ Identify risks and dependencies
☐ Plan Three Amigos sessions
☐ Block time for automation

During Sprint:
☐ Three Amigos for each story
☐ Start testing as soon as possible
☐ Provide fast feedback to developers
☐ Automate while developing
☐ Update automation in CI/CD

Sprint Review:
☐ Demo tested features
☐ Report on quality metrics
☐ Discuss escaped defects
☐ Share lessons learned
☐ Plan next sprint improvements

The Modern QA Mindset
Old mindset: "Find all the bugs before release"
New mindset: "Help the team build quality in from the start"
Old mindset: "QA is responsible for quality"
New mindset: "Everyone is responsible for quality, QA enables it"
Old mindset: "Manual testing is QA's job"
New mindset: "Automation enables strategic manual testing"
Old mindset: "We're a bottleneck, development waits for us"
New mindset: "We work in parallel, enabling faster delivery"
What's Next?
In Part 9, we'll tackle one of the most important QA skills: Writing Bug Reports That Actually Get Fixed.
We'll cover:
- The anatomy of a great bug report
- How to communicate with developers effectively
- Prioritizing and triaging bugs
- Following up without being annoying
- Building trust with the development team
Coming Next Week:
Part 9: Bug Reports That Get Fixed - The Art of Communication 🐛
📚 Series Progress
✅ Part 1: Requirement Analysis
✅ Part 2: Equivalence Partitioning & BVA
✅ Part 3: Decision Tables & State Transitions
✅ Part 4: Pairwise Testing
✅ Part 5: Error Guessing & Exploratory Testing
✅ Part 6: Test Coverage Metrics
✅ Part 7: Real-World Case Study
✅ Part 8: Modern QA Workflow ← You just finished this!
⬜ Part 9: Bug Reports That Get Fixed
⬜ Part 10: The QA Survival Kit
🧮 Quick Reference Card
Daily QA Workflow
MORNING:
☐ Check CI/CD pipeline status
☐ Review overnight test results
☐ Attend daily standup
☐ Respond to bug assignments

MIDDAY:
☐ Test completed stories
☐ Pair with developers on new stories
☐ Write/update test automation
☐ Three Amigos sessions

AFTERNOON:
☐ Exploratory testing
☐ Update test documentation
☐ Bug triage/verification
☐ Plan tomorrow's work

BEFORE LEAVING:
☐ Update story status in Jira
☐ Document blockers
☐ Check CI/CD still green
☐ Tomorrow's prep

Three Amigos Template
STORY: [Story title]
DATE: [Date]
ATTENDEES: [Dev, PO, QA]
BUSINESS VALUE:
[Why are we building this?]
TECHNICAL APPROACH:
[How will we build it?]
EDGE CASES & RISKS:
[What could go wrong?]
ACCEPTANCE CRITERIA:
[What does done look like?]
DEFINITION OF DONE:
☐ [Checklist items]
QUESTIONS RESOLVED:
Q: [Question]
A: [Answer]
ACTIONS:
☐ [Who does what]

Remember: Quality is a team sport. Be the teammate that makes everyone better! 🎯
What's your biggest Agile/DevOps QA challenge? Share in the comments!