Surviving Your First Week with Zero Knowledge Transfer
A practical survival guide for QE engineers joining a team with no handoff from the previous QE
The Reality: You're Starting from Scratch
It's your first week as a QE, and there's no previous QE to show you the ropes. The last person left months ago (or there never was one). You're staring at unfamiliar code, unclear expectations, and a sinking feeling.
Take a breath. You've got this. This guide shows you exactly how to survive—and thrive—in a zero-KT (Knowledge Transfer) environment.
Day 1: Set Up & Reconnaissance
Morning: Administrative Setup (2-3 hours)
✓ Access checklist:
- [ ] GitHub/GitLab repository access (all relevant repos)
- [ ] JIRA/issue tracker access
- [ ] Slack/Teams channels (dev, QA, operations)
- [ ] VPN and internal network access
- [ ] Test environment URLs and credentials
- [ ] CI/CD dashboard access (Jenkins, GitHub Actions)
- [ ] Database access (read-only initially)
- [ ] Monitoring tools (Grafana, Datadog, logs)
- [ ] Shared documentation (Confluence, Notion, Google Drive)
Pro tip: Create a checklist and send it to your manager. They'll appreciate the proactivity.
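While you wait on account approvals, you can at least confirm your local tooling. Here is a minimal shell sketch; the tool list is only an example, so swap in whatever your team actually uses:

```shell
#!/bin/sh
# Pre-flight check: is the local CLI tooling installed?
# The tool list is an example - adjust it to your team's stack.
for tool in git docker kubectl jq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "OK      $tool"
  else
    echo "MISSING $tool"
  fi
done
```

Send the MISSING lines along with your access checklist; they often get resolved in the same ticket.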
Afternoon: Repository Discovery (3-4 hours)
Step 1: Clone and explore repositories
# Clone main application repo
git clone https://github.com/company/shopping-platform.git
cd shopping-platform
# Check the README
cat README.md
# Look for test directories
find . -name "*test*" -type d
find . -name "*spec*" -type d
# Count test files
find . -name "*Test.java" | wc -l
find . -name "*.spec.js" | wc -l
Step 2: Identify testing patterns
# Find test frameworks being used
grep -r "import.*junit" .
grep -rE "describe|it\(" . --include="*.js"
grep -r "RestAssured" . --include="*.java"
# Find configuration files
find . -name "*.xml" -name "*test*"
find . -name "testng.xml"
find . -name "jest.config.js"
What to note:
- Test framework (JUnit, TestNG, Jest, etc.)
- Test directory structure
- How many tests exist (rough count)
- Any README files in test directories
- Configuration files
Evening: Read, Don't Code (1-2 hours)
Resist the urge to start coding! Instead:
- Read existing tests - Start with simple ones:
// Example: ProductServiceTest.java
// This tells you what the ProductService does and how it's tested
@Test
public void testGetProductById() {
Product product = productService.getById("SKU-123");
assertNotNull(product);
assertEquals("Laptop", product.getName());
}
// ☝️ Learning: There's a ProductService with a getById method
// It returns Product objects with names.
Understand test naming patterns:
- testMethodName_Scenario_ExpectedBehavior
- givenX_whenY_thenZ
- Find the pattern your codebase uses
Find the most recent tests - They show current standards:
git log --name-only --since="6 months ago" -- "*Test.java" | grep Test.java | sort | uniq
Day 2: Run Everything That Exists
Morning: Local Environment Setup
Goal: Run tests locally successfully
# Java project example
mvn clean install # Might take 10-30 minutes first time
mvn test # Run all tests
# Node.js project example
npm install
npm test
# Note any failures or errors - they're clues!
When things fail (they will):
🔴 Build fails with missing dependencies
→ Check if you need to install databases, Redis, etc.
→ Ask team for Docker Compose setup
🔴 Tests fail due to configuration
→ Look for .env.example or config.example files
→ Ask for environment variable values
🔴 Tests fail randomly (flaky tests!)
→ Note which ones - this is valuable information
→ Rerun them to confirm flakiness
→ Document the flaky tests for later fixing
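A cheap way to confirm flakiness is to rerun the suspect test in a loop and record the pass rate. A minimal shell sketch; `run_test` is a placeholder, so replace its body with your real command (e.g. `mvn -q test -Dtest=CartServiceTest`):

```shell
#!/bin/sh
# Rerun a suspected flaky test N times and report the pass rate.
run_test() {
  true  # placeholder so the sketch runs anywhere - substitute your real test command
}

runs=10
passes=0
i=1
while [ "$i" -le "$runs" ]; do
  if run_test; then
    passes=$((passes + 1))
  fi
  i=$((i + 1))
done
echo "passed $passes/$runs"
```

Anything below 10/10 from a test that "should" pass belongs on your flaky-test list, along with the failure output.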
🔴 Hundreds of test failures
→ Don't panic! Tests might be outdated
→ Focus on understanding WHY they fail
→ Check git history: when did they last pass?
Afternoon: Map the Test Coverage
Create a simple spreadsheet/document:
| Feature Area | Has Tests? | Test Type | Status | Notes |
|---|---|---|---|---|
| Product Search | Yes | Unit + API | ✅ Passing | 45 tests |
| Shopping Cart | Yes | API | ⚠️ Flaky | 3/20 flaky |
| Checkout | Partial | E2E | ❌ Failing | Need to fix |
| Payment | No | - | ❌ No tests | HIGH PRIORITY |
| Order Mgmt | Yes | Unit | ✅ Passing | Good coverage |
This map is gold. It shows:
- What's tested (you can modify with confidence)
- What's not tested (high-risk areas)
- What's broken (needs fixing)
- Where to focus your efforts
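To seed the "Has Tests?" column, a rough per-directory count of test files is enough. A minimal shell sketch - it builds a throwaway tree so it runs anywhere; in a real repo you'd run the `find` pipeline from the repo root:

```shell
#!/bin/sh
# Rough per-directory test counts to seed the coverage map.
# The demo tree below exists only so this sketch is self-contained.
demo=$(mktemp -d)
mkdir -p "$demo/cart" "$demo/search"
touch "$demo/cart/CartServiceTest.java" \
      "$demo/search/SearchServiceTest.java" \
      "$demo/search/FilterTest.java"

# Strip the filename, then count files per remaining directory path
find "$demo" -name "*Test.java" | sed 's|/[^/]*$||' | sort | uniq -c

rm -rf "$demo"
```

The counts go straight into the spreadsheet; zero hits for a feature directory is exactly the gap you're hunting for.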
Evening: Document Your Findings
Create a "QE Onboarding Notes" document:
# QE Onboarding - [Your Name] - Week 1
## Repository Structure
- Main repo: shopping-platform (Java 11, Spring Boot)
- Frontend repo: shopping-ui (React, TypeScript)
- Test location: `/src/test/java/`
## Test Frameworks
- Backend: JUnit 5, RestAssured, Mockito
- Frontend: Jest, React Testing Library
- E2E: Selenium WebDriver (older), considering Playwright
## Current Test Status
- Total API tests: ~450
- Passing: 425 (94%)
- Flaky: 15 (need investigation)
- Failing: 10 (known issues, tickets exist)
## Test Data Strategy
- Uses test database: `test_shopping_db`
- Data reset before each test class
- Test users: test1@example.com - test50@example.com
## CI/CD Pipeline
- GitHub Actions for PR builds
- Jenkins for nightly regression
- Tests run in Docker containers
- Average pipeline time: 25 minutes
## Questions to Ask
1. Who owns the payment service? (Tests are missing)
2. What's the plan for flaky tests?
3. Can we upgrade Selenium tests to Playwright?
Day 3-4: Shadow and Learn
Strategy: Learn by Observation
Morning standups:
- Note which features are in progress
- Note which bugs are reported
- Identify who owns what
Pair with developers:
"Hey [Dev Name], I'm the new QE getting up to speed.
Could I shadow you for 30 minutes while you test your feature?
I want to understand how you verify your changes."
What to observe:
- How they test locally
- What APIs they call
- What data they use
- What edge cases they check
- What they DON'T test (gaps for you to fill)
Decode the Jira/Ticket System
Search for patterns:
- Search: "test failed" or "flaky test" - shows reliability issues
- Search: "production bug" - shows what escapes to prod
- Filter: Bugs created in last 3 months - recent pain points
- Filter: "QA" label - tickets related to testing
Read bug descriptions:
- How are bugs reported?
- What information is expected?
- What do reproduction steps look like?
Read test task tickets:
- What level of detail is expected?
- How are test cases documented?
- Where are test results reported?
Day 5: Make Your First Contribution
Low-Risk Wins to Build Confidence
Option 1: Fix a Flaky Test
// Before (flaky due to hardcoded wait)
Thread.sleep(2000); // 😱 Don't do this
clickSubmitButton();
// After (explicit wait for element)
WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
wait.until(ExpectedConditions.elementToBeClickable(submitButton));
submitButton.click();
Option 2: Improve Test Readability
// Before
@Test
public void test1() {
assertEquals(true, service.process("123", "xyz", 5));
}
// After
@Test
public void testProcessOrder_WithValidData_ReturnsSuccess() {
// Arrange
String orderId = "123";
String userId = "xyz";
int quantity = 5;
// Act
boolean result = service.process(orderId, userId, quantity);
// Assert
assertTrue(result, "Order processing should succeed with valid data");
}
Option 3: Add a Missing Test
@Test
public void testAddToCart_WithZeroQuantity_ShouldRejectRequest() {
// Arrange
CartRequest request = new CartRequest("SKU-123", 0);
// Act & Assert
assertThrows(InvalidQuantityException.class, () -> {
cartService.addItem(request);
});
}
Submit your first PR:
Title: Fix flaky test in CheckoutServiceTest
Description:
Fixed flaky test `testCheckoutFlow` by replacing Thread.sleep
with explicit WebDriverWait. Test now consistently passes.
Before: Failed 3/10 runs
After: Passed 20/20 runs
Related: JIRA-456 (Flaky test ticket)
Week 2: Deepen Your Knowledge
Monday-Tuesday: Architecture Deep Dive
Create a service dependency diagram:
┌─────────┐ ┌──────────┐ ┌─────────┐
│ UI │─────→│ API │─────→│ Service │
└─────────┘ │ Gateway │ └─────────┘
└──────────┘ │
│ ↓
↓ ┌──────────┐
┌─────────┐ │ Database │
│ Cache │ └──────────┘
└─────────┘
Ask in 1-on-1 with manager/lead:
- "Can you walk me through a typical user journey from UI to database?"
- "What are the top 3 most critical features I should understand?"
- "Which services are most prone to bugs?"
- "What's our current testing strategy and coverage goals?"
Wednesday-Thursday: Test Your First Feature
Pick up a small test task:
- Look for tickets labeled "testing" or "QA"
- Choose something marked "low complexity"
- Ask: "Is this a good first task for me?"
Write comprehensive test:
@Test
public void testProductSearch_MultipleFilters() {
// Test: Search laptops under $1000 with 5-star rating
SearchRequest request = SearchRequest.builder()
.keyword("laptop")
.maxPrice(1000)
.minRating(5)
.build();
SearchResponse response = productService.search(request);
// Verify all results match criteria
assertFalse(response.getProducts().isEmpty());
response.getProducts().forEach(product -> {
assertTrue(product.getName().toLowerCase().contains("laptop"));
assertTrue(product.getPrice() <= 1000);
assertTrue(product.getRating() >= 5);
});
}
Friday: Reflection and Planning
Document what you've learned:
- 5 most important services
- 3 critical user flows
- Top 3 risks/gaps in testing
- Questions still unanswered
Present to your manager:
Week 2 Summary:
✅ Wrote 5 new tests for Product Search feature
✅ Fixed 2 flaky tests in CheckoutServiceTest
✅ Documented test coverage gaps in Payment Service
⏳ Need: Access to production logs for debugging
📚 Learning: How our promotion pricing algorithm works
Survival Tips for Zero-KT Environments
1. The Code IS the Documentation
// When docs are missing, read tests:
@Test
public void testPricingRules() {
// This test SHOWS how pricing works
Price price = pricingService.calculate(
100,   // base price
10,    // discount percent
0.08   // tax rate
);
// Expected: (100 - 10%) + 8% tax = $97.20
assertEquals(97.20, price.getFinalAmount(), 0.01);
}
2. Git History Is Your Friend
# Find who last worked on a file
git log --follow --all -- path/to/ProductService.java
# See what changed recently
git log --since="1 month ago" --oneline
# Find commits related to testing
git log --grep="test" --oneline
# Who to ask about payment service?
git log -- src/payment/ --format="%an" | sort | uniq -c | sort -rn
3. Leverage Slack/Teams
# Search for context
Slack search: "payment service" in:qa-channel
Slack search: "flaky test" from:@tech-lead
Slack search: "test environment down"
# Find the right person to ask
Search who posts in #backend-dev
Look at channel descriptions for ownership
Check "pinned items" in channels
4. Build Your Own Knowledge Base
Create a personal wiki/notes:
- API endpoints and their purposes
- Test data that works (users, SKUs, promo codes)
- Common errors and solutions
- Who to ask about what
- Useful commands and queries
5. Don't Be Afraid to Ask (Smartly)
❌ Bad: "How does checkout work?"
✅ Good: "I'm testing the checkout flow. I see we call 3 APIs:
inventory, pricing, and payment. Does inventory get
reserved before or after payment succeeds?"
❌ Bad: "Tests are failing, help!"
✅ Good: "CartServiceTest is failing with NullPointerException
on line 45. I see it's trying to call mockUser.getId()
but mockUser is null. Is this a test data setup issue?"
❌ Bad: "Where's the documentation?"
✅ Good: "I'm looking for API documentation for the Payment
Service. I checked Confluence but didn't find it.
Is there a different place to look, or should I read
the OpenAPI spec in the code?"
6. Celebrate Small Wins
Week 1: Ran tests locally ✅
Week 2: Fixed first flaky test ✅
Week 3: Wrote first new test ✅
Week 4: Found a real bug in testing ✅
Week 5: Improved test coverage by 5% ✅
Red Flags to Watch For
🚩 No tests exist for critical features (payment, checkout) → Bring this up in 1-on-1, propose a plan
🚩 Tests haven't been updated in months → They might be obsolete, verify they still work
🚩 Tests always disabled/skipped
@Disabled("Flaky test, will fix later") // ⚠️ "Later" never comes
→ Make it a priority to fix or delete
🚩 No one knows how to run tests → Create clear README with setup instructions
🚩 Tests only run manually, not in CI → Integrate into pipeline ASAP
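Getting tests into CI doesn't have to wait for a big initiative. Here is a minimal GitHub Actions sketch, assuming a Maven build; the file path, Java version, and test command are assumptions, so adapt them to your project:

```yaml
# .github/workflows/tests.yml - minimal sketch for a Maven project
name: tests
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '11'
      - run: mvn -B test   # adjust to your build tool
```

Even a workflow this small turns "tests exist but nobody runs them" into a visible pass/fail signal on every PR.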
Your 30-Day Milestone Goals
By Day 30, you should have:
- ✅ Complete picture of test coverage
- ✅ Fixed at least 3-5 flaky tests
- ✅ Written 10+ new tests for a feature
- ✅ Documented testing setup and process
- ✅ Identified top 3 testing gaps/risks
- ✅ Built relationships with 3-5 team members
- ✅ Contributed to at least one sprint successfully
- ✅ Created a testing roadmap for next 3 months
Final Thoughts
Zero-KT is actually a hidden opportunity:
- You question everything (good!)
- You build from first principles
- You create better documentation
- You become the expert faster
- You shape the testing culture
Remember:
- It's okay to feel overwhelmed - everyone does
- Progress over perfection
- Ask questions early and often
- Document everything you learn
- You're not behind - you're ramping up
The first month is the hardest. By month 2, you'll be comfortable. By month 3, you'll be the person answering questions. You've got this! 🚀
Helpful Resources to Bookmark
- Your team's Slack channels (use search liberally)
- Git repository (use `git log` and `git blame`)
- JIRA/ticket system (search for patterns)
- CI/CD dashboard (learn from build logs)
- Test environment URLs (bookmark them all)
- Shared drives/Confluence (whatever docs exist)
Remember: In a zero-KT environment, YOU are creating the knowledge base for the next person. Make it count!