Dumebi Okolo

How to Run Playwright Tests Without the Pain: 14 Lessons Learned after 500+ Tests

A reader-submitted guide sharing 14 lessons learned from writing 500 Playwright tests — essential tips for beginners to avoid common pitfalls.


When I first started working with Playwright tests, I thought getting a green checkmark meant success. My growing collection of test failures quickly taught me otherwise.

Tests that looked solid on my machine failed in CI. A test suite that ran in minutes at the beginning ballooned into half-hour pipelines. Worst of all, I had no idea which failures were caused by bad code, flaky tests, or plain environment quirks.

Over the course of writing a lot of Playwright tests, I've made my fair share of mistakes. The good news? I also learned how to avoid them. This article is a collection of hard lessons that helped me stabilize my test suite, speed up execution, and trust my test results.

Here are 14 things I wish I had known before writing 500 Playwright test scripts.

1. Structure Your Test Cases From the Start

When I first started writing tests with Playwright, I dumped all my test files into a single project folder. That worked for the first ten tests, maybe. By the time I reached my fiftieth, I was scrolling endlessly to find the right file, and when something failed, I couldn't easily tell which part of the app it belonged to. Organising the test directory properly turned out to be essential.

    # old structure
    /tests
      login.test.ts
      checkout.ts
      utils.ts
      test1.ts
      test2.ts
    # what I do now
    /tests
      /e2e
        checkout.spec.ts
        login.spec.ts
      /helpers
        auth.ts
        data.ts

Keeping up with this structure has helped me:

  • Keep test files in the Playwright project folder focused: one flow per file.
  • Use a consistent naming convention for test files, like .spec.ts or .test.ts.
  • Group files logically: e2e (end-to-end), integration, and helpers should live in separate folders.
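This layout pairs well with Playwright's configuration. A minimal sketch, assuming the folder names from the layout above:

```typescript
// playwright.config.ts -- minimal sketch matching the layout above
import { defineConfig } from "@playwright/test";

export default defineConfig({
  // Only files under /tests/e2e are picked up as tests, so
  // helpers in /tests/helpers can never run by accident.
  testDir: "./tests/e2e",
  testMatch: "**/*.spec.ts",
});
```

With testMatch pinned to .spec.ts, a stray utils.ts dropped into the wrong folder stays inert instead of breaking the run.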

2. Always Start a Test With a Fresh Browser Context

Sharing a browser context or state across tests feels like a shortcut until it causes random failures. In one suite, a test logged a user in and another test reused that state, which created hidden dependencies between them. It is always better to start each test with a brand-new browser context.

// Reusing the same browser context
let context;
test.beforeAll(async ({ browser }) => {
  context = await browser.newContext();
});
test("first test", async () => {
  const page = await context.newPage();
  await page.goto("/login");
});
// Fresh context per test
test("first test", async ({ browser }) => {
  const context = await browser.newContext();
  const page = await context.newPage();
  await page.goto("/login");
  await context.close(); // clean up so nothing leaks into the next test
});

A few things make this work:

  • Using a fresh browser context guarantees isolation. It may sometimes feel slower, but it prevents subtle bugs that only appear when tests run in parallel.
  • Using Currents shows me when parallel test execution starts interfering due to shared state, so I spot flaky tests faster instead of rerunning suites blindly.
  • Playwright can capture an execution trace for each test, which makes these isolation problems visible.

3. Handle Async/Await Correctly

One missing await can cost you hours of debugging. Playwright actions are asynchronous: Playwright starts an action but doesn't immediately wait for it to finish. Forgetting an await moves your test on before the action completes. Locally, it might work once; in CI, it fails almost every time.

// missing await
test("checkout flow", async ({ page }) => {
  page.click("#checkout"); // action not awaited
  await page.fill("#card", "4111111111111111");
  await page.click("#submit");
  await expect(page.locator("#confirmation")).toBeVisible();
});
//each async action is awaited
test("checkout flow", async ({ page }) => {
  await page.click("#checkout");
  await page.fill("#card", "4111111111111111");
  await page.click("#submit");
  await expect(page.locator("#confirmation")).toBeVisible();
});

The hard part about this particular lesson was that Playwright won’t always throw a clear error; the test just fails later. The fix is discipline, really. Always await Playwright actions.
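That discipline can be backed by tooling. If your project uses typescript-eslint, its no-floating-promises rule flags any un-awaited promise, including a forgotten Playwright action. A sketch of the relevant config; the file name and setup are assumptions about your project:

```javascript
// .eslintrc.js -- sketch; assumes typescript-eslint is installed
module.exports = {
  parser: "@typescript-eslint/parser",
  parserOptions: {
    // Type information is required for this rule to work
    project: "./tsconfig.json",
  },
  plugins: ["@typescript-eslint"],
  rules: {
    // Errors on promises that are neither awaited nor handled,
    // which catches a missing `await page.click(...)`
    "@typescript-eslint/no-floating-promises": "error",
  },
};
```

Once this is on, a bare page.click("#checkout") fails the lint step instead of flaking in CI.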

4. Keep Tests Independent of Each Other

In my early Playwright tests, one test created a user, and another test logged in as that same user. If the first failed, the second failed too, even when the login logic was perfectly fine. That's a bad test dependency. Full test isolation matters.

// tests depend on each other
test("create user", async ({ page }) => {
  await page.goto("/signup");
  await page.fill("#name", "Alice");
  await page.click("#submit");
});
test("login user", async ({ page }) => {
  await page.goto("/login");
  await page.fill("#name", "Alice"); // depends on previous test
  await page.click("#submit");
});
// independent data per test
test("login user", async ({ page }) => {
  await page.goto("/signup");
  await page.fill("#name", "Bob");
  await page.click("#submit");
  await page.goto("/login");
  await page.fill("#name", "Bob");
  await page.click("#submit");
});

All the tests should set up their own data or use fixtures. This way, you can run tests in isolation without worrying about order.

Independence got even more valuable for me with Currents. I can rerun only the failed tests in the CI without dragging the whole suite along. It has helped me configure test retry strategies better.
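One way to make "each test sets up its own data" systematic is a custom fixture. The sketch below assumes a signup flow like the one above; the selectors and user shape are illustrative, not your app's real API:

```typescript
import { test as base } from "@playwright/test";

// Hypothetical fixture: every test gets its own freshly created user.
type UserFixtures = { user: { name: string; email: string } };

const test = base.extend<UserFixtures>({
  user: async ({ page }, use) => {
    // Set up: create a unique user through the signup flow
    const user = {
      name: `user-${Date.now()}`,
      email: `user+${Date.now()}@example.com`,
    };
    await page.goto("/signup");
    await page.fill("#name", user.name);
    await page.click("#submit");
    // Hand the user to the test body
    await use(user);
    // Teardown could delete the user here if the app exposes an API for it
  },
});

test("login user", async ({ page, user }) => {
  await page.goto("/login");
  await page.fill("#name", user.name);
  await page.click("#submit");
});
```

Because the fixture runs per test, any test that declares `user` is self-sufficient and safe to run alone, retried, or in parallel.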

5. Manage Test Data Smartly

Hard-coded values are brittle. I learned this the hard way when multiple Playwright test runs tried to register the same test@example.com. The strategy is to generate unique values for each run.

//hard-coded value
await page.fill('#email', 'test@example.com');
// unique values per run
await page.fill('#email', `test+${Date.now()}@example.com`);

That's only one strategy. There are many ways to manage test data:

  • Use mock APIs where full backend state isn't needed.
  • Clean up data after you run tests when possible.
  • Generate random but valid values for each run.
  • Create scenarios that cover multiple user roles and multiple browsers.
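A tiny helper makes the "random but valid values" point concrete. This is a sketch; the prefix and domain are placeholders:

```typescript
// Hypothetical helper: generate a unique, valid email per test run.
function uniqueEmail(prefix: string = "test"): string {
  // A random suffix guards against two parallel workers
  // hitting the same millisecond timestamp.
  const suffix = Math.random().toString(36).slice(2, 8);
  return `${prefix}+${Date.now()}-${suffix}@example.com`;
}

// Usage inside a test:
// await page.fill("#email", uniqueEmail());
```

A timestamp alone would be enough for serial runs; the random suffix is what keeps parallel workers from colliding.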

6. Use Parallelism Without Collisions

Without care, parallelism turns into a race condition generator. Parallel execution is one of Playwright's biggest strengths, but it only pays off if your tests are isolated.

# Run tests in parallel
npx playwright test --workers 4

Best practices for using parallelism without collisions:

  • Randomizing test data to avoid collisions.
  • Using separate browser contexts per test.
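Both practices combine with Playwright's config-level parallelism controls. A sketch; the worker count is an assumption about your CI capacity:

```typescript
// playwright.config.ts -- sketch of parallelism settings
import { defineConfig } from "@playwright/test";

export default defineConfig({
  // Run tests within a single file in parallel, not just across files
  fullyParallel: true,
  // Cap workers in CI where machines are smaller; use the default locally
  workers: process.env.CI ? 4 : undefined,
});
```

If your data is random and your contexts are separate, raising the worker count should only change speed, never results; when it changes results, you have found a hidden dependency.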

7. Debug With the Right Features

I started off spending far too much time squinting at raw CI logs, until I realised that Playwright gives you much better debugging features:

# Run with inspector
npx playwright test --debug
// Pause execution interactively
await page.pause();

Combine with:

  • Trace viewer: A timeline of actions
  • Explore execution logs: Which step failed?
  • Screenshots and video (test execution screencasts): What the browser saw
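The trace viewer deserves a special mention. You can record traces from the CLI and open one afterwards; the path below is illustrative, as Playwright writes traces under test-results/:

```shell
# Record a trace for every test in this run
npx playwright test --trace on

# Open a recorded trace in the trace viewer
npx playwright show-trace test-results/checkout-flow/trace.zip
```

A trace replays the whole test: every action, the DOM at each step, network calls, and console output, which usually beats any amount of log reading.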

8. Failed Tests: Use Retry Strategies for Flaky Tests

Sometimes you can't fix a flaky test immediately, and the failures keep coming back. In those cases, a test retry strategy can stabilize pipelines.

// playwright.config.ts
import { defineConfig } from "@playwright/test";

export default defineConfig({
  // Give failing tests 3 retry attempts
  retries: 3,
});

This retry strategy doesn't solve the root problem, but it reduces noise. The key is to track flaky tests and fix them before they multiply. Currents helps here: it highlights flaky tests over time, so instead of guessing, you see exactly which ones deserve your attention first.
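A variation worth considering: retry only in CI, so a local failure still fails loudly while the pipeline absorbs occasional flake. This mirrors the pattern in Playwright's generated config:

```typescript
// playwright.config.ts -- retry in CI only
import { defineConfig } from "@playwright/test";

export default defineConfig({
  // Two retry attempts in CI; none locally, where you want a
  // flaky test to fail in your face so you actually fix it
  retries: process.env.CI ? 2 : 0,
});
```

Tests that pass only on a retry are reported as "flaky" rather than "passed", which keeps them visible instead of silently absorbed.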

9. Don't Ignore Test Reports

It's tempting to treat a fully passing run as the end of the story. It isn't; how you read test results matters just as much. I have learned to study my test reports.

# Open the HTML report
npx playwright show-report

With test reports, you can answer:

  • Which Playwright test cases fail repeatedly?
  • Are failures tied to specific browsers or test suites?
  • Do execution logs show patterns like timeouts?

10. Optimize End-to-End Test Scenarios for Realism

In testing, realism means designing tests that mirror how users actually interact with your product, using real-world data, flows, and conditions. Among all testing types, end-to-end (E2E) Playwright tests offer the most value because they simulate complete user journeys, not just simplified or isolated examples. But with that realism also comes higher cost and complexity.

test("checkout flow", async ({ page }) => {
  await page.goto("/shop");
  await page.click("text=Add to cart");
  await page.click("text=Checkout");
  await page.fill("#card", "4111111111111111");
  await page.click("#submit");
  await expect(page.locator("text=Order Confirmed")).toBeVisible();
});

So you should:

  • Cover multiple users and roles.
  • Create realistic scenarios.
  • Simulate network requests, including failures.
  • Keep end-to-end tests focused on business-critical flows.

Playwright helps here on two fronts: it creates an isolated browser context for each test, which prevents state from leaking between tests and causing flakiness, and it drives a real browser input pipeline, so automated actions are indistinguishable from those of an actual user.
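The "simulate network failures" point can be sketched with page.route. The endpoint, selectors, and error message below are assumptions about the app under test:

```typescript
import { test, expect } from "@playwright/test";

// Sketch: simulate a payments-API outage during checkout.
test("checkout shows an error when the payment API is down", async ({ page }) => {
  // Abort every request matching the (hypothetical) payments endpoint
  await page.route("**/api/payments", (route) => route.abort());

  await page.goto("/shop");
  await page.click("text=Add to cart");
  await page.click("text=Checkout");
  await page.click("#submit");

  // The app is assumed to surface a user-facing error state
  await expect(page.locator("text=Payment failed")).toBeVisible();
});
```

The same route handler can also fulfill requests with canned responses, which is how you mock an API where full backend state isn't needed.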

11. Know the Playwright Test Automation Features

One of the biggest lessons I learned after 500+ Playwright tests is that you don't need to reinvent the wheel. Playwright's core library is powerful, but there are tools in the ecosystem that make the experience even smoother. Whether you need test data management, CI/CD optimization, or visualization, Playwright has the tools you need.

  • The Playwright Inspector is a GUI (Graphical User Interface) tool that lets you record user actions, explore locators, generate test cases, and step through tests to see what is happening in the browser.
  • The Test Explorer view provides a central panel for navigating test files and seeing pass/fail results.

12. Leveraging AI to Enhance Test Runs

The other eye-opener has been the role of AI in test automation within Playwright test cases. When I first heard about "AI in testing," I assumed it meant flaky auto-generated tests. But in practice, AI in testing helps in more grounded ways, like:

  • Spotting patterns in flaky tests you might miss
  • Suggesting optimizations for slow test execution
  • Generating starter test scenarios for new features

Currents recently released an MCP server that integrates with AI-powered assistants. This means you can query your test results, test reports, etc., conversationally. You can ask things like "show me all the flaky login tests in Chrome over the last two weeks" instead of digging manually. It doesn't replace thoughtful test design, but it does remove a lot of grunt work, freeing you to focus on writing independent tests and crafting reliable suites while AI surfaces the insights you'd otherwise miss.

13. Embrace Headed Mode and Execution Traces When Debugging

When I first started debugging failed tests, I got stuck with logs and screenshots. But some issues only appeared when running tests in headed mode, where you see the actual browser window in action.

For instance, I once had a checkout flow test that kept timing out. Running in headless mode didn't reveal the issue, but in headed mode, I immediately saw a dynamic control (a loading spinner) that was blocking the "Pay" button. Advice: Use the headed mode for quick debugging, and execution traces for deeper investigation. Together, they'll save you hours of guesswork.
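Switching into headed mode is a one-flag change:

```shell
# Run the whole suite with a visible browser window
npx playwright test --headed

# Or focus on the one failing file while you watch it
npx playwright test tests/e2e/checkout.spec.ts --headed
```

Watching a single failing spec in headed mode is usually enough to spot blocking overlays, spinners, and layout shifts that logs never mention.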

14. Monitor and Evolve Your Test Suite

It's tempting to think of your test suite as "done" once you've covered your core flows. But after running hundreds of end-to-end tests, I learned that flaky tests creep in as your app evolves. Even though Playwright drives a real browser input pipeline, so automated clicks and key presses are indistinguishable from a real user's, the suite itself still needs ongoing attention:

  • Repetitive login operations slow down your test execution.
  • New test scenarios emerge as features span multiple browsers or multiple tabs.

In my case, Currents helped here by making it easier to explore execution logs and track how all the tests were behaving across releases. By monitoring, I could spot patterns like "login-related failures increasing after the last auth change" or "test run times creeping up."

To Wrap Up

Playwright offers a wide range of tools to make testing and debugging easier. Flaky tests are part of every automation engineer's story when building modern web apps, even with the most basic test. What matters is whether you build habits that minimize them: isolation, async discipline, smart data handling, and good debugging practices. These 14 lessons took me months of trial and error to learn; I wrote this so you won't need to repeat the mistakes. Combine everything here with lessons of your own and you cover the full spectrum of scaling Playwright, from fundamentals like test isolation and parallel execution to the workflows that make your testing future-proof.

My final piece of advice: Don't run Playwright tests blind. If you're scaling beyond a few dozen tests, visibility is essential. That's where a platform like Currents helps, by surfacing flaky tests, making execution logs easy to explore, and showing trends in test health. The sooner you get control over your test execution and web testing, the sooner you'll trust your tests again. And trust is the real outcome every QA engineer is after.



Trademarks and logos mentioned in this text belong to their respective owners.
