Asjad Khan

How To Adopt Playwright the Right Way

Thinking about adopting Playwright but don't know where to start? Here's everything you need to be aware of before getting started.


Playwright is undoubtedly gaining popularity. Teams are migrating their test suites from Cypress and Selenium, and they are choosing Playwright as their default framework for new projects.

The reasons are simple: it's fast, it handles multiple browsers natively, it works across web and mobile, and the developer experience is great.

If you're considering Playwright for the first time, you might be wondering where to start. Here's a heads-up: adoption doesn't have to be complex. You don't need to have everything figured out on day one. Most successful teams start small, learn as they go, and build their testing practice iteratively.

This guide covers what you need to think about as you adopt Playwright: from building your initial case and setting up your first tests, to scaling your suite and integrating with CI. Whether you're starting from scratch or have testing experience with other tools, you'll find practical guidance on the decisions that matter in the long run.

1. Build a Strong Business Case

Adopting Playwright, or any framework, requires investment. Your team needs training time. You may need to rewrite existing tests, set up environments, and integrate CI. Before you start, know why you're doing this and what outcomes your team will measure to track success.

The investment makes sense when you can show significant improvement in the following areas:

  • Mean Time to Repair (MTTR): How fast can you identify and fix test failures? Playwright's tracing and video capture make this process faster than traditional frameworks, providing a visual, step-by-step record of why the test failed.
  • Lead time: Check how quickly tests are running. Parallel execution and intelligent orchestration matter here.
  • Confidence: Can your team trust the test suite? Flakiness erodes confidence. When developers see tests fail for no clear reason, they start ignoring failures entirely. Playwright's auto-waiting reduces common flakiness causes by waiting for elements to be ready before interacting with them, rather than relying on hardcoded timeouts.

These metrics will help you build a case showing how switching to Playwright benefits your team.

Create a one-pager for your stakeholders that outlines baseline metrics, expected improvements, the timeline, and the investment required. This will guide your team better when adoption becomes challenging.
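The tracing and video capture mentioned above are opt-in; a minimal config sketch (the retry-based trace policy shown is one common choice, not the only one):

```typescript
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    trace: 'on-first-retry',      // record a trace when a test is retried
    video: 'retain-on-failure',   // keep video only for failing tests
    screenshot: 'only-on-failure',
  },
});
```

With this in place, every failure comes with the visual evidence your MTTR metric depends on.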

2. Get Stakeholder Buy-In

Before moving to the technical work, get alignment across teams. Test automation involves the whole technical team. Engineering owns the code, QA owns the test strategy, Ops manages the CI costs, and the Product teams care about the test coverage. Therefore, aligning them becomes important to avoid friction later.

  • Discuss each team's current pain points: The engineering team may be dealing with slow CI feedback, QA may be spending a lot of time fixing flaky tests, and so on. Mapping these problems and explaining how Playwright solves them makes it easier for teams to understand the impact of the change.
  • Define clear ownership: Clear ownership means clear accountability. Ensure there is clarity on who writes tests, who maintains the framework and CI setup, who approves major changes, etc.
  • Set a review cadence: Hold quarterly check-ins to review metrics such as test runtime, flake rate, and test coverage. Regular checkpoints keep Playwright aligned with the team's needs and catch issues before they become blockers.

3. Set up a Pilot Program

Everyone wants to achieve big things quickly, so teams often try to rewrite the entire suite and migrate the whole team at once. This almost always fails. Run a focused pilot project instead.

  • Pick a narrow scope: Choose one critical user flow and the associated team. It might be your checkout flow, authentication flow, or onboarding sequence. The idea is to start with something important enough that success matters, but small enough that failure doesn't derail everything.
  • Define exit criteria before starting: Set target success metrics. For example: tests run in under two minutes, the flake rate is below 2%, and the team completes onboarding within a week. Be specific with your goals.
  • Run the pilot for 2–4 weeks: Long enough to surface problems and solve them without losing momentum.

Once you've run the pilot, you'll have an idea of:

  • How long migration actually takes based on the effort required for one real flow
  • Where your test data breaks down, since the pilot exposes state issues and inconsistencies early
  • What your CI setup needs to look like, given the pilot validates runtime, flake rate, and resource usage
  • What training actually works, based on which aspects the team struggled with most

This information sets you up for a full rollout.

Now, as you expand, migrate one more critical flow, and include another team. This gives you time to fix problems before rolling it out organization-wide.

4. Train Your Team

Test quality grows with team familiarity with Playwright, which makes training essential.

  • Align the team on Playwright's core principles. Show how Playwright handles auto-waiting, locators, and test isolation. Run through examples together so everyone understands the shift from their previous framework.
  • Start with a single test file that checks one particular page of your app and grow iteratively. You don't need to define everything upfront. Let standards emerge as your team writes tests and encounters patterns. Use eslint-plugin-playwright from the start to catch common mistakes and establish good defaults automatically.
// Example folder structure
tests/
  ├── auth/
  │   ├── login.spec.ts
  │   ├── logout.spec.ts
  │   └── pages/
  │       └── login-page.ts
  ├── checkout/
  │   ├── add-to-cart.spec.ts
  │   ├── apply-coupon.spec.ts
  │   └── pages/
  │       └── checkout-page.ts
  └── fixtures/
      ├── db.ts
      └── auth.ts
  • Have experienced developers sit with new team members while they write their first few tests. This catches misunderstandings early and builds confidence.
  • Set aside time weekly for questions, as people will have them. Answering them quickly avoids any future blockers.
  • Once the team has spent considerable time writing Playwright tests, build a documented library of reusable examples for common patterns, such as handling login, seeding test data, and waiting for network requests.
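The eslint-plugin-playwright setup mentioned above can be wired into ESLint's flat config; a sketch (the file glob is an assumption matching the folder structure shown earlier):

```javascript
// eslint.config.mjs - assumes eslint and eslint-plugin-playwright are installed
import playwright from 'eslint-plugin-playwright';

export default [
  {
    // Apply the plugin's recommended rules only to test files
    ...playwright.configs['flat/recommended'],
    files: ['tests/**/*.spec.ts'],
  },
];
```

This catches mistakes like missing awaits on assertions before they ever reach code review.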

5. Define Your Test Strategy and Architecture

Now that your team is trained, you need a clear strategy for what to test and how to structure those tests. Not everything should be tested end-to-end; if you try, you'll end up with a slow, flaky, and expensive test suite. A clear strategy from the start determines how quickly and reliably your tests run.

  • Follow the test pyramid: At the bottom: unit tests (fast, isolated, and most of the tests should lie here). In the middle: integration tests (test components working together). At the top: end-to-end tests (test full user flows, fewer of them). Sometimes, teams invert this pyramid and later wonder why their CI is slow and their tests are flaky.
  • Once you decide what belongs in end-to-end coverage, the next concern is how those tests are written. Starting with straightforward, inline tests helps teams understand Playwright's behavior before introducing additional structure.
// login.spec.ts - Start with the simplest approach
import { test, expect } from '@playwright/test';

test('user can log in', async ({ page }) => {
  await page.goto('/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('password');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page).toHaveURL('/dashboard');
});
  • Use the Page Object Model (POM) when it works best: For complex applications with many pages and repeated interactions, organizing selectors into page objects reduces duplication and makes tests easier to maintain. However, Playwright's features, such as locators, auto-waiting, and fixtures, often eliminate the need for POM in simpler suites. Start with Playwright's native patterns and introduce POM only when complexity necessitates it.
// login-page.ts
import { type Locator, type Page } from '@playwright/test';

export class LoginPage {
  readonly page: Page;
  readonly emailInput: Locator;
  readonly passwordInput: Locator;
  readonly submitButton: Locator;

  constructor(page: Page) {
    this.page = page;
    this.emailInput = page.getByLabel('Email');
    this.passwordInput = page.getByLabel('Password');
    this.submitButton = page.getByRole('button', { name: 'Sign in' });
  }

  async goto() {
    await this.page.goto('/login');
  }

  async login(email: string, password: string) {
    await this.emailInput.fill(email);
    await this.passwordInput.fill(password);
    await this.submitButton.click();
  }
}

// login.spec.ts
import { test, expect } from '@playwright/test';
import { LoginPage } from './login-page';

test('user can log in', async ({ page }) => {
  const loginPage = new LoginPage(page);
  await loginPage.goto();
  await loginPage.login('user@example.com', 'password');
  await expect(page).toHaveURL('/dashboard');
});

This keeps tests readable, and changes to the UI don't break multiple tests.

  • Use fixtures for setup: Playwright fixtures are shared objects available across tests, with automatic setup and teardown. Test-scoped fixtures reset their state before every test, ensuring isolation.
// fixtures/auth.ts
import { test as base, type Page } from '@playwright/test';

export const test = base.extend<{ authenticatedPage: Page }>({
  authenticatedPage: async ({ page }, use) => {
    // Log in before the test
    await page.goto('/login');
    await page.getByLabel('Email').fill('user@example.com');
    await page.getByLabel('Password').fill('password');
    await page.getByRole('button', { name: 'Sign in' }).click();
    await page.waitForURL('/dashboard');
    
    // Use the logged-in page in tests
    await use(page);
  },
});

// In your test file - just add authenticatedPage to test params
test('user can view dashboard', async ({ authenticatedPage }) => {
  // authenticatedPage is already logged in - no manual setup needed
  await expect(authenticatedPage.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});

(The test remains clean. By adding authenticatedPage to the test parameters, Playwright instantiates the authenticated state automatically. This keeps setup logic out of test bodies and avoids repetition.)

  • Use stable selectors: Avoid brittle selectors, such as nth-child or position-based selectors, and use role-based selectors (getByRole), label-based selectors (getByLabel), and test IDs (getByTestId) instead.
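For example (the button names and test ID here are illustrative, not from a real app):

```typescript
// Brittle: breaks as soon as the layout changes
await page.locator('ul > li:nth-child(3) > button').click();

// Stable: tied to what the user actually sees
await page.getByRole('button', { name: 'Add to cart' }).click();
await page.getByLabel('Email').fill('user@example.com');

// Fallback when semantic markup isn't available
await page.getByTestId('add-to-cart').click();
```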

6. Control Your Environment and Test Data

Teams usually struggle here. If your test environment is unstable or your test data is unpredictable, your tests will likely be unreliable as well.

  • Separate environments: Local (your machine), Review/Preview (temporary environments for each PR), Staging (production-like), Production (never test here). Tests run in different environments and need different data strategies.
  • Check for test data consistency: Tests should produce the same results on every run. Pulling random data from a database leads to flakes. Instead, seed your data.
// Seed a user before running tests
import { test, expect } from '@playwright/test';

let testUser: { id: string; email: string; name: string; role: string };

test.beforeEach(async ({ request }) => {
  testUser = {
    id: 'test-user-' + Date.now(),
    email: `test-${Date.now()}@example.com`,
    name: 'Test User',
    role: 'admin',
  };
  
  await request.post('http://localhost:3001/api/admin/users', {
    data: testUser,
  });
});

// Now use the seeded user in tests
test('displays user name after login', async ({ page }) => {
  await page.goto('/login');
  await page.getByLabel('Email').fill(testUser.email);
  await page.getByLabel('Password').fill('password');
  await page.getByRole('button', { name: 'Sign in' }).click();
  
  // Verify the seeded user's name appears
  await expect(page.getByText(testUser.name)).toBeVisible();
});
  • Isolate tests from each other: Playwright provides excellent test isolation out of the box: each test gets a fresh browser context. However, data conflicts can still occur, and one test's data shouldn't affect another. Use unique identifiers for test data and clean up after tests when necessary.
  • Handle your database carefully: Reset it before each test run (slow but simple), use transactions that roll back after each test (faster), or spin up ephemeral databases in containers (more complex but most reliable). Choose based on your constraints.
  • Manage secrets properly: Test credentials, API keys, and database passwords need to be secure. Use environment variables instead of hardcoded strings, and use separate credentials for testing and production.
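For unique identifiers, a small helper keeps parallel workers from colliding; a sketch (the helper name and naming scheme are illustrative, and the worker index would come from test.info().parallelIndex inside a test):

```typescript
// Generate collision-free test data per worker: the worker index separates
// parallel workers, and a timestamp + random nonce separates repeated runs.
function uniqueEmail(workerIndex: number, label = 'user'): string {
  const nonce = `${Date.now().toString(36)}-${Math.floor(Math.random() * 1e6).toString(36)}`;
  return `${label}-w${workerIndex}-${nonce}@example.test`;
}
```

Seeding each test with data named this way means parallel runs never fight over the same records.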

7. Build for Reliability From the Start

For adoption to stick, your suite has to be reliable from the start.

  • Identify what's causing flaky tests: Typically, the culprits are race conditions (tests attempt to click before the element is ready), timing assumptions (tests wait too long or not long enough), environment issues (data inconsistency, network timeouts), or test design (over-reliance on timing).
  • Use Playwright's auto-waiting properly: Playwright waits for elements to be visible before interacting with them. This fixes a lot of flakes automatically.
// Playwright waits for the button to be visible and stable
await page.getByRole('button', { name: 'Submit' }).click();
  • Timeouts: Playwright's default test timeout is 30 seconds. Usually, that's fine, but long-running operations need explicit waits. Prefer Playwright's auto-waiting APIs (like expect().toBeVisible()) over fixed delays. Don't just increase the timeout globally, as that can hide real problems.
  • Run tests in parallel: Parallel tests are fast but introduce complexity. Each test needs isolated data. If you have 100 tests and run 10 in parallel, design your environment to handle that.
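Targeted timeout overrides can be sketched like this (the texts and durations are illustrative):

```typescript
// Give one slow assertion more time, without raising the global timeout
await expect(page.getByText('Report ready')).toBeVisible({ timeout: 60_000 });

// Or give one known-slow test more room
test('generates a large report', async ({ page }) => {
  test.setTimeout(120_000);
  // ...
});
```

Keeping overrides local means a genuinely broken test still fails fast everywhere else.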

8. Handle Authentication Without Logging In on Every Test

Authentication is one of the most common setup steps in E2E (end-to-end) tests, and how you handle it affects test speed. Tests that log in through the UI every single time are slow and fragile. But you still need to verify that authentication works, so balance dedicated login tests with faster options.

  • Reuse sessions when possible: Log in once, save the authenticated state (cookies, local storage), and reuse it for subsequent tests. This is much faster than logging in for every test.

    However, two caveats are worth considering: the storage state can become stale if backends invalidate sessions, and reusing a single state across parallel workers can cause collisions in apps that don't handle concurrent sessions well. Monitor for these issues and regenerate state as needed.

// Save authenticated state
test.beforeAll(async ({ browser }) => {
  const context = await browser.newContext();
  const page = await context.newPage();
  
  await page.goto('http://localhost:3000/login');
  await page.getByLabel('Email').fill('test@example.com');
  await page.getByLabel('Password').fill('password');
  await page.getByRole('button', { name: 'Sign in' }).click();
  
  // Save the authenticated state
  await context.storageState({ path: 'auth.json' });
  await context.close();
});

// Reuse it in tests
test('user can view dashboard', async ({ browser }) => {
  const context = await browser.newContext({ storageState: 'auth.json' });
  const page = await context.newPage();
  
  await page.goto('http://localhost:3000/dashboard');
  // Already authenticated, no login needed
});
  • Still test the real login flows: Write isolated tests specifically for login, password reset, and logout; these verify that authentication itself works correctly. Don't make them prerequisites for other tests.
  • Handle SSO, OAuth, and 2FA with care and planning: If your app uses single sign-on or multi-factor authentication, you cannot simply fill in a username and password. Mock the OAuth provider for tests. Use a test user without two-factor authentication (2FA) and generate authentication tokens via backend API calls to bypass the login UI for most tests.
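Generating auth state via a backend call can be sketched like this; the test-only endpoint, cookie name, and URLs are assumptions to adapt to your app:

```typescript
// Hypothetical: a test-only backend endpoint mints a session token,
// so most tests skip the login UI (and any 2FA) entirely.
test('dashboard loads for API-authenticated user', async ({ browser, request }) => {
  const response = await request.post('http://localhost:3000/api/test/session', {
    data: { email: 'test@example.com' },
  });
  const { token } = await response.json();

  const context = await browser.newContext();
  await context.addCookies([
    { name: 'session', value: token, url: 'http://localhost:3000' },
  ]);
  const page = await context.newPage();
  await page.goto('http://localhost:3000/dashboard'); // already authenticated
});
```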

9. Decide When to Mock, Stub, or Hit Real Services

Beyond authentication, you'll need to decide how to handle external dependencies. Tests that hit real external services are slow and unreliable, whereas tests that mock everything are fast but can miss integration issues. Finding what fits where is important.

  • Hit real services for critical flows, like the checkout process, while using a sandbox account and test environment.
  • Mock or stub selectively: Stub third-party APIs that your app depends on but aren't core to what you're testing, like analytics services, monitoring tools, or external integrations that don't affect core business logic.
// Mock an API response and verify the mocked data appears on the page
test('user sees personalized greeting', async ({ page }) => {
  // Mock the user profile API
  await page.route('**/api/user/profile', route => {
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({
        name: 'Test User',
        email: 'test@example.com',
        plan: 'Premium'
      })
    });
  });
  
  await page.goto('http://localhost:3000/dashboard');
  
  // Verify the mocked data appears on the page
  await expect(page.getByText('Welcome back, Test User')).toBeVisible();
  await expect(page.getByText('Premium Plan')).toBeVisible();
});
  • Use contract tests for verification: If you're stubbing a third-party API, verify separately (with a contract or integration test) that your stub reflects how the real API behaves. Keep in mind that contract tests validate the interface but don't guarantee end-to-end correctness with the real service. Periodically test against the actual service to catch drift.
  • Rate limits and flaky vendors: Some services are genuinely flaky. If you depend on them, either mock them or build retry logic with exponential backoff.
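The retry-with-exponential-backoff idea can be sketched as a small helper (the function name and defaults are illustrative, not from a particular library):

```typescript
// Retry an async call, doubling the delay after each failed attempt:
// baseMs, 2*baseMs, 4*baseMs, ... Re-throws the last error once retries run out.
async function withBackoff<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Wrap only the flaky vendor call, not whole tests, so real regressions still fail immediately.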

10. Set Up CI/CD Integration and Orchestration

We now have reliable tests running locally. What's next? Integrating them into your CI/CD pipeline. CI integration is where Playwright meets your development workflow. If you get this right, the tests naturally become an integral part of your pipeline.

Choose your CI strategy: Decide when tests run; on every commit, only on pull requests, or on a schedule. Each has its own trade-offs: running everything takes time, while skipping tests can let bugs slip through. Running all tests on every commit isn't feasible for medium- to large-sized teams. Most teams use a hybrid approach, where smoke tests run on every PR, the full suite runs on merge to main, and long regression suites run nightly or on a schedule.
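One lightweight way to implement the hybrid approach is tagging tests in their titles and filtering by tag in each pipeline stage (the @smoke tag and test name are illustrative):

```typescript
// Tag smoke tests in the title, then filter in CI:
//   PRs:           npx playwright test --grep @smoke
//   merge to main: npx playwright test        (full suite)
test('checkout completes @smoke', async ({ page }) => {
  // ...
});
```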

Set up testing in different browsers: Run your tests across Chrome, Firefox, and Safari to catch browser-specific issues. Configure this in your Playwright config.

// playwright.config.ts
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
});

Set up GitHub Actions to run your tests: Use GitHub Actions workflows to run tests automatically on every PR:

# Example: GitHub Actions workflow
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npx playwright test
      - uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: playwright-report
          path: playwright-report/

Use sharding for faster runs: For large test suites, split tests across multiple machines using Playwright's built-in sharding:

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npx playwright test --shard=${{ matrix.shard }}/4
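With sharding, each shard produces its own partial report. Recent Playwright versions ship a blob reporter and a merge-reports CLI to combine them; a sketch (artifact names are illustrative):

```yaml
      # Each shard writes a blob report and uploads it as an artifact
      - run: npx playwright test --shard=${{ matrix.shard }}/4 --reporter=blob
      - uses: actions/upload-artifact@v4
        with:
          name: blob-report-${{ matrix.shard }}
          path: blob-report/
  # A follow-up job downloads all blob reports into ./all-blob-reports and runs:
  #   npx playwright merge-reports --reporter html ./all-blob-reports
```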

Set up reporting: Track test results, artifacts, and trends over time. Tools like Currents automatically capture and organize traces, videos, and logs from your CI runs. When a test fails, you immediately see the trace, video, and error details in one place, so you don't have to dig through CI logs or deal with expired artifacts. Currents also provides dashboards for monitoring flake rates, test duration, and pass rates across your entire suite.

Manage artifacts thoughtfully: Videos, traces, and screenshots consume storage space. Keep failed test artifacts for 1–4 weeks (or longer if using long-term reporting tools). Keep passing artifacts for a few days. Set retention policies to avoid storage cost complications.
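In GitHub Actions, retention is set per artifact; a sketch (the 14-day figure is an example, not a recommendation for every team):

```yaml
      - uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 14
```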

Proper orchestration reduces costs and feedback time: If you're running 200 tests, sharding across 10 machines is much faster than running them sequentially. However, you need to maintain a balance: more machines result in higher costs, while fewer machines lead to slower feedback. Currents handles this orchestration automatically, distributing tests optimally based on their runtime.

Gate merges on test results: If tests fail, block the PR. This prevents bad code from reaching the main branch. However, ensure your CI is reliable enough that failures indicate genuine issues.

11. Plan a Structured Migration from Cypress or Selenium

If you're moving from an existing testing framework, migration requires careful planning.

Most teams take a gradual approach by writing new tests in Playwright while keeping existing tests running. This lets you migrate step-by-step without blocking development or losing test coverage.

Migrating from Cypress?

We've written a comprehensive guide covering common migration patterns, pitfalls to avoid, and strategies for running both frameworks side-by-side during transition. Read the full guide: Cypress to Playwright Migration.

Migrating from Selenium?

The key challenges differ slightly. Selenium teams often carry over explicit waits and brittle XPath selectors. Playwright's auto-waiting and semantic locators (like getByRole and getByLabel) eliminate most of these issues, but the mental model shift takes time.

General Migration Advice

  • Start with one critical user flow as a pilot
  • Prioritize high-value tests (checkout, authentication, core features)
  • Run both frameworks in CI during migration for safety
  • Migrate gradually: new features get Playwright tests, and old tests move over as you touch the code they cover

The key thing to note is that you don't have to migrate the entire suite in one go. It's a steady process toward a more maintainable test suite.

12. Organize Your Test Suite

As your test suite grows, organization becomes non-negotiable: how you structure your tests determines whether your suite scales. Poor organization turns maintenance into a chore.

Use consistent folder structures: Group your tests by feature or by layer. Feature-based organization (auth, checkout, dashboard) helps teammates find tests quickly. Layer-based organization groups tests by type, helping teams maintain different quality standards, speeds, and CI pipelines for each testing layer. Within each feature, separate page objects, fixtures, and utilities into their own folders for clarity.

Name tests properly: Test files should describe what they're testing; for example, login-with-credentials.spec.ts, not test1.spec.ts.

13. Track the Right Quality Metrics

You can only improve what you measure; therefore, tracking the right metrics becomes essential.

  • Pass rate along with test count: A 95% pass rate with 1,000 tests is different from a 95% pass rate with 100 tests.
  • Pass rate per project or browser: Track pass rates separately for each browser (Chrome, Firefox, Safari) and project if you're running multiple configurations. Browser-specific failures often indicate compatibility issues.
  • Monitor the flake rate: The percentage of tests that fail intermittently without any code changes. If it's above 2–3%, the suite has a problem that needs addressing.
  • Test duration and feedback loop: For validating a PR, tests should run in minutes to provide fast feedback. Larger suites with thousands of tests use sharding to keep individual runs quick. Long-running end-to-end regression suites are fine for scheduled runs (nightly, weekly) but shouldn't block PRs. If your PR tests take longer than expected even with sharding, revisit your test selection or infrastructure.
  • Coverage of critical user flows: Verify that your tests cover the key user journeys that are most important to your business. Unlike code coverage (which measures lines executed), scenario coverage tracks whether important workflows are tested. A test suite with high code coverage but missing critical checkout or authentication flows provides false confidence.
  • Track bugs that escape to production: Ensure your tests catch any issues before reaching the users. When bugs do escape, categorize them by root cause: was it a missing test, an ignored flaky test, a gap in test data, or an environment difference? This helps you improve your test strategy systematically.
  • Review metrics weekly: This will help you identify trends. If you see a rise in the flake rate, immediate intervention is required.
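One common way to compute the flake rate above: a test counts as flaky if it failed at least once but passed on a retry within the same run. A sketch (the Run/Attempt shapes are hypothetical, not a Playwright or Currents API):

```typescript
type Attempt = 'pass' | 'fail';

interface TestRun {
  name: string;
  attempts: Attempt[]; // in order; the last entry is the final outcome
}

// Percentage of tests that failed at least once but ultimately passed
function flakeRatePercent(runs: TestRun[]): number {
  if (runs.length === 0) return 0;
  const flaky = runs.filter(
    (r) => r.attempts.includes('fail') && r.attempts[r.attempts.length - 1] === 'pass',
  ).length;
  return (flaky / runs.length) * 100;
}
```

Tests that fail on every attempt are genuine failures, not flakes, which is why only fail-then-pass runs are counted.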

Currents gives you visibility into these metrics across all your test runs, in one place. More importantly, when a test fails, the trace and video are captured automatically, saving you from digging through CI logs to understand what happened.

14. Plan for Ongoing Maintenance

Adoption is one part; the other is maintenance. Unmaintained tests become liabilities; they slow down CI, produce false failures, and weaken team confidence.

  • Assign clear ownership: Decide who owns the test framework, who triages flaky tests, and who reviews test code. Clear ownership keeps the suite well structured and simplifies maintenance.
  • Treat tests the same way you treat production code. They need review and refactoring.
  • Upgrade Playwright regularly: New versions bring bug fixes and improvements. Staying current is easier than absorbing six months of changes at once.
  • Playwright stays in sync with browser changes, but those changes can sometimes break tests. Each Playwright release pins specific browser builds, so pinning your Playwright version controls when browser updates land. Test new browser versions in a separate environment before rolling them out to your full suite.
  • A test that's flaky, slow, and never catches anything only creates complications, but deletion shouldn't be automatic. Before removing tests, check whether they protect actual user behavior or product risks. Some flaky tests reveal real application issues like race conditions, timing problems, or environment instabilities rather than just poor test quality. Investigate the root cause first. If a test is truly redundant or doesn't align with real user flows or business risks, consolidate or remove it. Retain tests that validate critical functionality, even if they need refactoring to improve stability.

15. Avoid Common Anti-Patterns

As you build and maintain your suite, watch for these common mistakes. Teams fall into the same traps repeatedly, and knowing them helps you avoid them.

  • Over-mocking: If you mock every external service, your tests may pass while the actual app fails. Mock strategically, not universally.
  • Balance splitting the tests: Tests that cover an entire user journey in a single test are hard to debug and fail for multiple reasons. However, splitting every flow into micro-tests that don't accurately represent real user behavior isn't better either. So, find the middle ground by testing meaningful workflows (such as login, adding to cart, and checkout) as cohesive flows, but keep them focused enough that failures pinpoint specific issues.
  • Brittle selectors make maintenance tough: If you're updating selectors every time design changes, something is wrong. Prefer semantic selectors (getByRole, getByLabel) when possible. When semantic markup isn't available, use test IDs (data-testid) rather than tightly coupled CSS selectors that break with layout changes. Work with your frontend team to make your app testable.
  • Not testing error cases: Test what happens when things break: error messages, failed requests, and edge cases. That's where bugs hide.

The practices outlined above cover the major areas of adoption, from strategy to execution.


Next Steps

Adopting Playwright doesn't mean rewriting everything at once. Successful teams start with a pilot, set clear goals, and scale based on what they learn.

Setting things up correctly from the beginning, with a strong strategy, proper environment control, and a proactive approach to flaky tests, lets you reap the benefits Playwright offers: faster test execution, more reliable results, and a better developer experience.

If you're planning to adopt Playwright, explore Currents to run tests faster with smart orchestration and debug failures with comprehensive dashboards.



Trademarks and logos mentioned in this text belong to their respective owners.
