How To Speed Up Playwright Tests: 7 Tips From Experts
As Playwright suites grow, runtimes climb and CI slows. Learn strategies on how to speed up Playwright tests and reduce delays in your workflow.

Your pipeline is green, but the pull request sits idle: when you review the logs, you see the test suite took 47 minutes to run. During that time, engineers waited for feedback, switched context to other work, and project timelines quietly slipped.
This situation is common for many teams using Playwright. As the test suite grows, execution times naturally increase. A run that normally takes around five minutes can gradually stretch much longer, raising CI costs, slowing reviews, and affecting workflow efficiency.
Even though Playwright is designed to be fast, performance can decline when specific patterns build up. Common factors that increase runtimes include:
- Unnecessary hard waits
- Sequential execution that could run in parallel
- Underused resources
- Repeated authentication
At Currents, we analyze Playwright performance across thousands of pipelines. Some adjustments can reduce runtime noticeably, while others may unintentionally increase it. This guide shares practical techniques to improve Playwright test performance and speed up your workflows.
7 Expert Ways to Speed Up Playwright Tests
Here are seven practical ways to speed up your Playwright tests, streamline your CI workflows, and ensure your team spends less time waiting and more time shipping features.
1. Use Parallelism Effectively
On CI, some CPU cores stay idle when only one test is running. Playwright Test runs test files in parallel across multiple workers: each worker starts its own browser instance, and each test executes in a fresh browser context to ensure isolation. Locally, one worker per CPU core maximizes concurrency, but on some CI providers the default may be a single worker, so tests run sequentially.
For example, a 100-test suite with 30-second tests usually takes about 50 minutes on a single worker. Increasing the number of workers to four cuts the total runtime to around 12–15 minutes, assuming the CI machine has enough CPU and memory to support that level of parallelism without contention.
You can set the number of workers in your configuration file as follows:
import { defineConfig } from "@playwright/test";

export default defineConfig({
  workers: process.env.CI ? 4 : undefined,
});
Before increasing workers, it’s important to consider resource limits. Each worker launches a full browser instance, and pushing concurrency too far can lead to CPU contention, memory pressure, or even out-of-memory failures. In practice, parallelism should be sized based on available memory and CPU, especially when running multiple browser projects.
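As a rough way to size this, some teams compute the worker count from the machine's resources instead of hardcoding it. The helper below is a hypothetical sketch, not a Playwright API; it assumes roughly 2 GB of memory per browser worker and leaves one core free:

```typescript
// Hypothetical sizing helper: cap workers by CPU cores and by an assumed
// ~2 GB of memory per worker, whichever is smaller.
function pickWorkers(cpuCores: number, memoryGb: number): number {
  const byCpu = Math.max(1, cpuCores - 1); // leave one core for the CI agent
  const byMemory = Math.max(1, Math.floor(memoryGb / 2));
  return Math.min(byCpu, byMemory);
}

// In playwright.config.ts, you might then set:
//   workers: process.env.CI ? pickWorkers(os.cpus().length, 8) : undefined
```

The 2 GB-per-worker figure is an assumption; measure your own suite's memory footprint before relying on it.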
If you want Playwright to parallelize tests at the individual test level, enable fullyParallel in your configuration:
import { defineConfig } from "@playwright/test";

export default defineConfig({
  fullyParallel: true,
});
This allows tests in a single file to run concurrently rather than serially, reducing runtime for larger test files. However, fullyParallel assumes strict test isolation. Tests that share fixtures, rely on execution order, or mutate shared state can become flaky when run in parallel, so it should be enabled only when tests are written to run independently.
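If only a handful of files genuinely depend on execution order, you can keep fullyParallel enabled globally and opt those files back into serial execution with test.describe.configure. A sketch, with illustrative test names:

```typescript
import { test } from "@playwright/test";

// Run the tests in this file one after another, even with fullyParallel on.
test.describe.configure({ mode: "serial" });

test("creates the record", async ({ page }) => {
  // ...setup that the next test depends on
});

test("edits the record created above", async ({ page }) => {
  // ...
});
```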
For larger test suites, sharding distributes tests across multiple machines, each handling a portion of the suite. This complements fullyParallel by scaling concurrency beyond a single worker pool, allowing even larger suites to complete faster without overloading a single machine. Workers and sharding solve different scaling problems, and understanding when to use each is key to balancing concurrency and optimizing test execution.
Here's an example of running shards in Playwright:
npx playwright test --shard=1/4
npx playwright test --shard=2/4
npx playwright test --shard=3/4
npx playwright test --shard=4/4
With sharding, each shard runs independently with its own workers. It is important to balance shards carefully, as too many on less powerful machines can cause resource contention. As your suite grows, sharding amplifies concurrency gains and can significantly reduce total runtime.
In practice, this means sharding builds on worker-level parallelism by distributing tests across multiple machines. For instance, splitting the same suite into four shards, each running its own worker pool, can reduce the total runtime to 7–10 minutes, assuming the infrastructure can sustain the additional CPU and memory load.
For bigger test suites, sharding alone may still not fully solve slow pipelines, because the default distribution is not as efficient as it could be: Playwright splits test files based on the lexical order of file paths, which can lead to unbalanced workloads.
For that reason, Currents developed a Smart Test Orchestrator, which fixes this by balancing the load using historical execution data and continuously creating the most optimal execution order for test runs, resulting in up to a 40% reduction in CI execution time with minimal changes to configuration.
2. Avoid Unnecessary Page Reloads
Tests that reload pages take longer to run. Minimizing reloads helps avoid repeated authentication, extra navigation, and costly page transitions inside loops or shared test flows. Every page.goto() triggers a full page load, including DNS lookup, network round trips, JavaScript execution, and rendering.
Consider authentication tests. Most suites log in before every test. This involves navigating to the login page, submitting the form, and redirecting to the dashboard, which amounts to three page loads just for setup. Full reloads also discard in-memory page state, and because each test starts in a fresh browser context, cookies and local storage begin empty. While starting fresh is useful for isolation, it also increases runtime because each test must rebuild state from scratch.
Playwright’s strength lies in its auto-waiting and resilient locator model. Using stable locators (roles, test IDs) and relying on condition-based assertions typically yields larger performance gains than simply minimizing navigation.
To reduce unnecessary page loads, move shared navigation into beforeEach:
test.beforeEach(async ({ page, context }) => {
  // preAuthCookies: cookies captured from an earlier login (see tip 6)
  await context.addCookies(preAuthCookies);
  await page.goto("https://example.com/dashboard");
});
Note: Each test still triggers a minimal goto(), but this avoids repeating expensive operations such as full login flows or multi-step navigation within the test body.
For setups that don't require per-test isolation, you can also use beforeAll, but only for tasks like seeding data, loading fixtures, or preparing configuration, not for sharing a browser page instance.
test.beforeAll(async () => {
  await seedTestData();
});
When navigation is required, you can also optimize how the page loads. Playwright's waitUntil option controls when a navigation is considered complete. By default, it waits for the full load event, which means images, fonts, and all resources must be downloaded. If only the Document Object Model (DOM) is needed, switching to domcontentloaded speeds things up:
await page.goto("https://example.com", {
  waitUntil: "domcontentloaded",
});
Small differences in loading behavior matter. Tests can begin interacting with the DOM earlier, even while secondary resources are still loading. This reduces idle time without sacrificing reliability. It’s also important to avoid unnecessary navigation during test interactions. For example, clicking a link that navigates away and then returns creates an extra round trip. When possible, test the behavior directly instead of taking the full navigation path each time.
When testing Single Page Applications (SPAs), Playwright handles component re-rendering without full page reloads, so waiting for meaningful state changes (like a URL update or a specific element appearing) is more important than minimizing page.goto() calls:
await page.getByText("Next Step").click();
await page.waitForURL("**/step-2");
Minimizing page reloads and optimizing navigation, in both multi-page apps and SPAs, makes your test suite more efficient.
3. Run Tests Headless in CI
Headed browsers render windows and process visual updates, while headless browsers skip these steps. This reduces CPU and GPU overhead, making headless mode typically 10–30% faster than headed mode, depending on the test and environment.
Headless mode is especially useful in CI environments because it runs consistently without requiring virtual displays, simplifies setup, and prevents rendering issues. Playwright runs in headless mode by default in CI. If your project has overridden the default headless setting, you can switch it back to headless mode:
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: {
    headless: true,
  },
});
Or you can configure it per environment, keeping headed mode locally for debugging:
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: {
    headless: !!process.env.CI,
  },
});
Headless mode often uses less memory than headed mode, though actual usage depends on the browser, enabled features (such as video or tracing), and the underlying CI machine. Running multiple workers in headless mode typically consumes fewer resources overall, thereby improving CI performance and stability.
Modern browsers render very similarly in both modes. Chromium, Firefox, and WebKit all maintain parity in functional testing, and differences appear only in GPU-accelerated animations or Canvas operations. Debugging headless test failures used to be more challenging, but Playwright now provides tools to help. Trace viewer, screenshots, and video capture all work in both headed and headless modes.
Tests that require visual verification, such as screenshot comparisons or animation timing, may benefit from headed mode. With proper configuration, including fixed viewport sizes and deviceScaleFactor, many visual tests can still run reliably in headless mode. Decide whether a test belongs in a fast functional suite or a slower visual regression suite.
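One way to express that split is a project per suite. The configuration below is a sketch; the project names and the *.visual.spec.ts file convention are assumptions:

```typescript
import { defineConfig, devices } from "@playwright/test";

export default defineConfig({
  projects: [
    {
      // Fast functional suite: headless, visual tests excluded
      name: "functional",
      testIgnore: /.*\.visual\.spec\.ts/,
      use: { ...devices["Desktop Chrome"], headless: true },
    },
    {
      // Slower visual suite: fixed rendering parameters for stable screenshots
      name: "visual",
      testMatch: /.*\.visual\.spec\.ts/,
      use: {
        ...devices["Desktop Chrome"],
        viewport: { width: 1280, height: 720 },
        deviceScaleFactor: 1,
      },
    },
  ],
});
```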
4. Fail-Fast Strategy
Without fail-fast, the first test fails and the remaining 199 keep running, extending the overall suite runtime even though the outcome is already known. For example, a single failing login step could cause every checkout test to run unnecessarily. This is why fail-fast is especially valuable in pull request pipelines, where quick feedback is critical.
Fail-fast stops execution when failures are detected. Once a test fails, the remaining tests are skipped, providing immediate feedback without waiting for the full suite to finish. Playwright implements this behavior through the maxFailures configuration, which stops the run after a specified number of failures:
import { defineConfig } from "@playwright/test";

export default defineConfig({
  maxFailures: process.env.CI ? 10 : undefined,
});
Or via command line:
npx playwright test --max-failures=10
The exact number depends on your risk tolerance. Some teams set it to 1, stopping after the first failure, while others use 5–10 to identify multiple issues before halting.
In Currents, the fail-fast strategy extends this behavior across shards and machines, so when a test fails on any shard, the run is marked as cancelled. New test requests fail, preventing workers from picking up additional tests. Without fail-fast across shards, one machine may continue running tests while another has already detected failures, which delays feedback.
To enable this in Currents, configure it via the dashboard or use CLI flags:
npx pwc --pwc-cancel-after-failures 1
There is a tradeoff in visibility when using fail-fast. While it reduces wasted compute by stopping tests after the first failure, it also hides additional failures, so you may need to fix and rerun to uncover other issues. Fail-fast can be risky with flaky tests, as a transient failure may cause the suite to stop prematurely, giving a misleading sense of stability. Address flakiness before enabling fail-fast to avoid repeated false alarms. These limitations are especially important for unstable or growing test suites.
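One way to soften that tradeoff is to pair maxFailures with retries, so a transient failure gets another attempt before counting toward the budget; in Playwright, a test that passes on retry is reported as flaky rather than failed. The values below are illustrative:

```typescript
import { defineConfig } from "@playwright/test";

export default defineConfig({
  retries: process.env.CI ? 2 : 0, // give transient failures a second chance
  maxFailures: process.env.CI ? 10 : undefined, // then stop after real failures
});
```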
For stable suites, fail-fast helps detect breaking changes quickly, giving developers faster feedback and saving CI resources. Teams often run fail-fast in pull request pipelines for quick validation, while full suites run in nightly builds. It is not a complete solution but a tool for stable suites where early failures indicate systemic issues. Use it carefully to balance speed and visibility.
5. Run Only Changed Tests
Running every test on every commit can slow feedback. Updating a checkout component shouldn’t trigger 400 unrelated dashboard tests. Most code changes affect a small subset of tests, such as a pricing page update that doesn’t impact login tests. To address this, test-impact analysis identifies which tests need to run based on changed files. Playwright v1.46 supports this via the --only-changed flag.
Run it like this:
npx playwright test --only-changed
By default, this compares your working tree against HEAD and runs test files with changes, including tests that statically import changed files. For example, updating a utility function automatically runs all tests that use it. This makes the feature useful for narrowing down affected tests during active development.
For feature branches, you can compare against your base branch:
npx playwright test --only-changed=main
This runs only tests affected by changes between your branch and main, which is especially useful during pull request reviews. A 500-test suite might only need 50 tests for a focused update, significantly reducing runtime.
In GitHub Actions, you can integrate it directly into your workflow:
- name: Run changed tests
  run: npx playwright test --only-changed=origin/${{ github.base_ref }}
However, this mechanism relies on static file dependency analysis, and it has limitations. Playwright monitors direct file imports but doesn’t track indirect dependencies, runtime imports, environment-driven behavior, or backend-driven UI changes. Changes to configuration files, environment variables, generated code, or shared services may not trigger related tests.
Many teams assume --only-changed understands coverage or DOM-level impact. It does not; the selection is based purely on file changes and static imports. In complex projects such as monorepos, shared helper libraries, or highly abstracted test setups, this can lead to missed test coverage and a false sense of confidence.
To use it safely, consider a hybrid approach. Run changed tests during pull requests for faster feedback, but schedule full suites on merges or nightly builds to catch missed dependencies. Understanding these limitations helps teams balance speed with test coverage and avoid the pitfalls of overconfidence.
6. Persistent Authentication
Authentication takes time. To log in, tests must navigate to the login page, enter credentials, submit the form, wait for the redirect, and confirm the login. This process typically takes 5–15 seconds per test. Across a 100-test suite, authentication alone can consume 8–25 minutes. Since authentication must be repeated for every test without persistent state, it becomes a major contributor to suite runtime.
Persistent authentication solves this by using Playwright’s storageState. You log in once and save the browser state, then load that state before each test so they begin in an authenticated session without repeating the login flow. The saved state includes:
- Cookies
- Local storage
- Session tokens
Create a setup file that logs in and saves state:
// auth.setup.ts
import { test as setup } from "@playwright/test";

const authFile = "playwright/.auth/user.json";

setup("authenticate", async ({ page }) => {
  await page.goto("https://example.com/login");
  await page.fill("#email", "test@example.com");
  await page.fill("#password", "password");
  await page.click('button[type="submit"]');
  await page.waitForURL("https://example.com/dashboard");
  await page.context().storageState({ path: authFile });
});
Then configure your tests to use the saved state:
import { defineConfig, devices } from "@playwright/test";

export default defineConfig({
  projects: [
    { name: "setup", testMatch: /.*\.setup\.ts/ },
    {
      name: "chromium",
      use: {
        ...devices["Desktop Chrome"],
        storageState: "playwright/.auth/user.json",
      },
      dependencies: ["setup"],
    },
  ],
});
The dependencies array ensures setup runs first, so tests start already authenticated. This reduces authentication time from 25 minutes to just 10 seconds for a 100-test suite. It also works for multiple user roles. Separate setup files can be created for admin, editor, and viewer, with tests specifying the role they need.
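A sketch of how a test file might opt into one of those role-specific states, assuming a separate setup project has already written playwright/.auth/admin.json the same way auth.setup.ts writes user.json:

```typescript
// admin.spec.ts
import { test } from "@playwright/test";

// Start every test in this file with the saved admin session.
test.use({ storageState: "playwright/.auth/admin.json" });

test("admin can open user management", async ({ page }) => {
  await page.goto("https://example.com/admin/users");
  // ...assertions against admin-only UI
});
```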
In addition to cookies and local storage, some applications rely on session storage. Session storage is not included in Playwright’s storageState by default. If your application depends on it, you must manually capture and restore it.
To safely capture session storage, extract it key by key:
const sessionData = await page.evaluate(() => {
  const data: Record<string, string | null> = {};
  for (let i = 0; i < window.sessionStorage.length; i++) {
    const key = window.sessionStorage.key(i)!;
    data[key] = window.sessionStorage.getItem(key);
  }
  return data;
});
// Persist it for later runs, e.g.:
// fs.writeFileSync("session.json", JSON.stringify(sessionData));
Restore it with addInitScript so it is available before page scripts run:
import * as fs from "fs";

const sessionStorage = JSON.parse(fs.readFileSync("session.json", "utf-8"));
await page.addInitScript((storage: Record<string, string>) => {
  for (const [key, value] of Object.entries(storage)) {
    window.sessionStorage.setItem(key, value);
  }
}, sessionStorage);
This approach should be used with care. Session storage is often populated dynamically after page load or during client-side hydration, and restoring it manually may not fully replicate the application’s authentication flow. If session storage is tightly coupled to runtime logic, API responses, or short-lived tokens, persistent authentication may still require partial revalidation or targeted setup steps.
Authentication state expires as tokens timeout and session cookies clear, so regenerate state files periodically or whenever tests start failing due to authentication errors. While persistent authentication significantly improves speed, it also bypasses login and authorization flows. Many senior teams pair it with a small number of dedicated authentication tests to ensure that login behavior and permission boundaries remain validated.
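A simple way to decide when to regenerate is to check the state file's age before the setup project logs in. isStateFresh is a hypothetical helper, and the staleness threshold is an assumption:

```typescript
import * as fs from "fs";

// Treat a saved storage state as stale once it is older than maxAgeMs,
// so the setup step can decide whether to log in again.
function isStateFresh(path: string, maxAgeMs: number): boolean {
  try {
    return Date.now() - fs.statSync(path).mtimeMs < maxAgeMs;
  } catch {
    return false; // missing or unreadable file: regenerate
  }
}

// In auth.setup.ts you might then skip the login flow entirely:
//   if (isStateFresh(authFile, 60 * 60 * 1000)) return;
```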
By using persistent authentication and understanding sessionStorage limitations, tests no longer need repeated login logic and can start directly with the behavior being tested. This makes the test code simpler and easier to maintain.
7. Block Unnecessary Resources
Every page loads dozens of resources, including images, fonts, CSS files, and ad networks, most of which add no value during testing. Resource blocking is a performance optimization that reduces network and rendering overhead. Blocking images, analytics, or third-party scripts is generally safe, but CSS and fonts can affect layout, visibility checks, and text rendering. Some frameworks even lazy-load JavaScript dependent on CSS, making blocking risky.
Playwright's page.route() lets you intercept network requests and decide what loads:
await page.route("**/*", (route) => {
  const type = route.request().resourceType();
  // Note: blocking stylesheets is the most aggressive option here; it can
  // break layout-dependent visibility checks (see the caveat above)
  if (["image", "stylesheet", "font"].includes(type)) {
    route.abort();
  } else {
    route.continue();
  }
});
Start conservatively by blocking images and fonts first, then expand gradually. Common third-party resources like analytics, ads, and tracking pixels are generally safe to block and can speed up tests without affecting core behavior:
await page.route("**/*", (route) => {
  const url = route.request().url();
  const blockedDomains = [
    "google-analytics.com",
    "googletagmanager.com",
    "doubleclick.net",
  ];
  if (blockedDomains.some((domain) => url.includes(domain))) {
    route.abort();
  } else {
    route.continue();
  }
});
Some teams go further by blocking all external domains except their own. This works best when configured globally at the project or fixture level rather than inside individual tests, keeping behavior consistent:
await page.route("**/*", (route) => {
  const url = route.request().url();
  if (!url.includes("yourdomain.com")) {
    route.abort();
  } else {
    route.continue();
  }
});
Blocking everything outside your domain can work in isolated environments but risks removing external dependencies required for full functionality. Applications that rely on service workers, client-side caching, or externally hosted scripts may behave differently when requests are blocked, leading to false confidence if not accounted for.
To avoid breakage, pair resource blocking with other performance optimizations, such as persistent authentication and headless mode, and expand what you block incrementally. Applying routing rules at a shared fixture or project level helps avoid inconsistencies between tests.
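One way to centralize the rules is a shared fixture that overrides page, so every test importing it inherits the same blocking behavior. This is a sketch; the domain list is illustrative:

```typescript
// fixtures.ts
import { test as base } from "@playwright/test";

export const test = base.extend({
  page: async ({ page }, use) => {
    const blockedDomains = ["google-analytics.com", "doubleclick.net"];
    await page.route("**/*", (route) => {
      const url = route.request().url();
      return blockedDomains.some((domain) => url.includes(domain))
        ? route.abort()
        : route.continue();
    });
    await use(page); // tests receive the page with routing already applied
  },
});
```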
One important exception is visual testing. Screenshot comparisons need images, fonts, and CSS to render accurately, so resource blocking should be disabled for these scenarios. Many teams solve this by using a separate Playwright project dedicated to visual tests. Even with this separation, maintaining speed still requires ongoing optimization as tests grow and change.
Ongoing Optimization
You implement the tips, tests improve, and the team celebrates. But after a few months, your CI pipeline starts getting clogged.
Why does this happen? Speed can drop when test suites grow without oversight. As new tests are added, slower patterns can spread, and a single test may drift from five seconds to 15 without notice. This highlights one thing: optimization isn’t a one-time task and requires continuous attention.
So, how do teams keep tests fast? They consistently track three key metrics:
- Average test duration: Are tests getting slower over time? If your average test time rises from 12 seconds to 18 seconds over three months, the suite is gaining weight. Individual tests might seem fine, but the overall picture tells a different story.
- Flakiness rate: How often do tests fail randomly? A 5% flakiness rate means one in twenty tests fails without warning, leading to reruns and wasted CI time.
- CI resource usage: How much CPU, memory, and network does each test consume? A test using 200MB today might use 600MB tomorrow. Multiply this by the number of parallel workers, and your pipelines slow down.
Without tracking these metrics, issues accumulate, and teams often notice only when pipelines become inefficient. By then, fixes can be time-consuming and frustrating. Manual tracking with spreadsheets doesn’t scale, making monitoring tools necessary.
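Short of a dedicated platform, even a small duration check inside a custom reporter can surface outliers on every run. The sketch below loosens the types so it stands alone; a real reporter would implement the Reporter interface from "@playwright/test/reporter", and the 15-second budget is an assumption:

```typescript
class SlowTestBudget {
  constructor(private budgetMs = 15_000) {} // assumed per-test budget

  // Called from a reporter's onTestEnd hook; returns true (and warns)
  // when a finished test exceeds the budget.
  onTestEnd(test: { title: string }, result: { duration: number }): boolean {
    const slow = result.duration > this.budgetMs;
    if (slow) console.warn(`SLOW (${result.duration}ms): ${test.title}`);
    return slow;
  }
}
```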
Currents helps solve this gap by automatically tracking your key metrics and feeding every test run into dashboards, making slow tests and flaky patterns easy to spot as they occur. New outliers are visible before merging, giving teams the chance to optimize early. Currents integrates directly into pull requests, allowing developers to see runtime impacts, quarantine flaky tests, and prevent CI pipelines from building up delays.
Wrap Up
Slow tests disrupt workflow. Developers lose focus during long CI waits, pull requests pile up, and features take longer to ship. A 50-minute suite running five times daily across ten developers consumes 42 hours of CI time. Applying parallelism, headless mode, and fail-fast strategies can cut that significantly, and steady improvements add up over time.
But should you aim for perfect tests? Not really. Flakiness will never hit zero, and tests will occasionally slow down. What matters most is visibility: knowing when performance degrades allows teams to act early, before minor issues turn into chronic slowdowns.
What separates fast teams from the rest is how they treat speed. They see it as a core metric, review test health regularly, and flag regressions before they affect the pipeline. Teams that skip this step often fall into reactive cycles, where pipelines slow, fixes are rushed, and productivity drops.
Instead of trying to overhaul everything at once, pick one improvement, apply it, measure the impact, and repeat. Over time, this disciplined approach keeps test suites fast, reliable, and sustainable.
Join hundreds of teams using Currents.
Trademarks and logos mentioned in this text belong to their respective owners.


