AI Skill: Playwright Best Practices
Empower your AI Agents to write, debug, and maintain Playwright tests with expert knowledge.

AI agents are becoming a core part of how developers build software. They write code, debug issues, and ship features alongside us. But when it comes to testing, generic AI assistance often falls short—agents rely on outdated patterns, miss framework-specific nuances, and produce tests that are flaky by design.
To get the most out of agents, you need to provide them with expertise.
Today, we're releasing the Playwright Best Practices Skill—a skill that gives AI agents the specialized knowledge they need to help you write, debug, and maintain production-ready Playwright tests.
```bash
npx skills add https://github.com/currents-dev/playwright-best-practices-skill
```
What are Agent Skills?
Agent Skills are a new open standard for providing expertise to agents without bloating the context window. Created by Anthropic, skills are now available in all major AI development tools, including Claude Code, Cursor, VS Code, Google Gemini, and more.
At its simplest, a skill is a directory containing a SKILL.md file with metadata and expert knowledge that tells an agent how to perform a task in an opinionated way.
```
playwright-best-practices-skill/
├── SKILL.md           # Instructions + metadata
└── references/        # Topic-specific documentation
    ├── locators.md
    ├── assertions-waiting.md
    ├── debugging.md
    └── ...
```
Skills are progressively disclosed to preserve context. When an agent starts, only the skill's name and description are loaded. When a task matches the skill's purpose, the agent reads the full instructions and pulls in relevant references as needed.
This means the agent gets precise, expert knowledge exactly when it's relevant—without loading everything at once.
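For illustration, a SKILL.md starts with a small YAML frontmatter block (name and description) that the agent can scan cheaply; the instructions and pointers to the reference files follow in the body. The snippet below is a simplified sketch, not the actual contents of this skill's file:

```markdown
---
name: playwright-best-practices
description: Expert guidance for writing, debugging, and maintaining Playwright tests in TypeScript.
---

# Playwright Best Practices

When a task involves Playwright, read the matching file in ./references/
(for example locators.md or assertions-waiting.md) before writing or
changing tests.
```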
Introducing the Playwright Best Practices Skill
The Playwright Best Practices Skill gives AI agents specialized guidance for writing, debugging, and maintaining Playwright tests in TypeScript. It's designed for any repo where you work with Playwright—whether you're writing E2E, component, API, visual regression, accessibility, security, Electron, or browser extension tests.
What it covers:
- Writing tests — Structure, locators, assertions, waiting strategies, Page Object Model, fixtures, test data, annotations
- Debugging & troubleshooting — Trace viewer, flaky tests, selectors, timeouts, race conditions, console errors
- Specialized testing — Accessibility (axe-core), mobile/responsive, component testing, iframes, canvas/WebGL, service workers/PWA, i18n/localization, Electron apps, browser extensions
- Browser APIs & real-time — WebSockets, geolocation, permissions, clipboard, camera/microphone, multi-tab/popup flows, OAuth
- Error & edge cases — Error boundaries, offline mode, network failures, form validation
- Multi-user scenarios — Collaboration testing, role-based access, concurrent actions
- Security & performance — XSS, CSRF, auth security, Web Vitals, Lighthouse, performance budgets
- Infrastructure — Project config, CI/CD, parallel execution, sharding, global setup/teardown, test coverage
- Advanced patterns — GraphQL mocking, HAR recording, third-party services (payments, email/SMS)
The skill is activity-based: the AI is directed to the right reference depending on what you're doing, so you get focused advice without loading everything into context.
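For example, when a task involves writing a test, the locators and assertions references push the agent toward user-facing, role-based locators and web-first assertions instead of brittle CSS selectors and fixed waits. A minimal sketch of a spec written that way (the /login route and field labels are placeholders for your own app):

```typescript
import { test, expect } from '@playwright/test';

test('user can sign in', async ({ page }) => {
  await page.goto('/login');

  // Role- and label-based locators survive markup changes better than CSS selectors.
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('s3cret-password');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Web-first assertion: auto-retries until the heading appears,
  // no page.waitForTimeout() needed.
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```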
Core Testing
| Topic | Reference | Use for |
|---|---|---|
| Debugging | debugging.md | Trace viewer, inspector, common issues |
| Flaky tests | flaky-tests.md | Detection, diagnosis, fixing, quarantine |
| Test organization | test-organization.md | Structure, config, E2E/component/API/visual tests |
| Locators | locators.md | Selectors, robustness, avoiding brittle locators |
| Assertions & waiting | assertions-waiting.md | Expect APIs, auto-waiting, polling |
| Page Object Model | page-object-model.md | POM structure and patterns |
| Fixtures & hooks | fixtures-hooks.md | Setup, teardown, auth, custom fixtures |
| Test data | test-data.md | Factories, Faker, data-driven testing |
| Annotations | annotations.md | skip, fixme, slow, test steps |
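To give a sense of what the fixtures and Page Object Model references describe, here is a small sketch of a custom fixture that hands each test a ready-to-use page object. The TodoPage class, its locators, and the /todos route are hypothetical stand-ins for your own app.

```typescript
import { test as base, expect, type Page } from '@playwright/test';

// Hypothetical page object; page-object-model.md covers structuring these.
class TodoPage {
  constructor(private readonly page: Page) {}

  async goto() {
    await this.page.goto('/todos');
  }

  async addTodo(text: string) {
    await this.page.getByPlaceholder('What needs to be done?').fill(text);
    await this.page.keyboard.press('Enter');
  }
}

// Custom fixture: any test that asks for `todoPage` gets a navigated instance.
const test = base.extend<{ todoPage: TodoPage }>({
  todoPage: async ({ page }, use) => {
    const todoPage = new TodoPage(page);
    await todoPage.goto();
    await use(todoPage);
  },
});

test('adds a todo item', async ({ todoPage, page }) => {
  await todoPage.addTodo('Write Playwright tests');
  await expect(page.getByText('Write Playwright tests')).toBeVisible();
});
```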
Specialized Testing
| Topic | Reference | Use for |
|---|---|---|
| Accessibility | accessibility.md | Axe-core, keyboard nav, ARIA, focus management |
| Mobile testing | mobile-testing.md | Device emulation, touch gestures, viewports |
| Component testing | component-testing.md | CT setup, mounting, props, mocking |
| File operations | file-operations.md | Upload, download, drag-and-drop |
| Clock mocking | clock-mocking.md | Date/time mocking, timezones, timers |
| WebSockets | websockets.md | Real-time testing, SSE, reconnection |
| Browser APIs | browser-apis.md | Geolocation, permissions, clipboard, camera |
| Multi-context | multi-context.md | Popups, new tabs, OAuth flows |
| Multi-user | multi-user.md | Collaboration, RBAC, concurrent actions |
| iFrames | iframes.md | Cross-origin, nested, dynamic iframes |
| Canvas/WebGL | canvas-webgl.md | Canvas testing, charts, WebGL, games |
| Service workers | service-workers.md | PWA, caching, offline, push notifications |
| i18n | i18n.md | Locales, RTL, date/number formats |
| Electron | electron.md | Desktop apps, IPC, main/renderer process |
| Browser extensions | browser-extensions.md | Popup, background, content scripts, APIs |
| Error testing | error-testing.md | Error boundaries, offline, network failures |
| Security testing | security-testing.md | XSS, CSRF, auth security, authorization |
| Performance testing | performance-testing.md | Web Vitals, budgets, Lighthouse |
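As one example from this group, the accessibility reference builds on @axe-core/playwright. A minimal sketch of an axe scan, assuming that package is installed and that /dashboard is a real route in your app:

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('dashboard has no detectable accessibility violations', async ({ page }) => {
  await page.goto('/dashboard');

  // Run an axe-core scan, limited here to WCAG 2.0 A/AA rules.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa'])
    .analyze();

  // Fail the test if any violations were reported.
  expect(results.violations).toEqual([]);
});
```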
Infrastructure & Advanced
| Topic | Reference | Use for |
|---|---|---|
| CI/CD | ci-cd.md | Pipelines, sharding, Docker |
| Performance | performance.md | Parallel runs, optimization |
| Global setup | global-setup.md | globalSetup/globalTeardown, DB migrations |
| Projects | projects-dependencies.md | Project config, dependencies, filtering |
| Test coverage | test-coverage.md | V8 coverage, reports, thresholds, CI |
| Network advanced | network-advanced.md | GraphQL, HAR, request modification |
| Third-party | third-party.md | OAuth, payments, email/SMS mocking |
| Console errors | console-errors.md | Capturing and failing on JS errors |
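As a final illustration, the console-errors reference is about capturing JavaScript errors and failing tests on them. One common pattern (a sketch, not necessarily the skill's exact recipe) is an auto fixture that collects uncaught page errors and asserts the list is empty after each test:

```typescript
import { test as base, expect } from '@playwright/test';

const test = base.extend<{ pageErrors: Error[] }>({
  pageErrors: [
    async ({ page }, use) => {
      // Collect every uncaught exception thrown in the page.
      const errors: Error[] = [];
      page.on('pageerror', (error) => errors.push(error));

      await use(errors);

      // After the test body finishes, fail if anything was thrown.
      expect(errors, 'uncaught page errors').toEqual([]);
    },
    { auto: true }, // applies to every test without being requested explicitly
  ],
});

test('home page loads without JavaScript errors', async ({ page }) => {
  await page.goto('/'); // placeholder route; assumes baseURL is configured
});
```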
When the Skill Is Used
The skill triggers automatically when the AI infers you need help with Playwright-related tasks. You don't have to mention "skill" or "Playwright best practices"—just describe your task and the AI will use the skill when it's relevant.
Example prompts:
- "Fix this flaky login test" → The agent pulls in debugging and assertions guidance
- "Add a test for the checkout flow" → The agent uses test organization and locator best practices
- "Refactor these tests to use Page Object Model" → The agent references POM patterns and structure
- "Why is this test timing out in CI?" → The agent consults debugging and CI/CD references
- "Set up parallel execution for our test suite" → The agent uses performance and CI/CD guidance
- "Add accessibility tests for the dashboard" → The agent uses axe-core and keyboard navigation guidance
- "Test the mobile layout and touch gestures" → The agent references device emulation and touch patterns
- "Mock the payment gateway in tests" → The agent uses third-party service mocking patterns
- "Test the real-time collaboration feature" → The agent references multi-user and WebSocket testing
The skill activates for tasks like:
- Writing new E2E, component, API, visual regression, or accessibility tests
- Testing mobile/responsive layouts, touch gestures, or device emulation
- Implementing file uploads/downloads, date/time mocking, or WebSocket testing
- Handling OAuth popups, geolocation, permissions, or multi-tab flows
- Testing iframes, canvas/WebGL, service workers, or PWA features
- Testing Electron desktop apps or browser extensions
- Internationalization (i18n), locales, RTL layouts, or date/number formats
- Testing error states, offline mode, or network failure scenarios
- Security testing (XSS, CSRF, authentication, authorization)
- Performance testing with Web Vitals or Lighthouse
- Reviewing or refactoring Playwright test code
- Fixing flaky tests or debugging failures
- Setting up CI/CD, test coverage, or global setup/teardown
- Configuring projects, dependencies, parallel runs, or sharding
Get Started
Install the skill and start building better Playwright tests:
```bash
npx skills add https://github.com/currents-dev/playwright-best-practices-skill
```
After installing, the AI will automatically use the skill when your questions or tasks involve Playwright—no manual configuration required.
The skill is open source and we're updating it regularly to keep it aligned with the latest Playwright best practices. If you have feedback or suggestions, open an issue on GitHub—we're building this in the open, and your input shapes what comes next.
Join hundreds of teams using Currents.
Trademarks and logos mentioned in this text belong to their respective owners.


