Currents Team

Introducing Currents MCP Server

Connect your AI tools to Currents using MCP


Today, we’re excited to share Currents MCP server 2.0. You can use this server to connect agents like Cursor, Claude, or VS Code directly with Currents.


What is MCP?

MCP stands for Model Context Protocol. It's an open protocol, introduced by Anthropic, that provides a consistent way for systems to expose tools and resources to AI models.

In our case, Currents MCP server acts as a context layer for any tool that can leverage information about a run, such as the spec list, failed tests, errors, and more.
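Under the hood, MCP frames tool invocations as JSON-RPC 2.0 requests. As a rough sketch of what an agent sends when it asks the Currents MCP server for run context (the `runId` argument name here is illustrative, not the exact tool schema):

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 "tools/call" request, as defined by MCP."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# An agent asking the Currents MCP server for details of one run.
request = make_tool_call(1, "currents-get-run-details", {"runId": "abc123"})
print(json.dumps(request))
```

The agent's MCP client handles this framing for you; the point is only that each tool below is addressable by name with structured arguments.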


Currents MCP

The first version of our MCP server was intentionally simple, but it came with some friction: agents needed a runId to fetch data. In v2, we’ve removed that limitation. Agents can now access the full execution history of your tests on their own, giving them the context they need without extra input.

Tools

Currents MCP server 2.0 exposes several tools:

  • currents-get-projects: Retrieves a list of all available projects.
  • currents-get-run-details: Retrieves details of a specific test run.
  • currents-get-spec-instances: Retrieves execution results of a spec file.
  • currents-get-spec-files-performance: Retrieves historical performance metrics for spec files in a specific project.
  • currents-get-tests-performance: Retrieves historical test performance metrics for a specific project.
  • currents-get-tests-signatures: Returns a test signature, filtered by spec file name and test name, so an agent can find the results of a specific test.
  • currents-get-test-results: Retrieves test results for a test, filtered by test signature.
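The last two tools work as a pair: an agent first resolves a test signature, then fetches results keyed by it. A minimal Python sketch of that flow, with stubbed responses standing in for real MCP calls (function names, signature format, and fields are illustrative, not the exact schema):

```python
def get_tests_signatures(spec_file, test_name):
    # Stub for the currents-get-tests-signatures tool; a real call goes
    # through the MCP server. The returned value is purely illustrative.
    return f"{spec_file}::{test_name}"

def get_test_results(signature):
    # Stub for currents-get-test-results, which is filtered by signature.
    return [{"signature": signature, "status": "failed"}]

signature = get_tests_signatures("checkout.spec.ts", "applies discount code")
results = get_test_results(signature)
print(results[0]["signature"])
```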

Here are some examples of AI prompts:

  • 🔍 “Please fix this test” → It pulls the last execution data automatically.
  • 🐞 “What were the top flaky tests in the last 30 days?” → It finds and analyzes them across projects.
  • ⚡ “What were the slowest specs this week?” → It retrieves performance metrics by itself.
  • 🧪 “Please fix all my flaky tests” → It investigates, creates a plan, and suggests fixes.

Vibe-fix failing tests 😜

Read our Setup Guide to get started with Currents MCP.
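The Setup Guide has the exact instructions; as a rough sketch, most MCP clients register a server with a JSON entry along these lines (the package name and environment variable shown are assumptions for illustration, so check the guide for the real values):

```json
{
  "mcpServers": {
    "currents": {
      "command": "npx",
      "args": ["-y", "@currents/mcp"],
      "env": {
        "CURRENTS_API_KEY": "<your-api-key>"
      }
    }
  }
}
```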

Here is what the workflow looks like:

  1. A test run fails in CI.
  2. You ask an AI agent to help you fix the failing tests.
  3. Currents exposes details about the run and the failed tests via the MCP server.
  4. The AI agent consumes that context, analyzes historical data and the failure reason, and proposes a code change.
  5. You can then trigger a new test run and ask the AI agent to verify the fix.
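The steps above can be sketched as a simple loop. Everything here is a hypothetical stand-in for MCP tool calls and agent actions, not a real API:

```python
def agent_propose_fix(test):
    # Stand-in for the AI agent: in reality it consumes MCP context
    # (errors, history) and proposes a code change (steps 3-4).
    return {"test": test["name"], "patch": "..."}

def apply_fix(fix):
    pass  # stand-in: apply the proposed code change to the repo

def trigger_run(tests):
    # Stand-in for re-running the suite after the fix (step 5).
    return [{**t, "status": "passed"} for t in tests]

def fix_failing_tests(tests):
    failed = [t for t in tests if t["status"] == "failed"]
    for test in failed:
        apply_fix(agent_propose_fix(test))
    return trigger_run(tests)  # verify with a fresh run

result = fix_failing_tests([{"name": "login", "status": "failed"}])
print(result[0]["status"])
```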

This cycle drastically cuts down on the time developers spend manually digging through logs to figure out what broke.

What’s Next

This is just the beginning. Here’s what we’re exploring next:

  • 🔄 Allowing the AI to re-run failed tests for you.
  • 🤖 Bi-directional feedback loops (MCP automatically validates that a fix worked).

If there’s something else you’d love to see, let us know — we’re building this in the open, and your feedback shapes the roadmap.



Trademarks and logos mentioned in this text belong to their respective owners.
