Introducing Currents MCP Server
Connect your AI tools to Currents using MCP
Today, we’re excited to share Currents MCP server 2.0. You can use this server to connect AI agents like Cursor, Claude, or VS Code directly with Currents.
What is MCP?
MCP stands for Model Context Protocol. It's an open protocol, introduced by Anthropic, that provides a consistent way for systems to expose tools and resources to AI models.
In our case, Currents MCP server acts as a context layer for any tool that can leverage information about a run, such as the spec list, failed tests, errors, and more.
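In practice, an MCP-capable agent opens a session with a server, discovers the tools it exposes, and calls them as needed. Here is a minimal sketch of that handshake using the official TypeScript SDK; the package name `@currents/mcp` and the `CURRENTS_API_KEY` variable are assumptions here, so check the Setup Guide below for the exact values.

```ts
// A minimal MCP client session using the official TypeScript SDK
// (@modelcontextprotocol/sdk). The "@currents/mcp" package name and
// the CURRENTS_API_KEY variable are assumptions; see the Setup Guide.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the Currents MCP server as a local subprocess over stdio.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@currents/mcp"],
  env: { CURRENTS_API_KEY: process.env.CURRENTS_API_KEY ?? "" },
});

const client = new Client({ name: "example-agent", version: "1.0.0" });
await client.connect(transport);

// Discover what the server exposes. Agents do this step automatically.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```

Agents like Cursor or Claude perform this discovery on their own once the server is registered in their MCP configuration.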
Currents MCP
The first version of our MCP server was intentionally simple, but it came with some friction: agents needed a runId to fetch data. In v2, we’ve removed that limitation. Agents can now access the full execution history of your tests on their own, giving them the context they need without extra input.
Tools
Currents MCP server 2.0 exposes several tools:
| Tool | Description |
|---|---|
| currents-get-projects | Retrieves a list of all available projects. |
| currents-get-run-details | Retrieves the details of a specific test run. |
| currents-get-spec-instances | Retrieves the execution results of a spec file. |
| currents-get-spec-files-performance | Retrieves historical performance metrics for a project's spec files. |
| currents-get-tests-performance | Retrieves historical performance metrics for a project's tests. |
| currents-get-tests-signatures | Returns a Test Signature, filtered by spec file name and test name. Lets an agent find the results of a specific test. |
| currents-get-test-results | Retrieves the results of a test, filtered by Test Signature. |
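Because v2 no longer requires a `runId` up front, an agent can chain these tools to locate a test's history on its own. The sketch below continues the client session from the earlier example; the argument shapes are assumptions inferred from the tool descriptions, since the authoritative schemas are the ones the server advertises via `listTools()`.

```ts
// Hypothetical argument shapes, inferred from the tool descriptions.
// The real input schemas come from the server via listTools().
const projects = await client.callTool({
  name: "currents-get-projects",
  arguments: {},
});

// Resolve a test to its Test Signature by spec file and test name...
const signatures = await client.callTool({
  name: "currents-get-tests-signatures",
  arguments: { specFileName: "login.spec.ts", testName: "logs in" },
});

// ...then fetch its recent results by that signature.
const results = await client.callTool({
  name: "currents-get-test-results",
  arguments: { signature: "<test-signature>" },
});
```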
Here are some examples of AI prompts (a tool-call sketch follows the list):
- 🔍 “Please fix this test” → It pulls the last execution data automatically.
- 🐞 “What were the top flaky tests in the last 30 days?” → It finds and analyzes them across projects.
- ⚡ “What were the slowest specs this week?” → It retrieves performance metrics by itself.
- 🧪 “Please fix all my flaky tests” → It investigates, creates a plan, and suggests fixes.
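Prompts like the flakiness and slowness queries above map onto the performance tools. A short sketch, again with assumed argument names:

```ts
// Assumed arguments: a project ID obtained from currents-get-projects.
// The server's advertised schema is authoritative.
const specPerf = await client.callTool({
  name: "currents-get-spec-files-performance",
  arguments: { projectId: "<project-id>" },
});
```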
Vibe-fix failing tests 😜
Read our Setup Guide to get started with Currents MCP.
Here is what the workflow looks like:
- A test run fails in CI.
- You ask an AI agent to help you fix the failing tests.
- Currents exposes details about the run and the failed tests via the MCP server.
- The AI agent consumes that context, analyzes historical data and the failure reason, and proposes a code change.
- You can then trigger a new test run and ask the AI agent to verify the fix (see the sketch after this list).
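The verification step can also go through the MCP server: once the new run finishes, the agent pulls its details and checks that the previously failing tests now pass. A sketch, assuming the fresh run's ID is available from CI:

```ts
// Fetch details for the fresh run and inspect it for failures.
// The runId value is a placeholder; the result arrives as MCP content
// blocks that the agent reads as text.
const run = await client.callTool({
  name: "currents-get-run-details",
  arguments: { runId: "<new-run-id>" },
});
```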
This cycle drastically cuts down on the time developers spend manually digging through logs to figure out what broke.
What’s Next
This is just the beginning. Here’s what we’re exploring next:
- 🔄 Allowing the AI to re-run failed tests for you.
- 🤖 Bi-directional feedback loops (MCP automatically validates that a fix worked).
If there’s something else you’d love to see, let us know — we’re building this in the open, and your feedback shapes the roadmap.
Join hundreds of teams using Currents.
Trademarks and logos mentioned in this text belong to their respective owners.