Pre-seed funding propels Holmes into the autonomous QA testing market
Holmes has emerged from stealth with a €1.1 million pre-seed round to build an autonomous QA testing platform designed for teams shipping software at AI speed. The round was led by Syndicate One, with participation from founders and executives behind companies such as Aikido and Showpad, as well as several early-stage investment funds. The founding trio of Robin Praet, Robbrecht Delrue, and Sofie Buyse brings prior exit experience from Smartendr and Henchman, giving the startup strong product and go-to-market credentials. Holmes plans to use the capital to expand its product and engineering teams and to move beyond its current network of design partners. The funding underscores growing investor conviction that quality assurance must be reimagined for AI-driven development, where code is generated and shipped faster than traditional QA processes can reliably validate it.

Closing the gap between AI-driven development and legacy QA workflows
As AI-assisted coding tools accelerate delivery, development teams face a widening gap between rapid releases and slow, manual QA workflows. Engineers still spend considerable time writing and maintaining test scripts, while product managers and ad hoc testers click through user interfaces to validate key flows. Holmes targets this pain point directly by offering continuous software testing that runs inside the tools teams already use. Rather than verifying only whether code compiles or unit tests pass, Holmes focuses on whether the overall product behaves as users expect in real-world conditions. This approach aims to reduce reliance on manual regression cycles that often delay releases or, when skipped, allow defects to reach production. By embedding automated bug detection earlier in the lifecycle, Holmes wants to let teams preserve the speed gains of AI-driven development without trading away quality.

How Holmes’ autonomous QA platform works under the hood
Holmes differentiates itself from traditional testing frameworks by learning the product as users experience it, rather than depending on predefined scripts. The platform observes user interactions to understand complete journeys—from sign-up and login through checkout, search, navigation, and forms—and then generates tests that continuously verify those flows as the product changes. Five specialised AI agents focus on different dimensions of quality: happy paths, edge cases, responsive layouts, accessibility, and error recovery. This multi-agent setup allows Holmes to detect subtle regressions that might be missed by conventional automated suites. Crucially, the system keeps tests up to date automatically, aiming to eliminate the maintenance burden that often undermines test coverage over time. The result is a continuous software testing layer that runs in the background, catching issues early while teams continue to ship new features at high velocity.
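Holmes has not published its internals, but the multi-agent idea described above can be sketched in a few lines. The snippet below is purely illustrative: the `Journey`, `Finding`, and agent names are hypothetical, and only two of the five quality dimensions are modelled, simply to show how specialised checks might each inspect the same observed user journey.

```python
# Hypothetical sketch of a multi-agent QA pass; all names are invented
# for illustration and do not reflect Holmes' actual implementation.
from dataclasses import dataclass


@dataclass
class Journey:
    """A user flow reconstructed from observed interactions."""
    name: str
    steps: list  # ordered UI actions, e.g. "login", "add-to-cart"


@dataclass
class Finding:
    """One issue or follow-up test surfaced by an agent."""
    agent: str
    journey: str
    detail: str


def happy_path_agent(journey: Journey) -> list:
    # Replays the journey end to end; flags flows with nothing to replay.
    if not journey.steps:
        return [Finding("happy-path", journey.name, "no steps recorded")]
    return []


def edge_case_agent(journey: Journey) -> list:
    # Heuristic: any journey touching a form should also be run with bad input.
    if any("form" in step for step in journey.steps):
        return [Finding("edge-case", journey.name, "fuzz form inputs")]
    return []


# The article lists five dimensions (happy paths, edge cases, responsive
# layouts, accessibility, error recovery); two are modelled here.
AGENTS = [happy_path_agent, edge_case_agent]


def run_suite(journeys: list) -> list:
    """Run every agent against every journey and collect findings."""
    findings = []
    for journey in journeys:
        for agent in AGENTS:
            findings.extend(agent(journey))
    return findings


checkout = Journey("checkout", ["login", "add-to-cart", "payment-form"])
findings = run_suite([checkout])
```

In this toy run, only the edge-case agent reports anything, since the checkout journey includes a form step. The design point it illustrates is that each agent stays small and single-purpose, while coverage comes from fanning every observed journey out across all of them.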

Redefining QA ownership as teams scale without large test departments
Holmes’ founders argue that most software teams delay building dedicated QA departments, leaving testing work distributed among developers and product managers. This shared responsibility model becomes fragile as products and teams scale: manual testing grows linearly with feature complexity, while expectations for rapid releases keep rising. By automating end-to-end user journey validation, Holmes aims to remove much of this overhead and allow small teams to maintain enterprise-level quality standards. Continuous, autonomous QA testing is positioned as a way to catch bugs before they reach users, reducing the cost and reputational damage of production incidents. For organisations, the promise is the ability to sustain AI-driven development speeds without proportionally expanding headcount in QA. With guidance from experienced technology leaders and feedback from around 30 design partners, Holmes is betting that the future of software quality will be anchored in autonomous, always-on testing platforms.
