Funding Momentum Signals a New Era for Quality Assurance
Holmes has secured €1.1 million in pre-seed funding to build an autonomous QA platform aimed at teams shipping software at AI speed. The round was led by Syndicate One, joined by founders and executives behind tools such as Aikido and Showpad, as well as several early-stage funds. The founding trio—Robin Praet, Robbrecht Delrue and Sofie Buyse—bring prior exit experience from ventures in legal-tech and hospitality software, giving them first-hand exposure to how quality assurance often lags behind rapid product development. Their thesis is simple: AI-driven development is rewriting how code is produced, but traditional testing still depends on manually written and maintained cases. Holmes wants to close this gap by turning QA into a continuous, autonomous process that scales alongside AI-powered engineering, rather than acting as a brake on release velocity.

From Manual Scripts to Autonomous QA Platforms
Traditional software testing automation relies heavily on engineers scripting tests, updating them when interfaces change, and QA specialists manually clicking through applications. As products grow, these workflows become brittle and time-consuming, which is why dedicated QA teams often emerge only after testing has already become a bottleneck. Holmes’ autonomous QA platform takes a different approach. Instead of starting from predefined scripts, it learns how a product works by observing real user flows—sign-up, login, search, checkout, navigation and forms—and continuously turns those journeys into living tests. By running inside the tools development teams already use, it minimizes process friction and reduces the need for manual oversight. This shift from scripted automation to autonomous testing is critical for organisations seeking true continuous testing, particularly when release cycles are measured in hours or days rather than weeks.
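Holmes has not published its internals, but the idea of "living tests" derived from observed journeys can be loosely illustrated. The sketch below is purely hypothetical: the `ObservedStep`, `LivingTest`, and `derive_test` names are invented for illustration, and the point is only that a test is re-derived from the latest observed flow rather than hand-edited as a script.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ObservedStep:
    action: str              # e.g. "navigate", "fill", "click"
    target: str              # a semantic label rather than a brittle selector
    value: Optional[str] = None

@dataclass
class LivingTest:
    name: str
    steps: list = field(default_factory=list)

    def update(self, new_steps):
        # Re-derive from the latest observed journey instead of
        # manually patching a script when the interface changes.
        self.steps = [ObservedStep(*s) for s in new_steps]

def derive_test(journey_name, observed_steps):
    """Turn one observed user journey into a named, re-derivable test."""
    test = LivingTest(name=f"flow:{journey_name}")
    test.update(observed_steps)
    return test

# Example: a sign-up journey captured from real usage
signup = derive_test("sign-up", [
    ("navigate", "/signup"),
    ("fill", "email field", "user@example.com"),
    ("click", "submit button"),
])
```

When the product changes, calling `signup.update(...)` with the newly observed steps replaces the whole flow, which is the contrast with scripted automation, where each selector change must be fixed by hand.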

Continuous Testing for AI-Driven Development Cycles
AI-driven development has dramatically accelerated the pace at which new features ship. AI code assistants help developers generate code faster, but that speed exposes a new risk: code that compiles and looks correct may still fail in real-world user scenarios. Holmes targets exactly this gap. Its autonomous QA platform continuously evaluates user-facing flows and updates tests as the product evolves, enabling continuous testing without manual workflows. Five specialised AI agents cover happy paths, edge cases, responsive layouts, accessibility and error recovery, extending beyond purely functional checks. By running in the background, the platform catches issues before they reach users, allowing teams to preserve the speed benefits of AI without trading off reliability. In AI-driven development environments, this kind of always-on validation is quickly becoming a necessity rather than a luxury.
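The five quality dimensions named above can be pictured as independent checks run against the result of each flow. This is a minimal sketch, not Holmes' actual architecture: the check functions, the `AGENTS` registry and the shape of the `result` dictionary are all assumptions made for illustration.

```python
# Hypothetical checks, one per quality dimension mentioned in the article.
def check_happy_path(result):
    return result.get("status") == "ok"

def check_edge_cases(result):
    return not result.get("unhandled_errors")

def check_responsive(result):
    return all(result.get("layouts_ok", {}).values())

def check_accessibility(result):
    return result.get("a11y_violations", 1) == 0

def check_error_recovery(result):
    return result.get("recovers_from_failure", False)

AGENTS = {
    "happy_path": check_happy_path,
    "edge_cases": check_edge_cases,
    "responsive": check_responsive,
    "accessibility": check_accessibility,
    "error_recovery": check_error_recovery,
}

def run_agents(result):
    """Apply every agent's check to one flow result; return a pass/fail report."""
    return {name: agent(result) for name, agent in AGENTS.items()}

# A flow result where every dimension passes
report = run_agents({
    "status": "ok",
    "unhandled_errors": [],
    "layouts_ok": {"mobile": True, "desktop": True},
    "a11y_violations": 0,
    "recovers_from_failure": True,
})
```

The useful property of this shape is that each dimension fails independently: a layout regression shows up under `responsive` without masking an accessibility or error-recovery issue.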

Reducing Manual QA Workloads and Ownership Gaps
Many teams experience QA as critical work that nobody truly owns. Skilled QA professionals are expensive and scarce, so testing often lands on product managers and developers who already juggle roadmaps, stakeholder requests and coding tasks. When release deadlines loom, manual regression testing is frequently the first activity to be compressed, allowing bugs to slip through. Holmes addresses this organisational pain point by taking routine testing off human plates and running it autonomously. The platform continuously validates core user journeys, freeing product managers and engineers to focus on feature design, architecture and customer feedback. As companies scale, this reduces the need to rapidly hire large QA teams while still increasing coverage. For organisations striving to keep pace with AI-powered product iterations, autonomous QA becomes a structural solution to the testing ownership gap, rather than a temporary patch.
