Holmes Secures €1.1M to Reimagine QA for AI-Speed Teams
Holmes has launched with a €1.1 million pre-seed round to build an autonomous QA platform tailored for teams shipping software at AI speed. The round was led by Syndicate One, joined by notable founders and investors including Aikido’s Roeland Delrue and Willem Delbare, Showpad co-founder Louis Jonckheere, and serial entrepreneur Thomas Van Overbeke, alongside funds NewSchool.vc, RDY Capital, and 100IN. The startup is led by co-founders Robin Praet, Robbrecht Delrue, and Sofie Buyse, all with prior exit experience in SaaS and AI tools. Their shared thesis: as AI-driven development accelerates code creation, software testing automation has not kept up, turning QA into a release bottleneck. With this funding, Holmes plans to deepen its product capabilities, grow its engineering team, and expand rollout beyond its early design partners, who are already shaping the platform’s roadmap.

From Manual Scripts to Autonomous QA: How Holmes Works
Holmes positions itself as an autonomous QA platform that replaces manually written and maintained test suites with self-updating, AI-powered checks. Instead of relying on engineers to script every test case, Holmes learns how a web application behaves by observing real user journeys, such as sign-up, login, search, navigation, forms, and checkout. It then continuously generates and runs tests against these flows, even as the product evolves. Under the hood, five specialised AI agents handle different aspects of quality: happy paths, edge cases, responsive layouts, accessibility, and error recovery. This design aims to catch issues that may not be obvious from code alone but emerge in real-world usage. By embedding inside existing development tools and workflows, Holmes aspires to deliver continuous testing that runs in the background, catching problems before they reach users while freeing developers and product managers from repetitive QA work.
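Holmes has not published its internals, but the multi-agent idea described above can be illustrated with a minimal Python sketch. Here, each hypothetical agent inspects one quality dimension of a recorded user flow; all names (`FlowStep`, `HappyPathAgent`, and so on) are invented for this example and do not reflect Holmes’ actual API.

```python
from dataclasses import dataclass

@dataclass
class FlowStep:
    action: str      # e.g. "fill", "click", "submit"
    target: str      # a selector or field identifier
    ok: bool = True  # whether the step succeeded in the last observed run

@dataclass
class UserFlow:
    name: str
    steps: list

class HappyPathAgent:
    """Checks that every observed step in the flow still succeeds."""
    name = "happy_path"

    def check(self, flow):
        failures = [s.target for s in flow.steps if not s.ok]
        return (len(failures) == 0, failures)

class AccessibilityAgent:
    """Toy check: flags form fields recorded without an accessible label."""
    name = "accessibility"

    def check(self, flow):
        missing = [s.target for s in flow.steps
                   if s.action == "fill" and not s.target.startswith("label:")]
        return (len(missing) == 0, missing)

def run_agents(flow, agents):
    """Run every agent against one recorded flow and collect pass/fail results."""
    return {agent.name: agent.check(flow) for agent in agents}

# A recorded sign-up journey, as a specialised agent pipeline might see it.
signup = UserFlow("signup", [
    FlowStep("fill", "label:email"),
    FlowStep("fill", "password"),          # no accessible label: gets flagged
    FlowStep("submit", "#create-account"),
])

report = run_agents(signup, [HappyPathAgent(), AccessibilityAgent()])
```

In this sketch, `report["happy_path"]` passes while `report["accessibility"]` flags the unlabeled password field; a production system would observe real traffic and regenerate these checks as the product changes, rather than relying on a hand-coded flow.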

Solving the Testing Bottleneck in AI-Driven Development
AI coding assistants have dramatically increased development velocity, but QA practices remain largely manual. Engineers still write and maintain tests, while testers click through interfaces to ensure everything works. According to Holmes, this mismatch forces teams into an uncomfortable trade-off: either slow down releases to test thoroughly, or ship quickly and risk bugs slipping into production. Co-founder Robin Praet notes that the real question is no longer whether code looks correct, but whether the product behaves as users expect in practice. Holmes targets this gap by focusing on user-facing flows and real-world behaviour, rather than static code analysis alone. For early-stage teams that rarely invest in large QA departments, and for scaling organisations where manual testing strains release schedules, an autonomous QA platform promises software testing automation that keeps pace with AI-driven development without ballooning headcount.

A New Category of AI-Native Continuous Testing Tools
Holmes exemplifies an emerging category of AI-native continuous testing tools built around autonomous agents rather than human-crafted scripts. Instead of treating QA as a separate, late-stage step, these platforms integrate directly into development pipelines and product analytics, continuously learning from user behaviour. Holmes’ approach—using multiple AI agents to monitor happy paths, edge cases, layout changes, accessibility, and resilience—signals a shift toward more holistic, user-centric quality assurance. The company is already collaborating with around 30 design partners and an advisory group of experienced technology leaders, indicating early market appetite for such tools. As more organisations adopt AI-driven development, investor interest in autonomous QA platforms is likely to grow. Holmes’ pre-seed round suggests that automated, self-maintaining testing systems may become a standard part of modern software stacks, reshaping how teams think about quality, ownership, and release confidence.
