Holmes Secures €1.1M to Reimagine QA for AI-Driven Development
Holmes has launched with €1.1 million in pre-seed funding to build an autonomous software testing platform designed for teams moving at AI speed. The round was led by Syndicate One and joined by the founders of Aikido, Showpad co-founder Louis Jonckheere, serial entrepreneur Thomas Van Overbeke, and funds including NewSchool.vc, RDY Capital, and 100IN. The startup is led by co-founders Robin Praet, Robbrecht Delrue, and Sofie Buyse, who bring prior exit experience from products such as Smartendr and Henchman. Their thesis is straightforward: AI accelerates code creation, but quality assurance still relies heavily on manual test writing and click-through validation. That mismatch creates a bottleneck just as release cycles shorten. Holmes plans to use the fresh capital to deepen its AI QA platform, grow its engineering and product teams, and expand beyond its early design partners, positioning itself as a next-generation offering for continuous testing automation.

From Manual Scripts to Autonomous Software Testing
Holmes positions itself as an autonomous QA platform that learns how products actually work and how users interact with them, instead of relying on fragile, predefined scripts. By observing user journeys—from sign-up and login to search, forms, navigation, checkout, and error flows—the platform constructs a model of critical paths through a web application. It then generates and continuously updates tests that track those flows as the product evolves. Five specialised AI agents handle core areas such as happy-path behaviour, edge cases, responsive layouts, accessibility, and error recovery. This architecture is designed to minimise the need for developers and QA staff to manually maintain test suites. The result is a continuous testing automation engine that runs inside the tools teams already use, turning QA into a background process instead of a release-stage chore, and aiming to catch issues before they reach real users.
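To make that architecture concrete, here is a minimal, purely illustrative TypeScript sketch of how an autonomous QA engine might represent a learned user journey and fan it out across specialised agents (happy path, edge cases, error recovery). The interfaces, agent names, and steps below are hypothetical assumptions for illustration only and do not reflect Holmes' actual implementation or API.

```typescript
// Conceptual sketch only: a toy model of how an autonomous QA engine could
// represent observed user journeys and hand them to specialised agents.
// All names here are illustrative assumptions, not Holmes' real API.

// A single observed interaction in a web application.
interface Step {
  action: "goto" | "fill" | "click" | "expect";
  target: string;          // URL, selector, or assertion description
  value?: string;          // input value for "fill" steps
}

// A critical path learned from real user behaviour, e.g. sign-up or checkout.
interface Journey {
  name: string;
  steps: Step[];
}

// Each agent specialises in one quality dimension and derives test cases
// from the same learned journey.
interface Agent {
  name: string;
  derive(journey: Journey): Journey[];
}

// Happy-path agent: replay the journey exactly as users perform it.
const happyPathAgent: Agent = {
  name: "happy-path",
  derive: (j) => [j],
};

// Edge-case agent: mutate inputs to probe validation and boundary handling.
const edgeCaseAgent: Agent = {
  name: "edge-case",
  derive: (j) => [
    {
      name: `${j.name} (empty inputs)`,
      steps: j.steps.map((s) =>
        s.action === "fill" ? { ...s, value: "" } : s
      ),
    },
  ],
};

// Error-recovery agent: append a check that failures surface gracefully.
const errorRecoveryAgent: Agent = {
  name: "error-recovery",
  derive: (j) => [
    {
      name: `${j.name} (failure handling)`,
      steps: [...j.steps, { action: "expect", target: "friendly error banner" }],
    },
  ],
};

// A journey the platform might have learned by observing real sign-ups.
const signup: Journey = {
  name: "sign-up",
  steps: [
    { action: "goto", target: "/signup" },
    { action: "fill", target: "#email", value: "user@example.com" },
    { action: "click", target: "#submit" },
    { action: "expect", target: "welcome page" },
  ],
};

// Fan the learned journey out across agents; each derived variant would be
// run continuously by a browser-automation runner as the product evolves.
for (const agent of [happyPathAgent, edgeCaseAgent, errorRecoveryAgent]) {
  for (const derived of agent.derive(signup)) {
    console.log(`[${agent.name}] would run: ${derived.name} (${derived.steps.length} steps)`);
  }
}
```

In a real system of this kind, the derived journeys would be executed against the live application and regenerated whenever the observed flows change; the sketch only shows the fan-out from one learned journey to agent-specific test variants.
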
Closing the Gap Between AI-Speed Delivery and QA Constraints
Holmes is explicitly targeting a structural gap in modern software delivery: code is written faster than ever with AI assistants, but testing remains largely manual. Engineers are still expected to write and maintain tests, while QA specialists or product managers manually click through interfaces to confirm everything works. According to Holmes' founders, that process doesn't scale when releases become more frequent and complex, forcing teams to trade off between speed and reliability. Bugs that slip through can erode user trust and slow growth. Holmes' autonomous software testing approach aims to absorb that pressure by continuously validating real user flows as development progresses. Instead of asking only whether the code compiles or passes isolated checks, its AI QA platform focuses on whether the product behaves as users expect in real-world environments, offering an automated safety net for teams embracing aggressive, AI-accelerated release cadences.
An Emerging Category of AI-Powered Bug Detection Tools
Holmes reflects a broader shift toward AI-powered bug detection tools that extend beyond traditional unit tests and scripted UI checks. As organisations adopt AI coding assistants, they increasingly need QA systems that can keep up with shorter development cycles and more frequent deployments. Holmes’ model—using multiple AI agents to observe behaviour, learn user journeys, and run autonomous test suites—signals an emerging category where QA becomes a continuous, data-driven process rather than a discrete project phase. The company is already collaborating with around 30 design partners, using their feedback to refine its autonomous QA platform before broader rollout. Backed by experienced advisors from established technology companies, Holmes is betting that future software teams will rely on continuous testing automation as a standard part of their development stack, shifting QA ownership from overburdened humans to specialised AI systems.
