AI Funding, OpenAI’s Canvas, and Cohere’s Model: This Week in AI
Gentrace raises $8 Million to Democratize AI Testing for Entire Product Teams
San Francisco, CA – Gentrace, a developer platform revolutionizing AI testing and monitoring, announced an $8 million Series A funding round this week. The round, led by Matrix Partners, brings the startup’s total funding to $14 million.
Gentrace empowers entire product teams – from product managers and designers to subject matter experts and quality assurance – to collaborate seamlessly with engineering teams on evaluating AI model performance.
[Image: Three men sit on a bench in front of a green grassy area with a city skyline behind them. From left: Vivek Nair, co-founder and CTO; Doug Safreno, co-founder and CEO; and Daniel Liem, co-founder and COO. Photo: Gentrace]
This latest funding round coincides with the launch of Gentrace Experiments, a groundbreaking tool designed for collaborative large language model (LLM) testing. Experiments allows teams to preview test outcomes before deploying models, anticipate potential errors, and streamline the development process.
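To make the idea of collaborative LLM testing concrete, here is a minimal, hypothetical sketch of an evaluation harness in Python. It is purely illustrative: the names TestCase, run_experiment, and stub_model are invented for this example and do not represent Gentrace’s actual SDK or API. The point is that test cases can be simple enough for non-engineers to author, while the harness reports a pass rate before a model version ships.

```python
# Illustrative sketch only -- not Gentrace's SDK. Shows the general shape of
# previewing LLM test outcomes before deploying a model.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    prompt: str
    expected_keyword: str  # a simple pass/fail criterion a PM or QA reviewer could write

def run_experiment(model: Callable[[str], str], cases: list[TestCase]) -> float:
    """Run every test case through the model and return the pass rate."""
    passed = 0
    for case in cases:
        output = model(case.prompt)
        if case.expected_keyword.lower() in output.lower():
            passed += 1
        else:
            print(f"FAIL: {case.prompt!r} -> {output!r}")
    return passed / len(cases)

if __name__ == "__main__":
    # Stub standing in for a real LLM call, so the sketch runs offline.
    def stub_model(prompt: str) -> str:
        return "Paris is the capital of France."

    cases = [TestCase("What is the capital of France?", "Paris")]
    print(f"Pass rate: {run_experiment(stub_model, cases):.0%}")
```

In a real workflow, the keyword check would be replaced by richer scorers (exact match, rubric grading, model-graded evaluation), but the preview-before-deploy loop is the same.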
“Generative AI is transforming software development, but there’s a critical need for clear, reliable methods to test and build these models effectively,” said Doug Safreno, co-founder and CEO of Gentrace. “We’re not just creating another developer tool; we’re reimagining how entire organizations can collaborate and build better LLM products.”
Gentrace’s platform addresses the growing demand for accessible and collaborative AI testing solutions as businesses increasingly integrate LLMs into their products and services. By democratizing the AI testing process, Gentrace empowers organizations to build more robust, reliable, and trustworthy AI-powered applications.
