During the day, it became clear that many discussions centred on speed – the need to build, launch and scale quickly. But for organisations operating in complex and regulated environments, speed alone is not a sufficient strategy.
As part of the event, a video was shown in which Pontus Holmberg, CPO at Roaring, highlighted a central point for AI in risk and compliance: when AI is used to support real decisions, it is not enough that the technology can formulate convincing answers. The analysis must also withstand scrutiny.
Three critical requirements for AI in regulated environments
According to Pontus, the following three factors are critical for organisations to be able to trust the technology:
- Traceability: The analysis must be traceable, making it possible to document exactly how a decision was reached.
- Weighting of evidence: AI must be able to weigh data and evidence against each other, rather than simply generating a credible answer.
- The right use of language models: Language models should be used where they add the most value – for summarisation and documentation support – not as the analysis engine itself.
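The first two requirements can be sketched in code. The following is a minimal illustration, not Roaring's implementation; all signal names, weights and thresholds are hypothetical. The point is that weighing evidence and recording an audit trail are one and the same step, so every decision can be documented afterwards.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str       # hypothetical evidence source, e.g. "sanctions_list_hit"
    value: float    # normalised evidence strength, 0.0 to 1.0
    weight: float   # how heavily this signal counts in the decision

def assess(signals, threshold=0.5):
    """Return (decision, audit_trail) so the outcome can be documented."""
    # Weighting of evidence: each signal contributes value * weight.
    contributions = [(s.name, s.value * s.weight) for s in signals]
    score = sum(c for _, c in contributions)
    decision = "escalate" if score >= threshold else "approve"
    # Traceability: record exactly how the decision was reached.
    audit = {"score": score, "threshold": threshold,
             "contributions": dict(contributions)}
    return decision, audit

decision, audit = assess([
    Signal("sanctions_list_hit", 1.0, 0.4),
    Signal("ownership_opacity", 0.5, 0.3),
    Signal("address_mismatch", 0.2, 0.3),
])
print(decision, round(audit["score"], 2))
```

Because the audit trail lists every contribution next to the threshold, a reviewer can reconstruct the decision without re-running the analysis.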
Roaring Hybrid Intelligence
This perspective forms the foundation for the development of Roaring Hybrid Intelligence. The solution is built on an architecture where AI is used to calculate and structure the analysis, while the language model is used to explain the result. This enables organisations to combine efficiency with control, transparency and real-world applicability.
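The division of labour described above can be sketched as follows. This is an illustrative pattern, not Roaring's API: a deterministic engine calculates and structures the result, and the language-model layer only turns that structured result into prose (stubbed here as a template; a real system would prompt a language model with the structured output).

```python
def analysis_engine(company):
    """Deterministic, auditable step: calculates and structures the result.
    The rule and score below are hypothetical."""
    flagged = bool(company.get("sanctioned"))
    return {"company": company["name"],
            "risk_score": 0.8 if flagged else 0.1,
            "flags": ["sanctions"] if flagged else []}

def explain(result):
    """The language model's role: explain an already-computed result.
    A plain template stands in for the model call in this sketch."""
    return (f"{result['company']} scored {result['risk_score']:.1f}. "
            f"Flags: {', '.join(result['flags']) or 'none'}.")

structured = analysis_engine({"name": "Example AB", "sanctioned": True})
print(explain(structured))
```

The key design choice is that the language model never produces the number or the decision, only the wording, so the explanation can always be checked against the structured result it was given.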
The conversations at Breakit AI Day confirmed that while AI is often about what can be built quickly, in regulated environments it is just as much about what can truly withstand scrutiny.
