
Building an AI-ready investigation team: People, skills and the right tools

The Comtrac Team

Jul 16, 2025

5 Min Read

Artificial intelligence has already shown it can be a game-changer for investigations, whether you are dealing with domestic and family violence, financial crime, regulatory breaches, or workplace misconduct. But one thing is clear: AI will never replace the experience and judgment of skilled investigators. 

Rather than taking over, AI is designed to work alongside investigators. It can handle routine tasks more efficiently, uncover connections that might otherwise go unnoticed, and give teams back valuable time to focus on complex decisions that rely on real-world context and sound ethical judgment. The real challenge now is ensuring your people, your processes, and your ways of working are ready to make the most of this powerful partnership. 

Human in the loop: Why investigators are still essential 

Despite all the hype around “automation,” truly effective investigations are never completely automated. AI can handle the heavy lifting by analysing thousands of documents in seconds, flagging unusual patterns, and connecting people, entities, and transactions. However, every flagged lead still requires human judgment to determine whether it is relevant, defensible, and meaningful. 

A skilled investigator brings context and insight that AI simply cannot replicate. They understand organisational culture, subtle nuances, and what is typical for their specific caseload. This is why the most effective approach keeps people involved at every step. AI can generate insights and surface information, but it is people who interpret the findings and make the final decisions. 

The danger of relying on public Large Language Models (LLMs)

With the explosion of public LLMs like ChatGPT and other generative AI tools, many investigators are rightly asking: “Can I just use these free tools to draft documents or summarise evidence?” 

The reality is that public LLMs carry significant risks. When you upload sensitive information about individuals, cases, or organisations, you have no control over where that data might end up or how it could be used. There is no guarantee of data sovereignty or confidentiality. Even more concerning is that generic LLMs can sometimes “hallucinate” by producing content that sounds convincing but is actually inaccurate. In an investigation or legal context, this can quickly damage your credibility and create serious legal risks. 

The need for purpose-built, secure AI tools is clearer than ever. Investigators must be able to trust that their data stays protected, audit trails remain intact, and the AI is tailored to the standards and compliance requirements of their industry. 

What the NSW Supreme Court’s new rules tell us 

This balance between risk and benefit is unfolding in real time. In Australia, the NSW Supreme Court recently introduced new rules on the use of generative AI in legal proceedings, a development that shows just how quickly the conversation is evolving. 

According to the Practice Note, lawyers are permitted to use generative AI for tasks such as producing chronologies, indexes, witness lists or summarising transcripts. However, they are not allowed to rely on AI to prepare evidence or submissions without appropriate human oversight. Importantly, the rules emphasise that the technology must be used responsibly, with clear accountability for accuracy. 

This approach makes sense for investigations too. Targeted, explainable AI can be a trusted partner helping teams manage large volumes of information efficiently. But when it comes to analysis, conclusions, or decisions that could impact people’s lives, humans must remain firmly in control. 

Reach out to the Comtrac team at innov8@comtrac.com.au if you would like a detailed breakdown and response to the New South Wales Supreme Court Practice Note.

Upskilling: A new skillset for a new era 

Investigators don’t need to become data scientists to get the best out of AI, but they do need the confidence and skills to work with it properly. It’s important to understand how these tools process information, where they can fall short, and when to double-check or question what comes out. 

Being comfortable with data is now just part of the modern investigator’s skill set. Just as important is building a team culture where AI is seen as a helpful tool rather than something to be feared. Running pilots or proofs of concept, giving people practical training, and sharing real examples of what works can all help build trust and buy-in. 

Tune in to hear Comtrac Founder Craig Doran share insights from a successful AI proof of concept with a state policing agency.

It’s also vital that investigators can put AI’s insights into real-world context. A flagged link might look interesting on paper, but does it actually matter for this case? Is it worth pursuing, or is it just noise? Knowing how to make that call is what keeps an investigation fair, defensible, and focused. 

On top of that, teams need a bit of technical confidence. They should feel comfortable tweaking settings, adjusting filters, and telling the system when it gets something wrong. Many AI tools improve with human feedback, so the people using them are an important part of making sure they keep getting better. 

Finally, everyone needs to feel like they have a stake in this. AI works best when people believe it’s there to help them do their jobs better, not take those jobs away. Giving investigators ownership and a clear role in how these tools are used is what turns AI into a real partner, not just another piece of software. 

A future-ready investigation team 

AI is changing investigations, but it does not change the fact that trust, ethics, and human insight remain at the centre of every investigation. 

Teams that invest in the right tools, such as the secure, purpose-built Comtrac Investigation Management Platform, will stay ahead. Teams that support these tools with strong governance and clear guardrails will maintain public trust. And teams that invest in their people by giving them the skills, confidence, and mindset to use AI effectively will strengthen their ability to deliver thorough, impactful investigations. 

As the NSW Supreme Court’s new guidance shows, this is a fast-moving space. Rules, risks, and opportunities will continue to evolve. However, one principle will remain true: AI is here to help investigators do what they do best more thoroughly, more efficiently, and with greater confidence than ever before.