All Posts


Why In-House Teams Inherit Problems They Didn’t Create
If you have worked in-house for more than five minutes, you have probably felt it: you arrive with a remit to “enable the business”, and quickly discover you are also inheriting a backlog of decisions you did not make. It is rarely malicious. It is usually the by-product of speed, decentralised decision-making, staff turnover, and a genuine belief that “we will tidy it up later”. The issue is that “later” often arrives in the form of an audit, a dispute, a cyber incident…
Trent Smith
Jan 23 · 8 min read


AI for HR: Improve Policies, Compliance, Investigations and Drafting with Contract Cloud
HR work is demanding. The team must interpret policies, manage performance issues, run investigations, handle grievances, navigate restructures, and maintain accurate records that may be reviewed years later by auditors, regulators or tribunals. At the same time, HR sits in the middle of corporate compliance, legal compliance and employee experience. Contract Cloud is built for that purpose. It is 100 percent Australian owned, keeps data in Australia, and uses retrieval…
spantaleo
Dec 7, 2025 · 8 min read


AI and RFPs: How Contract Cloud Transforms Procurement From First Draft To Ongoing Management
RFPs are demanding. Procurement must coordinate business input, capture detailed requirements, keep responses aligned with policy, negotiate departures and then manage the contract for years afterwards. AI can remove much of the manual effort from that process, provided it is grounded in your own documents, policies and risk settings rather than generic internet content. Contract Cloud is designed for that purpose. It uses AI with retrieval augmented generation (RAG), keeps…
Trent Smith
Nov 15, 2025 · 8 min read


AI Hallucinations: Why They Happen and How to Prevent Them
What Are AI Hallucinations? In artificial intelligence, a hallucination occurs when an AI system produces information that appears confident and factual but is incorrect, fabricated, or misleading. These hallucinations are not deliberate falsehoods. They are a by-product of how large language models (LLMs) work: by predicting likely word sequences based on patterns in their training data. When the model lacks the right information or context, it still generates an answer…
Trent Smith
Oct 26, 2025 · 4 min read
