The recent ScotSoft 2025 conference, hosted by ScotlandIS, brought together Scotland’s technology leaders, innovators and policymakers. For those of us advising on IP, IT and commercial disputes, it was an invaluable opportunity to horizon-scan and understand the issues our clients are facing now, and the risks that lie ahead.
A week on, with the dust settled and the key themes emerging, the message from ScotSoft is clear: in a world of dazzling tech, the real risk is misplaced trust.
AI isn’t yet a replacement for humans, supply chain compromise is inevitable, and ESG scrutiny will intensify. The winners will be those who embed proactive governance, build resilience into contracts and culture, and put human accountability at the centre. As we enter the age of resilience, innovation is no longer a luxury but a necessity and businesses must balance ambition with robustness.
Pressed for time? Our top five takeaways are set out first. Read on for a fuller exploration of the insights behind them.
TOP 5 KEY TAKEAWAYS:
- AI is powerful, but untrustworthy
- Supply chain attacks are inevitable
- Innovation needs flexibility in contracts
- Sustainability and sovereignty are rising flashpoints
- Humans are the premium value
1. AI is powerful, but untrustworthy
Talks on large language models (LLMs) – such as ChatGPT – highlighted their vulnerabilities: jailbreaking and prompt injection. These are ways of tricking an AI system into ignoring its built-in safeguards or instructions. For example, a ‘prompt injection’ might involve hiding a malicious instruction inside an input (such as a CV or email) that tells the model to disregard its normal rules and behave differently. Similarly, jailbreaking involves phrasing prompts in a way that bypasses restrictions on what the AI should say or reveal. Both methods can manipulate outputs or even expose confidential training data. In a CV-screening tool, for instance, a candidate could use such a trick to force the model to rank their CV above others, and in more sensitive environments, the consequences could be far more serious.
Risk: The risks here are significant. If confidential data leaks through manipulation of an AI system, businesses may face liability under data protection law, regulatory scrutiny, reputational harm and even disputes with vendors whose systems have failed to provide the promised safeguards.
Defences: While layered security measures such as firewalls, filters and adopting a zero-trust approach (where no user, device or system is automatically trusted until verified) can protect your traditional systems, they cannot prevent manipulation of LLMs. The outputs from any AI system must be treated with caution.
Legal takeaway: Contracts must allocate liability clearly and avoid over-reliance on vendor assurances. Clients should ensure internal governance requires human oversight for AI outputs. In other words, the buck should always stop with a human.
2. Supply chain attacks are inevitable
Case studies (including Marks & Spencer’s recent ransomware incident via an IT provider) demonstrated the reality of supply chain vulnerabilities. Even with strong internal training, reliance on third parties creates an unavoidable risk, increasing your organisation’s attack surface.
Risk: Phishing and spear-phishing attacks, compromised software updates or even tampered equipment can paralyse operations, with the compromise of a single weak link capable of cascading into widespread business interruption.
Defences: Supplier due diligence, contractual provisions requiring double-authentication for critical instructions and zero-trust architecture all have a role to play, but as one speaker put it, prevention can never guarantee immunity.
Legal takeaway: When a dispute arises, the strength of a party’s position will turn on the safeguards built into the contract. Warranties, indemnities, insurance cover and double authentication procedures must be made explicit. Future disputes will inevitably turn on whether suppliers were judged to have met “reasonable” standards of security.
3. Innovation needs flexibility in contracts
While much of the discussion around cybersecurity focused on the need for robust contractual safeguards, the next theme at ScotSoft offered a counterpoint: innovation also depends on flexibility. A systems-thinking panel explored how rigid frameworks often stifle innovation more than technical barriers. Skills shortages, procurement processes, and narrow IP licensing terms all hold back progress. The future of innovation lies in balance, contracts that are robust enough to allocate risk clearly, yet flexible enough to allow creative partnerships and change.
Risk: Companies risk being locked into restrictive licensing or R&D agreements that leave little room to pivot or collaborate, stifling innovation rather than supporting it.
Defences: The best defence is contractual flexibility, whether through adaptive licensing models or more collaborative procurement approaches.
Legal takeaway: Businesses should review their existing contracts to ensure they enable rather than obstruct innovation. Future-proofing clauses now are every bit as important as safeguarding intellectual property.
4. Sustainability and sovereignty are rising flashpoints
From Scottish Leather Group’s innovation in sustainable materials to warnings about heavy dependence on Cloud computing providers, as well as biodiversity risk, if there was one theme that echoed across sessions, it was sustainability. Raised repeatedly as both a challenge and an opportunity.
Risk: ESG reporting and the risk of “greenwashing” are increasingly attracting regulatory and litigation attention, while heavy dependence on just four cloud providers raises questions of digital sovereignty and resilience. In particular, the water consumed by data centres to cool ever more AI computing devices, often undisclosed, is coming under scrutiny.
Defences: The defences lie in transparent reporting, diversification of infrastructure and embedding sustainability into procurement practices – all rooted in a broader call for radical accountability.
Legal takeaway: Expect greater regulatory oversight, from environmental reporting to sovereign controls on cloud/AI data. Contracts and disclosures must withstand challenge.
5. Humans are the premium value
Futurist keynote speaker K D Adamson reminded us that while AI hype dominates headlines, people remain the true differentiator. Past technology booms have shown that the boldest claims often reflect corporate optimism more than lived reality.
Risk: Over-reliance on AI can create disputes when decisions go wrong. Accountability cannot be outsourced to an algorithm.
Defences: The defence lies in embedding governance frameworks that mandate human oversight, particularly in areas where decisions carry regulatory or reputational weight.
Legal takeaway: The enduring value is in human judgment, accountability and creativity. Clients should be encouraged to treat AI as a tool to augment, not replace, people, especially in high-stakes or regulated environments where responsibility must always rest with a human decision-maker.
Final Thought
ScotSoft 2025 confirmed that the age of disruption is giving way to the age of resilience. For businesses that means shifting focus from chasing shiny tech to embedding governance, resilience, and accountability at every stage.
As legal advisers, our job is to help businesses anticipate risks before they crystallise into disputes, through stronger contracts, proactive compliance, and an unflinching focus on human accountability in the digital age.
Contributors
Innovation & Technology Director
Legal Director
Solicitor