What connects Pope Leo XIV, an AI conference and wisteria?

19th May 2025

AI conference

I spent yesterday at the AI Ethics, Risks and Safety Conference at Bristol’s Watershed, organised by Karin Rudolph. We’re usually filming and interviewing at events like these, so it felt like a real treat to immerse myself in this world just as an attendee. AI is relevant to so much of our client work, but this kind of conference also hits a sweet spot for me – a Human Scientist at heart – because of the interdisciplinary thinking required to understand, and hopefully guide, the AI revolution.

The sessions kicked off with Dr Amy Dickens from DSIT discussing the challenges of AI governance. DSIT is developing an AI Management Essentials self-assessment tool, inspired by the UK Cyber Essentials scheme, something which we as an SME have found useful.

Next up was Arcangelo Leone de Catris from the Alan Turing Institute, who works on the BridgeAI project, where they’ve built an AI use case framework for specific sectors. I asked why he had picked out the example of automated snow clearing of train carriages. He replied that there wasn’t a particular reason for this; however, as a filmmaker, this is exactly the kind of use case I find human, tangible and memorable: a weather event, a staff resourcing challenge and some clever image analysis tech coming up with smart solutions.

Session two featured Owen Parsons from Mind Foundry, who discussed Human in the Loop versus AI in the Loop methods of training LLMs. He used a neat example of training a chatbot for bird enthusiasts, showing how early human correction of misinterpreted words like ‘cardinal’, combined with later AI refinement, improved accuracy. Then came Geoff Keeling from Google to discuss Ethics and Agentic AI. Geoff struck me as someone with so many possible AI futures simultaneously firing in his brain – a fair number of them dystopian – that I wanted to ask him how he ever managed to quieten his mind. The breadth and depth of his analysis and future-gazing was as impressive as it was unsettling. What if AI made everyone with malicious intent slightly more efficient? What if your AI agent jailbreaks my AI agent? What about AI-driven deskilling? If social media was a trial run for scrambling children’s minds, what on earth will agentic AI do? While there was a nod to AI’s potential to have a rising-tide effect and act as an educational leveller, any optimism soon gave way to the more logical, and potentially negative, outcomes. Cassandra prophecies might not be ideal for our videos, but to be fair, Geoff clearly framed AI as a political and democratic challenge, which felt reassuring in a room full of potential policy influencers.

Sam Young from Catapult Energy Systems offered a more optimistic perspective, highlighting AI’s potential to tackle complex societal problems, particularly in energy use. His talk covered large-scale infrastructure management, localised examples like using data centre heat for swimming pools, and how AI can boost heat pump adoption by upskilling engineers with real-time diagnostics. He also gave a brief insight into the energy efficiency of different AI models, a perfect segue into Richard Gilham’s presentation from BriCS. The first phase of their Isambard-AI supercomputer is ranked second in the world for energy efficiency. The speed at which BriCS are bringing this phenomenal piece of UK capability online is truly remarkable, and our team at Beeston Media are proud to be working with Emily Coles to document its progress, promote existing use cases and encourage partnerships for future ones. This led us neatly into the final session, a panel talk hosted by Ben Byford from the Machine Ethics podcast.

The panelists were Lucy Mason of Capgemini, with whom I had already enjoyed a chat at lunch, and Simon Fothergill. Lucy very wisely pointed out that any predictions about AI’s future are likely to be wrong. Simon likened AI to a weird hall of mirrors offering an automated prediction of information. Mindful of our work with Isambard-AI, I asked which areas of AI the UK should be specialising in. Lucy sees four main areas: City financial services, higher education, creative industries, and her own area of defence and security. There was a sidebar into the exploitation of creatives and the power dynamics that let artists like will.i.am embrace AI. Well done to Ben for raising the issues faced by emerging creatives. How I view this, having had a long career in the media, is another blog entirely…

Overall it was an incredibly energising day. It’s a privilege to gain insights into such a diverse range of sectors. These overviews are invaluable when shaping messaging for individual clients and understanding where their solutions fit within the broader UK landscape. Oh, and the random mention of the new Pope is because he featured in somebody’s slide (Karin’s intro?): in his opening address he said AI poses new challenges for human dignity and justice. He also makes a better picture than the snaps I took of the talks. Likewise the wisteria at the end of my garden. This is where I’ll be offline this weekend, reading one of the books I was recommended yesterday and blissfully keeping Geoff’s cautionary tales from echoing in my mind.

Big thanks to Karin, the volunteers, the sponsor University of Bristol, and Bristol Foresight.

Pope
Garden