
Getting up close and personal with AI


I recently completed the University of Oxford Artificial Intelligence Programme, a comprehensive exploration of a technology that some experts describe as ‘more profound than electricity or fire’. The course was not just a technical deep dive; it was a strategic roadmap for navigating the Fourth Industrial Revolution and understanding how AI, as a general-purpose technology, will transform every sector of our economy.

The programme provided a rigorous foundation in AI, beginning with its 60-year history and the key moments in its evolution, from the ‘AI winters’ to today’s pervasive technology, driven as it is by massive computational power and data consumption. The history alone was worth studying.

Throughout the six modules, I gained a deep understanding of the technical landscape, distinguishing between Predictive AI (classification and forecasting, such as mapping out crime hotspots), Generative AI (creating new content via LLMs, such as the image in this blog), and the emerging field of Agentic AI – autonomous systems capable of iterative planning to solve complex goals (for example, planning and booking your holiday’s travel and accommodation).

The programme also explored the mechanics of Machine Learning, including supervised, unsupervised, and reinforcement learning, and how Artificial Neural Networks mimic the human brain to process information. We also got to know key characters in the development of AI, such as Nobel laureate Professor Geoffrey Hinton, otherwise known as the ‘godfather of AI’. Listening to interviews with him over the last couple of years has been incredibly enlightening.

Digging into AI’s journey – the highs and lows – was key to understanding why ChatGPT made such an impact at the turn of 2022/23. One key takeaway from this element of the programme was being introduced to Amara’s Law: we tend to overestimate technology in the short run but underestimate its impact in the long run. This helps to explain both the massive investment in AI start-ups today and the hype that surrounds them.

From a business perspective, the course reframed AI as a tool that significantly lowers the cost of routine activity; AI allows organisations to make faster, cheaper, and more accurate decisions, which in turn increases the value of human judgment and actionable insights, characteristics beyond the ability of AI at present. I learned to identify high-ROI opportunities by focusing on incremental innovation – small, process-improving changes – rather than high-risk ‘moon shot’ projects. Chasing moon shots is a mistake business leaders often make. We have all seen the reports that many organisations are failing to see a financial return on their investment, largely (although not exclusively) because of a lack of rigour when conceiving AI’s role in an organisation’s operations.

For our final assignment (one a week, every week, no excuses!), we had to submit an actionable business case for introducing AI into our business. Through this strategic lens, I am now able to help others make vital decisions, such as resolving the ‘make vs. buy’ dilemma, amongst other challenges. This gives organisations the opportunity to configure AI assets uniquely and gain a competitive advantage, rather than just buying generic, off-the-shelf tools that competitors also use.

Perhaps most importantly, the programme addressed the three major pitfalls of AI: privacy, bias, and ‘explainability’. The first two have been well documented, but the need to understand why an AI model has made a particular decision, especially in a medical screening scenario, remains a challenge. Understanding these risks is essential for maintaining an organisation’s reputation and credibility, as well as complying with emerging regulations, such as the EU AI Act.

We also looked at the impact on the workforce, with the programme moving past the much-documented fear of displacement towards the concept of ‘Centaurs’ – human-AI hybrids – where employees use their creative problem-solving and social intelligence to oversee and interact with algorithms. On the whole, AI will augment the existing workforce, not displace it. The programme highlighted the well-known quote: ‘AI will not replace humans, but humans who use AI will replace those who do not.’

As someone posted on LinkedIn this week, rather than performing the mundane, repetitive tasks we learned over decades but were never ideally suited to, humans need to get back to what they are best at: creativity, ingenuity and judgement.

The programme also didn’t shy away from the bigger questions around how AI could develop in the coming years. The singularity, the point at which AI meets and then exceeds human capabilities, is some way off. The greater concern is the application of AI today when it falls into the wrong hands, above all autonomous weapons and rogue biological weapons research. Isaac Asimov’s Three Laws of Robotics, an ethical framework for AI safety first introduced in 1942, were designed to prevent robots from harming humans. The laws state:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

We have to hope that those of a particular political persuasion, coupled with personal ambition, are prevented from running free when it comes to AI development.

So, should you attend the programme? I would strongly recommend it to any senior leader or decision-maker, mainly because it provides the conceptual clarity and building blocks needed to lead technology-driven change effectively. Sharing knowledge with professionals from different sectors around the world, both on the programme’s forums and in person, helped cement the learning. The programme allowed me to move beyond the hype to a place where I can now offer a practical framework for managing AI in operation, ensuring that systems are continuously monitored to avoid the three pitfalls mentioned above.

Personally, it has been a huge step forward in my understanding of the future of work and the world around us. The programme has given me invaluable insight into AI technology and its ethical implications, enabling me to confidently offer advice on adoption while building safe, trustworthy, and human-centric AI systems.
