Stuart Freer, Equal Experts Alumnus

Gen AI | Wed 28th February, 2024

AI: Fear or future? Australia’s industry leaders share their insights

AI continues to dominate the technology conversation, but it isn’t new. Traditionally the preserve of back-room developers in tech companies, the technology has been democratised by the rise of gen AI.

As more people and businesses begin to use AI to automate the mundane or increase efficiencies, concerns about the risks of AI – no doubt aided by the many apocalyptic sci-fi screen depictions of AI – have also risen. 

So is AI the future of business growth or something that needs to be feared and restricted? This is the question that our latest Expert Talks event, held in Melbourne, sought to address. 

We invited experts from the worlds of finance, cyber security, software, health tech, telecoms and entertainment to share their insights, explain how they are using AI and debate the future of the technology. Our sell-out AI: Fear or Future? event was a huge success and, as MC of the panel, I’m delighted to share some of the exclusive insights here.

AI enabling a better future

Data already plays a significant role within healthcare, but for Sheena Peeters, GM Product and Technology at SiSU Health Group, there is an exciting opportunity to leverage data for AI-driven preventative healthcare. She said: “We can capture a vast array of information about patients – demographic, environmental, etc. – and use AI to predict, for example, whether a child is particularly at risk of a certain type of disease and whether there is early intervention or prevention available.”

Similarly, within gambling entertainment, AI is already helping to keep users safe, by using predictive models to understand customers and their behaviours better. Niall Keating, GM Technology, Data Platforms and Applications at Sportsbet, said: “We use deep learning to get a probabilistic forecast of our customers so we understand a customer’s normal behaviour. If a customer goes outside that normal behaviour we can take action, such as asking them to set a deposit limit or stop the deposit and get them to talk to a human.”
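Sportsbet’s actual models aren’t public, but the general idea behind this kind of behavioural monitoring can be sketched simply: build a statistical picture of a customer’s normal deposits and flag anything that falls well outside it. The sketch below uses a basic z-score baseline rather than deep learning, and the customer history, threshold and function name are hypothetical.

```python
# Minimal sketch of probabilistic behaviour monitoring (illustrative only).
# A real system would use deep learning and far richer features; here we just
# model a customer's "normal" deposit range from history and flag outliers.
import statistics

def flag_unusual_deposit(history, new_deposit, z_threshold=3.0):
    """Return True if new_deposit deviates strongly from the customer's history."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero spread
    z_score = (new_deposit - mean) / stdev
    return z_score > z_threshold

# Hypothetical customer with a steady deposit pattern
history = [50, 60, 55, 45, 70, 65, 50]
print(flag_unusual_deposit(history, 60))   # False: within normal behaviour
print(flag_unusual_deposit(history, 800))  # True: prompt a deposit limit or a conversation
```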

More generally, AI is also being used by companies to help their employees be more efficient, for example enabling mobile workers to use speech-to-text technology to make notes, draft emails or create documents while on the move between meetings. 

Bias, regulations and the role of organisational culture

Our panel debated some of the risks and fears associated with AI, but as Sheena explained, a fear of tech isn’t new. She said: “As technology has evolved over the years there has always been fear. There was fear that the internet was going to take over the world and rule us all. And then there was fear of phones and smartphones. There is always going to be this fear of change. The fear of privacy, data privacy, data sharing – these things have been around for a while, not just with AI.” 

Data bias in AI is a widespread risk that needs to be addressed. Andy Canning, Chief Technology Officer at Equal Experts AUNZ, said: “Bias in AI is driven by the data, it’s as simple as that. If you ask it to generate an image of a doctor you will get an image of a white, 40-50-year-old male wearing a big long white lab coat. And if you ask it to create a single banana it will create an image of two bananas, because the data it knows is that bananas grow in pairs. To get unbiased results it’s critical to remove the bias in the data.”
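One practical way to act on Andy’s point is to audit the training data before using it: count how each group is represented and rebalance if the counts are skewed. The sketch below is illustrative only; the records, field names and naive oversampling step are assumptions, not a description of any panellist’s pipeline.

```python
# Illustrative audit of representation in a (hypothetical) training set,
# followed by a deliberately naive rebalancing step.
from collections import Counter
import random

records = [
    {"image": "doctor_001.jpg", "gender": "male"},
    {"image": "doctor_002.jpg", "gender": "male"},
    {"image": "doctor_003.jpg", "gender": "male"},
    {"image": "doctor_004.jpg", "gender": "female"},
]

counts = Counter(r["gender"] for r in records)
print(counts)  # Counter({'male': 3, 'female': 1}) -- the skew lives in the data

# Oversample under-represented groups so each contributes equally.
target = max(counts.values())
balanced = []
for group in counts:
    group_records = [r for r in records if r["gender"] == group]
    balanced += random.choices(group_records, k=target)

print(Counter(r["gender"] for r in balanced))  # now equal counts per group
```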

Jack Latrobe, Enterprise Architect and Program Lead at Telstra, highlighted the importance of data annotation to remove bias from the AI model data sets. “Data annotation involves humans manually filtering through the data that goes into these models and making sure it is appropriate,” he said. “There is still a human involved who has to keep these systems safe.”
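To make the annotation idea concrete, the shape of a human-in-the-loop filter might look like the sketch below: an automated screen escalates doubtful records and a person makes the final call on what reaches the training set. The blocked terms, record texts and reviewer stand-in are all hypothetical.

```python
# Illustrative sketch of human-in-the-loop data annotation. Real annotation
# pipelines use dedicated tooling, but the shape of the process is the same.
BLOCKED_TERMS = {"offensive", "private address"}

def needs_human_review(text: str) -> bool:
    """Automated screen: escalate anything that contains a blocked term."""
    return any(term in text.lower() for term in BLOCKED_TERMS)

def filter_training_data(records, human_approves):
    """Keep records that pass the screen, or that a human reviewer approves."""
    approved = []
    for text in records:
        if not needs_human_review(text) or human_approves(text):
            approved.append(text)
    return approved

# Example run with a stand-in reviewer that rejects everything escalated to it.
records = ["Great service, thanks!", "Here is my private address: ..."]
print(filter_training_data(records, human_approves=lambda text: False))
# -> ['Great service, thanks!']
```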

Removing humans from the equation and putting jobs at risk is another criticism levelled against AI. However, Mirella Robinson, former CIO/CTO of Cbus Super, believes that roles connected to AI are going to expand and grow in the future. “Data quality is very human intensive, so roles around data and data quality are going to proliferate. And they are going to have to be diverse, in skills, in mindset, in demographics.”

Similarly, within cybersecurity, where 8.6 billion attacks need to be blocked each day, humans are still needed to investigate the most complex cyber-attacks. Lisa Sim, Head of Marketing – JAPAC at Palo Alto Networks, added: “A part of what we have to do is to continue to educate the world to have safe and responsible use of technology, and grow our industry with the experts that we need to find and stop complex cybersecurity issues.”

Responsible use of technology and AI

Responsible use of technology and AI regulation was also discussed; in Australia there are currently no specific regulations surrounding AI, but the panel argued that doesn’t mean businesses should act without care or concern. Sheena said: “I don’t want to wait until we have regulations in place and then have everyone trying to quickly meet those standards. Acting responsibly and ethically is something that we have to take seriously and embed in the day-to-day functions of the business. It’s imperative for us, particularly as leaders, that we embody those standards and processes before formal regulations even exist.”

Niall also believes that organisational culture plays a role in how organisations adapt to AI. “I think it comes down to mindset and whether you have a culture of innovation. People change. Technology changes. And it’s up to the culture of your company to embrace that change for the future,” he said.

Supporting leadership teams to understand and use AI safely and effectively

Prioritising AI education at all levels is also something organisations need to consider now that more sections of the workforce can access AI tools. Lisa said: “The approach to risk is the same as when it was cloud computing back in the day, or AI, or quantum computing, or whatever comes next. To demystify the risk of AI, leaders need to understand AI. Understand what data it is pulling and what it could touch in the organisation.”

Mirella agreed that boards and leadership teams in particular need the right support to make the right decisions around AI. She said: “Boards appreciate that the hype around AI is going faster than they can keep up with, and they need help to understand it. Education, based on returns and risk, is critical.”

This approach was also shared by Jack, who highlighted that different teams need different educational approaches. “At Telstra, we’re trying to train and teach engineers to use AI in ways that are safe, practical and efficient,” he said. “And for the tech leadership, we’ve run different style sessions to help them better understand the risks and opportunities of AI.”

For Sheena, it’s important to have a balanced conversation with boards around risk management and first understand what problem you are trying to solve with AI. She added: “It’s OK to say to leaders, actually we need to do some experimentation and explore this in a small, confined way. We may not get the results, or we may. But we get a better understanding of the problem we’re asking AI to help us solve.”

We’d like to thank everyone who attended the event and in particular our panel members for sharing their expert insights. We’re already planning our next talk in Sydney and we look forward to seeing more of you there. 

Equal Experts has the expertise to help your business unlock the potential of AI and realise untapped revenue. Reach out to us if you want to explore Gen AI in your organisation.

Get in touch with us to find out how we can help you get the best from AI.