Beyond Gen AI: Experts explore the benefits of AI in finance and superannuation

When we started planning our latest Melbourne Expert Talks event, we didn’t anticipate bulldozers becoming a central AI metaphor. But that’s the beauty of bringing experts together to talk about technology and leadership – you never know where the conversation might lead!

With a recent Gartner poll finding that 55% of organisations globally are already piloting or in production with Generative AI, we wanted to discover whether Australia’s finance and superannuation sectors were investing in AI. At “Beyond Gen AI: Navigating how businesses realise the benefits of AI”, our experts shared insights into how organisations are embracing the potential of AI and the foundational steps they need to take to maximise their return on investment. As moderator of the panel, I’m delighted to share some of the discussions here.

Combining Artificial Intelligence with Human Intelligence

Gen AI continues to dominate technology conversations, but more organisations are understanding that its efficacy hinges on human oversight. Stephen Reilly, Chief Operating Officer at HESTA, illustrated this with a memorable bulldozer analogy: “No-one is surprised that a bulldozer can lift more dirt than a human with a shovel. But you still need a human to drive the bulldozer and decide where you want the dirt to go. It’s the same with AI. It’s no surprise that AI can do things faster than a human, but you still need a human in the process to get the best results.”

Claire Cornfield, Senior Executive Head of Customer Experience at La Trobe Financial, expanded on the importance of combining artificial intelligence with human intelligence, especially in organisations where trust is vital, such as financial services. “We need to decide what we want the technology to do, what we want our staff to do and where each can add value. But we’ll still need humans to be involved in highly emotive or sensitive areas, even though these are also the more challenging jobs.”

Our panel also highlighted AI’s potential to improve customer experience and create better outcomes for vulnerable people. For example, within healthcare we are seeing the potential for AI to use data sets to predict health problems and enable early intervention. Andrea Lymbouris, Head of Information Services at State Trustees, can see the potential for AI to provide more personalised customer interactions within her teams. “AI could support our consultants to access data about the client they are speaking to, when they last called and why, so the client doesn’t have to repeat all the information and we can give them the help they need quicker.”

Balancing AI’s risks and rewards

The finance and superannuation sectors, bound by regulatory constraints and financial responsibilities, can often be seen as cautious in their approach to new technologies. But even within this sector, each business will have a different appetite for risk, said Michael Collins, former Chief Information Security Officer at Judo Bank.  “Other businesses are going to run harder and run faster with AI –  they won’t have the sensitive data we do so they’re going to be able to take more risks. But within each organisation, it comes down to what your board and your senior management are comfortable with from a risk perspective.”

Stephen noted numerous potential AI applications in the competitive superannuation sector, such as personalised experiences and detecting anomalies in customer behaviour. “But, as with everything with technology, we have been very conscious with what we enable,” he said. “We want to ensure it adds value, optimise our use and ensure we keep it secure.”

Claire also raised concerns about AI’s impact on talent development if it is used to automate some tasks. “A lot of AI use cases are automating tasks that usually fall to entry-level roles,” she said. “But these tasks, like dealing with calls in a customer centre, give people the breadth of experience that they can take into their long-term financial services careers. If we take that work away, how are they going to get that experience?”

No doubt inspired by the CrowdStrike incident in the week before the event, the panel also stressed the importance of human intervention in AI systems. Michael said: “Good AI needs three things – confidentiality, integrity and security. But if an AI system went down, you would still rely on humans to be involved to fix it or maintain the service.”

For Andrea, the biggest challenge is creating a strategy for the business while AI is advancing so rapidly: deciding when to leverage the AI capabilities built into tools the business already procures, and when to take a wider approach. She added: “I think from a technology perspective we need to rapidly increase our skill set and expand our knowledge of AI.”

Alongside AI strategy, securing funding for AI projects can also be a challenge, with few established investment models currently available for organisations to use within business cases. Michael said: “You’ve got to be very clear about why you’re asking for money and what you want to do because the business is always making trade-offs. But it’ll again come back to risk appetite and where you can pivot funding from in your current strategy.”

The importance of data quality and security

I’ve worked in digital transformation for a long time and I’ve seen the same questions about risk come up with each new technology – I remember people being horrified when we first introduced APIs in banking, for example. But the difference I see with AI is that it is a technology that can be democratised: anyone can access tools like ChatGPT. During the panel, I asked whether this means that tasks we often put on the back burner, such as data cleansing or resolving our internal data permissions, now become consequential. The question of data quality is certainly one we hear from businesses looking to start working with AI, particularly organisations looking after sensitive information for their customers or clients.

Andrea said: “I think organisations are right to be thinking about protecting and securing the data. We need to be very mindful of it and put in place some additional risk controls.” Michael also echoed the data security sentiment, adding: “You need to understand your data, where it is and how you’re going to use it before you start just running to the sexiest thing that’s on the internet and trying to install it and see how it goes.”

But Stephen also cautioned organisations against waiting too long for their data to be perfect before embarking on an AI project. “Your data is never going to be perfect, so you have to figure out how to build in the margin for error for imperfect data. You have to drive forward. My encouragement is to test the quality of your data, overlay human intelligence onto your AI and embrace data governance people.”

Conclusion

We’d like to thank everyone who attended the event and our panel members for sharing their expert insights. 

We’re also delighted to announce that we will be matching the total amount raised from the event, boosting the final figure to $1,500 donated to the Aboriginal Investment Group’s Remote Laundry Project. This will power a laundry site for an entire year, giving remote Aboriginal communities access to free laundry services to improve health and social outcomes.

Watch out for details of our next Expert Talks event, and if you’re interested in exploring Gen AI in your organisation, contact the Equal Experts Australia team.

In today’s world, data is an important catalyst for innovation, significantly amplifying the potential of businesses. Given the ever-increasing volume of data being generated and the complexity of building models for effective decision making, it’s imperative to ensure the availability of high quality data. 

What is data quality? 

Data quality measures how well data meets the needs and requirements of its intended purpose. High quality data is free from errors, inconsistencies and inaccuracies, and can be relied upon for analysis, decision making and other purposes.

In contrast, low quality data can lead to costly errors and poor decision making, and impacts the effectiveness of data-driven processes. 

How to ensure data quality 

Ensuring high quality data requires a blend of process and technology. Some important areas to focus on are: 

  • Define standards: it’s important to define quality criteria and standards to follow. 
  • Quality assessment: regularly assess data against these criteria and standards using data profiling and quality assessment tools. 
  • Quality metrics: set data quality metrics based on criteria such as accuracy, completeness, consistency, timeliness and uniqueness. 
  • Quality tools: identify and set up continuous data quality monitoring tools.  
  • Data profiling: analyse data to know its characteristics, structure and patterns. 
  • Validation and cleansing: enable quality checks that validate and cleanse data in line with the defined criteria (see the sketch after this list). 
  • Data quality feedback loop: Use a regular quality feedback loop based on data quality reports, observations, audit findings and anomalies. 
  • Quality culture: Build and cultivate a quality culture in your organisation. Data quality is everyone’s responsibility, not just an IT issue. 
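To make the metrics and validation steps above more concrete, here is a minimal sketch in Python using pandas; the column names (customer_id, email, balance) and the rules are illustrative assumptions rather than a prescribed implementation.

```python
# A minimal sketch of the "quality metrics" and "validation and cleansing"
# steps using pandas. The column names and rules are illustrative assumptions,
# not a real schema.
import pandas as pd


def quality_metrics(df: pd.DataFrame) -> dict:
    """Compute a few simple data quality metrics for a DataFrame."""
    total = len(df)
    return {
        # Completeness: share of non-null values per column
        "completeness": (df.notna().sum() / total).to_dict(),
        # Uniqueness: share of distinct values in the assumed key column
        "customer_id_uniqueness": df["customer_id"].nunique() / total,
        # Validity: share of rows with a non-negative balance
        "balance_validity": float((df["balance"] >= 0).mean()),
    }


if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": [1, 2, 2, 4],
        "email": ["a@example.com", None, "c@example.com", "d@example.com"],
        "balance": [100.0, -5.0, 42.0, 0.0],
    })
    print(quality_metrics(sample))
```

Metrics like these can feed the data quality feedback loop: tracked over time, they show whether a dataset is drifting away from the agreed standards.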

Example Data Pipeline:

This diagram shows a basic data pipeline. It takes data from multiple source systems and moves it through stages such as RAW, STAGING and CONSUME. It then applies transformations and makes data available for consumers. To ensure the accuracy, completeness and reliability of the data (a minimal sketch of this layout follows the list below):

  • There is a data quality check in between the source systems and data product, which ensures that quality data is being consumed, and 
  • There is a data quality check in between the data product and consumers, which ensures that quality data is being delivered
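As a rough illustration of that layout, the sketch below models the two quality gates as simple functions; the stage names, record shape and checks are assumptions for illustration only.

```python
# A sketch of the pipeline above with data quality gates before and after the
# data product. Stage names, the record shape and the checks themselves are
# illustrative assumptions.
from typing import Callable, Dict, Iterable, List

Record = Dict[str, object]
Check = Callable[[Record], bool]


def dq_gate(records: Iterable[Record], checks: List[Check], stage: str) -> List[Record]:
    """Pass through only records that satisfy every check; report the rest."""
    passed, failed = [], []
    for record in records:
        (passed if all(check(record) for check in checks) else failed).append(record)
    if failed:
        print(f"[{stage}] {len(failed)} record(s) failed data quality checks")
    return passed


def run_pipeline(raw: List[Record]) -> List[Record]:
    # Gate 1: between the source systems and the data product (RAW -> STAGING)
    staging = dq_gate(raw, [lambda r: r.get("id") is not None], stage="source -> product")
    # Transformation owned by the data product (illustrative)
    consume = [{**r, "amount": float(r["amount"])} for r in staging]
    # Gate 2: between the data product and its consumers (STAGING -> CONSUME)
    return dq_gate(consume, [lambda r: r["amount"] >= 0], stage="product -> consumers")


if __name__ == "__main__":
    print(run_pipeline([{"id": 1, "amount": "10.5"}, {"id": None, "amount": "3"}]))
```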

Data quality checks can include the following (a brief Great Expectations sketch follows the list): 

  • Uniqueness and deduplication checks: Identify and remove duplicate records. Each record is unique and contributes distinct information. 
  • Validity checks: Validate values against domains, ranges, or allowable values.
  • Data security: Ensure that sensitive data is properly encrypted and protected.
  • Completeness checks: Make sure all required data fields are present for each record.
  • Accuracy checks: Ensure the correctness and precision of the data, and rectify errors and inconsistencies.
  • Consistency checks: Validate the data formats, units, and naming conventions of the dataset.
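Several of these checks map naturally onto expectations in a framework such as Great Expectations (covered in the next section). The snippet below is a hedged sketch using the classic pandas API (0.x); the column names, allowed values and ranges are illustrative assumptions.

```python
# A hedged sketch of uniqueness, completeness and validity checks expressed as
# Great Expectations expectations (classic 0.x pandas API). Column names and
# allowed values are illustrative assumptions.
import great_expectations as ge
import pandas as pd

df = ge.from_pandas(pd.DataFrame({
    "order_id": [1, 2, 3],
    "status": ["OPEN", "CLOSED", "OPEN"],
    "amount": [10.0, 25.5, 3.2],
}))

# Uniqueness / deduplication: each order_id appears only once
df.expect_column_values_to_be_unique("order_id")
# Completeness: required fields are populated for every record
df.expect_column_values_to_not_be_null("amount")
# Validity: values fall within an allowed set or range
df.expect_column_values_to_be_in_set("status", ["OPEN", "CLOSED"])
df.expect_column_values_to_be_between("amount", min_value=0)

# Evaluate every expectation registered on this dataset
results = df.validate()
print(results["success"])
```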

Tools/Framework to ensure data quality 

There are multiple tools and frameworks available to enable Data Quality and Governance, including: 

  • Alteryx/Trifacta (alteryx.com): A one-stop solution for enabling data quality and governance on data platforms. Its advanced capabilities enable best practices in data engineering. 
  • Informatica Data Quality (informatica.com): Offers a comprehensive data quality suite that includes data profiling, cleansing, parsing, standardisation, and monitoring capabilities. Widely used in enterprise settings.
  • Talend Data Quality (talend.com): Talend’s data quality tools provide data profiling, cleansing, and enrichment features to ensure data accuracy and consistency.
  • DataRobot (datarobot.com): Primarily an automated machine learning platform, DataRobot also offers data preparation and validation features, as well as data monitoring and collaboration tools to help maintain data quality throughout the machine learning lifecycle. 
  • Collibra (collibra.com): Collibra is an enterprise data governance platform that provides data quality management features, including data profiling, data lineage, and data cataloging.
  • Great Expectations (greatexpectations.io): An open-source framework designed for data validation and testing. It’s highly flexible and can be integrated into various data pipelines and workflows. Allows you to define, document, and validate data quality expectations.
  • dbt (getdbt.com): Provides a built-in testing framework that allows you to write and execute tests for your data models to ensure data quality.

Data quality in action 

We have recently been working with a major retail group to implement data quality checks and safety nets in their data pipelines. The customer has multiple data pipelines within its customer team, and each data product runs separately, consuming and generating different Snowflake tables.

The EE team could have configured and enabled data quality checks for each data product individually, but this would have duplicated configuration code and made it difficult to maintain. We needed something common that would be available to every data product in the customer’s foundation space.

We considered several tools and frameworks but selected Great Expectations to enable DQ checks for several reasons: 

  • Open source and free
  • Support for different databases based on requirement
  • Community support on Slack
  • Easy configuration and support for custom rules
  • Support for quality checks, Slack integration, Data Docs and more

We helped the retailer to create a data quality framework as a Docker image, which can be deployed to the Google Container Registry (GCR) and made available across multiple groups.

All the project-specific configuration, such as the Expectation Suite and Checkpoint files, is kept in the data product’s GCS buckets. This means each team can configure the checks it needs based on project requirements. Details of the GCS bucket are shared with the DQ image, which can then access the project-specific configuration from the bucket and execute the Great Expectations suite on Snowflake tables in one place.
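As a rough sketch of the first part of that flow, the snippet below shows how a container could download project-specific configuration from a GCS bucket with the google-cloud-storage client; the bucket name, prefix and local paths are hypothetical placeholders.

```python
# A hypothetical sketch of the DQ image pulling project-specific Great
# Expectations configuration (Expectation Suite and Checkpoint files) from a
# data product's GCS bucket. Bucket, prefix and paths are placeholders.
import os
from typing import List

from google.cloud import storage


def download_dq_config(bucket_name: str, prefix: str, local_dir: str = "/tmp/dq") -> List[str]:
    """Download every configuration object under `prefix` into `local_dir`."""
    os.makedirs(local_dir, exist_ok=True)
    client = storage.Client()  # uses the container's service account credentials
    local_paths = []
    for blob in client.list_blobs(bucket_name, prefix=prefix):
        if blob.name.endswith("/"):  # skip "directory" placeholder objects
            continue
        local_path = os.path.join(local_dir, os.path.basename(blob.name))
        blob.download_to_filename(local_path)
        local_paths.append(local_path)
    return local_paths


# Hypothetical usage: fetch the suite and checkpoint for one data product
# download_dq_config("my-data-product-bucket", "great_expectations/")
```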

Flow Diagram: 

The DQ framework can (sketched briefly after this list): 

  • Access the configuration files from the data product’s GCP Buckets
  • Connect to Snowflake Instance
  • Execute Great Expectations suites
  • Send the Slack notification
  • Generate the quality report using Data Docs (In progress)
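Putting those steps together, here is a simplified, illustrative sketch of what the execution inside the DQ image could look like, using the Great Expectations checkpoint API (0.15–0.17 style) and a plain Slack webhook call; the project directory, checkpoint name and webhook URL are assumptions rather than the retailer’s actual configuration.

```python
# A simplified, illustrative sketch of the DQ framework's execution steps:
# run a Great Expectations checkpoint against Snowflake tables and post the
# outcome to Slack. The project directory, checkpoint name and webhook URL are
# assumptions, not the retailer's actual configuration.
import great_expectations as gx
import requests


def run_dq_checks(ge_project_dir: str, checkpoint_name: str, slack_webhook_url: str) -> bool:
    # Load the Data Context from the configuration fetched out of the GCS bucket
    context = gx.get_context(context_root_dir=ge_project_dir)

    # Execute the configured checkpoint; the Snowflake connection details are
    # assumed to live in the datasource configuration inside the project directory
    result = context.run_checkpoint(checkpoint_name=checkpoint_name)

    # Send a simple Slack notification with the overall outcome
    status = "passed" if result.success else "FAILED"
    requests.post(slack_webhook_url, json={"text": f"DQ checkpoint '{checkpoint_name}' {status}"})

    return result.success


# Hypothetical usage inside the container's entrypoint:
# run_dq_checks("/tmp/dq/great_expectations", "snowflake_daily_checks", "https://hooks.slack.com/services/...")
```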

Quality assurance is a continuously evolving process and requires constant monitoring and audit. By doing this, we not only safeguard the credibility of the data but also empower our customer with the insights needed to thrive in a data-centric world.