AI is set to disrupt the asset management and superannuation sector, bringing both revolutionary capabilities and a raft of new risks. CROs must take a hands-on role to ensure their institutions manage those risks while still making the most of the opportunities AI presents.

Asset managers and superannuation funds are harnessing AI

AI has the capacity to revolutionise how funds are managed, member services are delivered, and core processes are run. 

Already, asset managers and superannuation funds are using AI for portfolio optimisation, risk assessment and predictive analytics. Firms are applying AI algorithms to investment research, analysing market trends and identifying opportunities to refine their strategies and support more efficient, better-informed decision-making. Other use cases we’ve seen in the market include:

  • Marketing and client service enablement: AI can track and analyse client interactions and behaviours to identify preferences and anticipate future actions, leading to more personalised and effective customer engagement strategies.
  • Legal and compliance: AI can continuously monitor transactions and communications, enabling 100% sampling of customer interactions whilst detecting suspicious activities and flagging them for further review. It is also being used to automate a small proportion of legal and compliance work, such as contract reviews.
  • Complaints and disputes: AI can be used to prioritise complaints based on their sentiment, urgency and complexity, and to route each one to the most appropriate team or individual for resolution (a simple illustration follows this list).
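By way of illustration, the sketch below shows how complaint signals such as sentiment, urgency and complexity might be combined into a single priority score and a routing decision. It is a minimal, hypothetical example: the weights, thresholds and queue names are assumptions, and in practice the sentiment, urgency and complexity inputs would come from an AI model scoring the complaint text itself.

```python
from dataclasses import dataclass

@dataclass
class Complaint:
    text: str
    sentiment: float   # -1.0 (very negative) to 1.0 (very positive), e.g. from a sentiment model
    urgency: float     # 0.0 (low) to 1.0 (high)
    complexity: float  # 0.0 (simple) to 1.0 (complex)

def triage_score(c: Complaint) -> float:
    """Combine the signals into one priority score; higher means handle sooner.
    The weights here are illustrative assumptions, not a recommended model."""
    negativity = max(0.0, -c.sentiment)  # only negative sentiment raises priority
    return 0.5 * negativity + 0.3 * c.urgency + 0.2 * c.complexity

def route(c: Complaint) -> str:
    """Direct the complaint to a queue based on its score (hypothetical thresholds and queues)."""
    score = triage_score(c)
    if score >= 0.7:
        return "senior-resolution-team"
    if score >= 0.4:
        return "standard-queue"
    return "self-service-followup"

complaints = [
    Complaint("Fees were deducted twice from my account", sentiment=-0.8, urgency=0.9, complexity=0.4),
    Complaint("How do I update my address?", sentiment=0.1, urgency=0.2, complexity=0.1),
]
for c in complaints:
    print(f"{c.text[:40]!r} -> {route(c)} (score={triage_score(c):.2f})")
```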

But AI opportunities come with inherent risks

Although AI brings significant opportunities, it also raises concerns about data privacy, bias and cybersecurity. Boards understand that any AI-related incident can cause significant reputational damage and erode customer trust.

Robust regulatory frameworks will eventually be put in place, but current Australian legislation is not sufficient to protect against AI ‘harm’, and it will take time to draft and embed appropriate legal and regulatory frameworks. Like everyone else, regulators are still learning about and grappling with AI, and arriving at standards that can be enforced globally will require considerable time, consultation and international collaboration.

Also, while global regulators broadly agree that frameworks must ensure AI delivers both economic and societal benefits while minimising risk, jurisdictions diverge on how best to regulate AI. That divergence will create additional compliance challenges for asset managers operating across multiple jurisdictions.

CROs must both manage AI risk and facilitate AI innovation   

As CROs adapt existing risk frameworks to the new reality of AI, their mandate remains effectively the same: support the business, enable value creation and protect the value the business has created. 

Many elements of the task are familiar. CROs are playing a crucial role in helping their organisations govern AI by ensuring the risks associated with AI implementation are identified, assessed and managed effectively. CROs are also responsible for drafting risk appetite statements, establishing comprehensive controls and oversight for the AI risks identified, and developing risk management frameworks that address the ethical, legal and operational implications of AI technologies within the organisation. This includes identifying and managing the risks:

  • Introduced by vulnerabilities in current AI-infused tools, systems and processes.
  • That AI produces unreliable, unfair or incorrect outputs due to bias, malfunction or hallucination.
  • Associated with privacy breaches, or intellectual property and copyright theft linked to the use of AI.
  • In the supply chain where third-party suppliers use AI to deliver outcomes for funds.
  • Of colleagues using AI tools, whether supported by the organisation or not, to fulfil their roles.

Many CROs are proactively collaborating across their organisations to establish policies, procedures and compliance frameworks that promote responsible and compliant use of AI aligned with the organisation's risk appetite and current and incoming regulatory standards. 

They are also investing time in educating themselves and their boards, not only about the nature of the risks AI presents but also about the importance of safely enabling AI-based innovation. When it comes to AI adoption, the tone from the top matters. Organisations cannot afford to be afraid of AI; instead, institutions should focus on how to safely embrace its efficiency, accuracy and customer experience upsides. Leaders must build a ‘test and learn’ culture, emphasising that innovation will be supported and encouraged as long as AI is used within appropriate guardrails.

This is where the CRO comes in. In a clear evolution of the traditional role, CROs must proactively support their businesses to leverage the opportunities that AI brings. This is all about crafting principles and policies to enable AI innovation in a safe and compliant environment.

What steps can CROs take now?

Don’t wait for certainty. AI regulations in Australia are a way off and use cases are evolving rapidly. Start putting in place the enabling infrastructure needed to manage AI risk and opportunity now.

  1. Define and establish a responsibility framework to help the organisation understand its capabilities and limitations in pursuing opportunities with new AI technology. Ensure appropriate personnel are in place to perform control, oversight and monitoring activities.
  2. Establish your firm's risk appetite and get it approved by your Board. Update and refine your firm’s policies, standards and guidelines for AI use and adoption accordingly. This is less about creating policy documents and more about clearly defining what it means to develop and deploy AI solutions, whether they are built in-house or sourced from third parties.
  3. Focus on evolving your control framework and control capability to explicitly consider AI risks. As new AI risks emerge, you’ll need to evolve your existing controls and, in parallel, develop entirely new ones.

AI is full of opportunity for the financial services sector, but how you approach and manage the risks will determine the safety and success of your AI adoption.

This topic and other developments are discussed at Baringa’s CRO Symposium events attended by the CROs of many of Australia’s largest superannuation funds.

Get in touch

If you'd like to know more about this topic or Baringa's CRO Symposium, please contact us.
