Background
The prevalence of AI in all areas of commercial, as well as personal, usage is undeniable. And that prevalence is only going to increase.
This prevalence is clearly driven by cost and delivery efficiencies and, perhaps above all, by the need to cast as comprehensive a net as possible over a vast universe of information.
One clear area of potential application, precisely for the above reasons, is financial advice. We are primarily talking here about the development of AI engines in the financial services sector hosting AI models designed to assist clients in their financial services journey, without being dispensers of financial advice.
On this front, it is ironic that a person seeking some form of guidance in relation to their financial affairs can go to a general AI model and obtain output which is not regulated, yet if they access ostensibly the same service from a regulated (AFSL) entity, they will have a very different journey, for the reasons we canvass in this article.
Regulatory "wrinkles"
Personal financial advice
The main regulatory "wrinkle" is of course the risk that an AI model operated by a financial product issuer or advice licensee will be giving, or deemed to be giving, personal advice. This risk materialises because:
- The statutory definition of "financial advice" has a subjective limb which, applied to the present context, distils into whether the AI output contains a recommendation or statement of opinion (or report) which is intended to influence a person in making a decision in relation to a particular financial product. One could conceivably argue that an AI model is not necessarily intending to have this effect. However, the statutory definition also contains a second objective limb, based on whether a recommendation, statement of opinion or report could reasonably be regarded as being intended to have such an influence.
- The statutory definition of "personal advice" would be triggered where the AI model has taken into account one or more of the person's objectives, financial situation or needs. The ASIC v Westpac decision in the High Court demonstrates that it is sufficient for the personal advice regulation to be triggered if just one of the trio (of the person's objectives, financial situation or needs) has been considered.
- Realistically, the relevant licensed financial product issuer or adviser is likely to have personal information in its possession which could readily form the basis of a reasonable expectation on the part of the customer that their personal information has been taken into account.
- And indeed, should the individual be called on to input personal information into the AI "genie" in order for it to provide output, then the first limb of the definition of personal financial product advice will also likely be activated.
The major glitch where a personal advice obligation is activated is the sheer difficulty an AI model would face in complying with the best interests obligation under Chapter 7 of the Corporations Act. For example, an AI model is unlikely to be designed to conduct interrogatory functions with the user to ensure that the personal advice is appropriate and in the best interests of the user (which is more a potential feature of other digital tools such as calculators).
Inaccurate output
Another not insubstantial risk of the use of an AI model in this context is that the model comes up with a somewhat inaccurate, incomplete or otherwise unintelligent response. This again is a real risk, as we know that AI models can tend to get creative (as in spurting out fictitious material, fabulous or fable-like), if not downright funky (otherwise referred to as "hallucinations"). And there is little to no room for funk in the regulation of financial services. Undoubtedly, the provider of the AI model will have potential liability in respect of any such funk which may emerge.
That liability might arise in one or more ways: misleading or deceptive conduct, liability in negligence or, where the provision of the AI model is seen to be, or be part of, a financial service, potential liability in respect of licence conditions, such as the obligation to act efficiently, honestly and fairly.
Does the provision of an AI model constitute a financial service?
We have previously canvassed this area of law. Because the Corporations Act defines a financial service by reference to specific activities (which include the provision of financial advice), the starting point should be a delineated, narrower view of the ambit of the concept of a financial service. However, inevitably, courts have, albeit perhaps unconsciously, tended to take a more "spakfilla" approach, viewing the concept of a financial service more holistically.
Of course, much of the analysis will inevitably depend on the specific activities under consideration.
Marketing content in the AI model
While it is suggested that the AI model should steer clear of personal advice, it is worth exploring how the AI would navigate the different pathways of the provision of information only on the one hand, versus general advice on the other.
Before we drill down into this discussion, a cautionary note around personal advice should be sounded. In a human interaction, regulatory guardrails would normally be put in place, usually in the form of scripting designed to ensure boundaries between the provision of information only, the provision of general advice and the provision of personal advice.
Some, if not many, of these guardrails need to bring to bear a high degree of sophistication, not just in formulating the delineation between these paradigms in the patter of the adviser, but also in pre-empting the questions a client could ask which, either explicitly or implicitly, would trigger personal advice if responded to by the adviser. For example, questions like:
- "is this product actually advantageous to someone like me?";
- or even, "do you think this product could really be of assistance to me?".
Usually, scripting would use a combination of disclaimers and guardrails to avoid personal advice being activated. That said, a degree of judgment is usually relied on in the role of the adviser to avoid the inadvertent provision of personal advice.
This kind of judgment is going to be difficult to program into an AI model. Even if it could be done, the provision of constant disclaimers could diminish the utility of the output.
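To make the point concrete, a minimal sketch of one such guardrail is set out below. It assumes a simple keyword- and pattern-based screen applied to the user's question before the model responds; the function names, trigger phrases and redirect wording are purely illustrative assumptions, and any real implementation would need far more sophisticated classification (and legal sign-off).

```python
import re

# Illustrative patterns only: phrases suggesting the user is asking the model
# to weigh their own objectives, financial situation or needs.
PERSONAL_ADVICE_TRIGGERS = [
    r"\bshould i\b",
    r"\b(for|to) (me|someone like me)\b",
    r"\bmy (goals|objectives|needs|circumstances|situation|income|debts)\b",
    r"\bis this (right|suitable|appropriate) for me\b",
]

INFORMATION_ONLY_REDIRECT = (
    "I can only provide general information about this product. "
    "I can't take your personal circumstances into account. "
    "For advice tailored to you, please speak to a licensed financial adviser."
)


def screen_question(question: str) -> tuple[bool, str | None]:
    """Return (blocked, redirect_message) for a user question.

    A question is blocked if it appears to invite personal advice, i.e. a
    response that would take the user's own objectives, financial situation
    or needs into account.
    """
    lowered = question.lower()
    for pattern in PERSONAL_ADVICE_TRIGGERS:
        if re.search(pattern, lowered):
            return True, INFORMATION_ONLY_REDIRECT
    return False, None


if __name__ == "__main__":
    for q in [
        "What fees does this product charge?",
        "Is this product actually advantageous to someone like me?",
    ]:
        blocked, _redirect = screen_question(q)
        print(q, "->", "REDIRECT" if blocked else "ANSWER NORMALLY")
```

As the sketch suggests, the hard part is not issuing the redirect but deciding reliably when a question has implicitly invited personal advice, which is precisely the judgment a human adviser exercises.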
At the same time, the relevant product issuer or financial adviser providing the AI model will want to say something about the advantages or virtues of a particular product line, unless the AI model proceeds along a purely educational or product-agnostic pathway. In this sense, the assumed objectivity we have grown accustomed to where AI is used purely as a knowledge tool would need to be supplemented or adapted to accommodate marketing content.
There could be various ways of achieving this; for example, through the AI model being able to refer to, and extract from, the content of a product issuer's product disclosure statement. That could involve the extraction of factual information only pertaining to the relevant financial product, or it could involve the extraction of content which constitutes an express or implied recommendation of the product in the form of general advice.
As we know, discussion of the merits of a particular financial product can easily fall into the general advice concept.
From there, one would need to be sure that the AI model cannot amalgamate such a recommendation with any learned knowledge of the client's personal circumstances, since that combination would trigger personal advice.
One potential solution could be to ensure that only "static" AI models are used (i.e. those that operate on a specific dataset without the ability to learn or change their behaviour based on interactions with users). However, this approach poses its own challenges: in an area such as financial advice that is constantly evolving, how can you ensure users are getting the most up-to-date and correct information if the AI model is not continually learning? Moreover, this limited AI model may not meet the provider's commercial objectives.
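By way of illustration only, the sketch below shows one way such a "static" configuration might be expressed: the model is grounded solely in a frozen, versioned snapshot of approved product content (for example, extracts from a product disclosure statement), no client profile or conversation history is fed back into it, and every response is stamped with the snapshot date so its currency is transparent. The class and field names (and the sample values) are assumptions made for the purposes of the example, not a description of any particular product.

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class ProductContentSnapshot:
    """A fixed, versioned dataset of approved product content (e.g. PDS extracts)."""
    version: str
    as_at: date
    documents: tuple[str, ...]  # approved factual extracts only


@dataclass
class StaticAdviceEngineConfig:
    """Configuration for an information-only, non-learning AI model."""
    snapshot: ProductContentSnapshot
    learn_from_interactions: bool = False   # never updated from user chats
    use_client_profile: bool = False        # client data is never passed to the model
    disclaimer: str = (
        "This tool provides general information only and does not take "
        "your personal circumstances into account."
    )


def build_response(config: StaticAdviceEngineConfig, model_output: str) -> str:
    """Wrap the model's output with the disclaimer and the snapshot date,
    so the user can see how current the underlying content is."""
    return (
        f"{config.disclaimer}\n\n"
        f"{model_output}\n\n"
        f"[Based on product content as at {config.snapshot.as_at.isoformat()}, "
        f"version {config.snapshot.version}]"
    )


if __name__ == "__main__":
    snapshot = ProductContentSnapshot(
        version="2024-06",
        as_at=date(2024, 6, 30),
        documents=("Fees and costs: ...", "How the product works: ..."),
    )
    config = StaticAdviceEngineConfig(snapshot=snapshot)
    print(build_response(config, "The product's administration fees are set out in the PDS."))
```

Stamping the output with the snapshot date does not cure the staleness problem, but it at least makes the currency of the underlying content visible to the user and to the provider's compliance function.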
Additionally, one would need to consider the misleading or deceptive conduct aspects of such an interface. Would the concept of non-literal representations (i.e. "mere puffery") survive or work the same way in the AI world? For example, what would be the effect where the AI model emits content along the lines of "this product is the best product in the universe", or statements in the realms of funk such as "Elvis would only have used this product"?
An element underpinning this theme is whether AI models are supposed to be more clinical or encyclopaedic in nature, such that users would be more likely to expect that they dispense an objective, scientific view of the world.
Can disclosure or disclaimers clarify any regulatory uncertainty?
This is of course the sixty-four dollar question.
While it is difficult to predict exactly how a court would opine on these issues, our view is:
- where the AI model does not actually take any of a person's personal circumstances into consideration, the use of a disclaimer is relevant to the second objective limb of personal financial advice; and
- this is because a clear disclosure that the AI model is not intending to, and does not, fulfil these functions (and provided this is not counteracted by what the AI model is actually doing) is an important factor in terms of what a client could reasonably expect.
Of course, as noted above, all will ultimately turn on the specific activities and output of the AI model. Similarly, in the case of potential inaccuracies or incomplete output spawned by the AI model, disclosures could be designed which would qualify any such liability.
However, a couple of observations should be made in this context.
As flagged earlier, forms of disclaimer may need to be so dramatic that the utility of the AI model is drastically diminished; for example, disclaimers such as:
The output of this tool may be inaccurate, incomplete or otherwise inappropriate for your desired usage and therefore should not be relied on. The provider will not be liable for any loss or harm whatsoever caused by its usage.
Hopefully a leaner type of disclaimer could be acceptable, such as:
Users accept that this tool is for guidance purposes only, does not constitute financial advice in relation to its users and does not take their personal circumstances into account, and that the provider does not warrant, and will not be responsible for, the accuracy or completeness of the output received.
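Purely by way of illustration, if a leaner disclaimer of this kind were used, the provider would still want to be able to evidence that the user actually saw and accepted it before any output was generated. The sketch below assumes a simple in-memory acknowledgement record with a hypothetical user identifier; a real system would persist this record and tie it to the session.

```python
from datetime import datetime, timezone

LEAN_DISCLAIMER = (
    "Users accept that this tool is for guidance purposes only, does not "
    "constitute financial advice, does not take their personal circumstances "
    "into account, and that the provider does not warrant and will not be "
    "responsible for the accuracy or completeness of the output received."
)

# In-memory record of acknowledgements; a real system would persist these
# so that acceptance can be evidenced later.
_acknowledgements: dict[str, str] = {}


def require_acknowledgement(user_id: str, accepted: bool) -> bool:
    """Record the user's acceptance of the disclaimer before any output is shown.

    Returns True if the tool may proceed for this user.
    """
    if not accepted:
        return False
    _acknowledgements[user_id] = datetime.now(timezone.utc).isoformat()
    return True


if __name__ == "__main__":
    if require_acknowledgement("user-123", accepted=True):
        print(LEAN_DISCLAIMER)
        print("Acknowledged at:", _acknowledgements["user-123"])
```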
What is the role in financial services of "real" advisers and "reality" in the context of the unreality of artificial intelligence?
It may be obvious that the regulatory wrinkles that we have identified will, in the absence of legislative change, mean that AI models are unlikely to be anywhere near as autonomous as self-driving cars for the purposes of providing financial advice.
Aside from the human steering implicit in their construction and launch, they, by their design, will not supplant the role of the human adviser.
But by the same token, they should be able to play a role in supplementing the advice functions. First, they can provide a directional pointer to the client, particularly in the sense of facilitating the client's access to information, enabling them to better interface with an adviser. Second, they should be capable of being used in tandem with an adviser, either concurrently with, or subsequent to, the output being provided to the client via the AI model. There should, in theory, be efficiencies, including cost efficiencies, resulting from this synergy.
It might be argued that this conclusion will not always ring true and that, as AI progressively improves, superintelligent AI models, such as "Artificial General Intelligence" or "AGI", will emerge which can truly bridge the gap between machine and human, in a sense reaching the full potential of AI; in other words, releasing the AI genie from the bottle.
But be that as it may, the role of the human financial adviser is presently very safe, not just because many AI models will not be able to readily "do financial advice" under their present functionality, but also because of the functional boundaries imposed by the current (and foreseeable) regulatory regime.
Note 1: This article has not been written by, or contributed to by, an AI model or bot.
Note 2: Do stay tuned for our follow-up article: What is the future of AI advice engines under the current regulatory regime? There, we will explore, among other things, whether the law will or should allow the content and operation of an AI engine to operate separately from the provider of the engine for the purposes of attributing knowledge of the client's personal circumstances.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.