Legal Compliance
April 20, 2026

Use of Artificial Intelligence (AI) in family law proceedings

AI is changing the way in which lawyers work. It offers relief from tedious tasks such as indexing and reviewing disclosure, creating chronologies, summarising lengthy documents, tracking critical dates and writing simple letters and emails, but it also offers opportunities to speed up more complex tasks like legal research and drafting court documents.

Clients are using AI and expect their lawyers to use it too. With many law firms already adopting it, lawyers who are uncomfortable with AI risk being left behind.

The use of AI is not, however, without adverse effects. AI can make errors of fact or law and even generate hallucinated case citations. The use of AI is predicted to result in fewer jobs for law students, legal assistants and junior lawyers, thereby reducing training opportunities. Clients may expect a reduction in legal costs, although the cost of AI software can be high and checking AI-created work still takes time.

Some clients are able to use AI to give clearer instructions, whilst others use AI to create lengthy “fluff” which recites principles but lacks the substance of clear instructions. Clients using AI to obtain legal advice may believe that the advice given by AI is more accurate than that given by a lawyer and they may place less value on the work done by lawyers.

Amidst all of this, the use of AI is being considered by the courts, including the Federal Circuit and Family Court of Australia (FCFCOA). A Practice Direction is expected soon.


The use of AI in family law matters

There have been a number of cases in the FCFCOA where legal practitioners have been referred to regulatory authorities for the improper use of AI in proceedings under the Family Law Act 1975 (Cth) (FLA). This has commonly occurred where documents have been filed which contain hallucinated or erroneous citations.

The difficulties which arise from using AI in court documents were summarised by the Full Court of the FCFCOA in Helmold & Mariya (No 2) (2025) FLC ¶94-272 (at [8]-[9]), a case where a self-represented litigant had used AI in the preparation of a Summary of Argument. Justices Aldridge, Campton and Christie referred to the ethical issues that AI usage may raise, including misleading the court or an opponent, failing to verify research and breaching Pt XIVB of the FLA when client details are entered into open AI programs:

“Legal professionals have specific ethical obligations to ensure that the written material placed before the Court is accurate. As Bell CJ observed in May v Costaras [2025] NSWCA 178, all litigants (including those who appear on their own behalf) are under a duty not to mislead the Court or their opponent. Reliance upon unverified research generated by AI has the capacity to confuse, to create unnecessary complexity, to result in wasted time and to mislead the Court and other parties.

A specific issue arises in the context of family law litigation, by operation of Pt XIVB of the Family Law Act 1975 (Cth) (‘the Act’). If a person inputs court documents into an open AI program, we consider that this may have the potential to fall foul of the provisions which prohibit communication of an account of proceedings to the public or a section of the public. In a similar vein, input of documents arising out of the proceedings into a generative AI program which stores, collates and replicates data may waive privilege or fall foul of the requirements that certain matters be treated as commercial in confidence. These issues warrant extreme caution.”

Dayal

In Dayal [2024] FedCFamC2F 1166, Judge A Humphreys referred the conduct of a legal practitioner to the Office of the Victorian Legal Services Board and Commissioner. Dayal has been referred to favourably in a number of cases including by the Full Court of the FCFCOA in Mertz & Mertz (No 3) (2025) FLC ¶94-285; [2025] FedCFamC1A 222, discussed below.

The solicitor in Dayal tendered to the court a list and summary of legal authorities that did not exist. The solicitor informed the court that the list and summary were prepared using an AI tool incorporated in legal practice management software. The solicitor acknowledged he did not verify the accuracy of the information generated by the research tool before submitting it to the court.

Judge A Humphreys discussed the risks of using AI, saying (at [10]-[12]):

“The use of technology is an integral part of efficient modern legal practice. At the frontier of technological advances in legal practice and the conduct of litigation is the use of AI. Whilst the use of AI tools offer opportunities for legal practitioners, it also comes with significant risks.

Relevantly to this case, the USA District Court case of Mata v Avianca Inc (Mata v. Avianca, Inc, 678 F.Supp.3d 443 (S.D.N.Y. 2023)) drew worldwide attention to the risk of relying on generative AI for research purposes in litigation without independent verification. In that case, attorneys of a firm who relied on generative AI to prepare legal submissions which were filed referring to non-existent cases, and initially stood by the submissions when called into question by the court, were found to have abandoned their professional responsibilities and sanctioned. The USA District Court outlined the potential harms flowing from the filing of bogus submissions in its judgment as follows (at [448]):

‘Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavours. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.’

The potential harms identified by the USA District Court apply to the reliance on non-existent authorities in this court.”

Her Honour referred to the guidelines issued by courts other than the FCFCOA and then referred to the professional standards required of legal practitioners, as the solicitor had potentially breached rr 3.1, 19.1, 4.1 and 5 of the Legal Profession Uniform Law Australian Solicitors’ Conduct Rules 2015. The duties of Victorian solicitors (and other legal practitioners covered by the Uniform Law) include:

“(a) The paramount duty to the court and to the administration of justice, [r 3.1] which includes a specific duty not to deceive or knowingly or recklessly mislead the court [r 19.1];

(b) Other fundamental ethical duties, including to deliver legal services competently and diligently [r 4.1]; and

“(c) To not engage in conduct which is likely to diminish public confidence in the administration of justice or bring the legal profession into disrepute [r 5].”

Her Honour acknowledged the genuineness of the apology offered by the solicitor and the steps he had undertaken to mitigate the conduct, but concluded that she needed to refer the solicitor to the regulatory body. She also considered (at [21]) it was in the public interest for the regulatory body:

“to be aware of the professional conduct issues arising in this matter, given the increasing use of AI tools by legal practitioners in litigation more generally.”

Mertz & Mertz (No 3)

In Mertz & Mertz (No 3) (2025) FLC ¶94-285; [2025] FedCFamC1A 222, the Full Court of the FCFCOA (Aldridge, Carew & Behrens JJ) set out the reasons for making an earlier costs order of $36,955 in favour of the respondent to an appeal. The Full Court made a further order referring the conduct of the appellant’s legal representatives to the relevant professional bodies. By consent, an order was made that the appellant’s former solicitor pay the respondent a further $10,000 as costs thrown away for correcting the errors generated by AI.

AI had been used in the preparation of a Summary of Argument and a List of Authorities, resulting in the Court being given incorrect case references.

The solicitor denied using AI, blaming her paralegal. The court was impliedly critical that the solicitor did not identify which AI program or programs had been used and did not identify what, if any, training, supervision or guidance the paralegal had been given in relation to the use of AI. The solicitor said that she accepted full responsibility for the use of AI by her paralegal, and indicated the steps that she had taken to ensure the error did not occur again, including that she had terminated the paralegal’s services.

The Full Court quoted from Helmold & Mariya (No 2) and referred favourably to Dayal. In addition to the general duties of legal practitioners, the Full Court explained that entering client information into an open AI system may contravene laws prohibiting publication, including s 114Q of the Family Law Act 1975 (Cth) (“the Act”). The Full Court said (at [15]):

“There is a risk that entering draft documents into an AI program will result in a breach of s 114Q, a breach of the Harman undertaking, breaches of rules in respect of material produced under subpoena and/or give rise to a waiver of legal professional privilege. The Supreme Court of New South Wales, Practice Note SC Gen 23 – Use of Generative Artificial Intelligence (Gen AI), 28 January 2025 prohibits the entry into any Generative AI program of any information subject to non-publication or suppression orders, the Harman undertaking, material produced on subpoena, or any material that is the subject of a statutory prohibition upon publication unless the legal practitioner or person responsible for the conduct of the proceeding is satisfied as to confidentiality and other matters (para 9A)”.

The Harman undertaking was described by the High Court of Australia in Hearne v Street (2008) 235 CLR 125; [2008] HCA 36 (at [96]) as:

“Where one party to litigation is compelled, either by reason of a rule of court, or by reason of a specific order of the court, or otherwise, to disclose documents or information, the party obtaining the disclosure cannot, without the leave of the court, use it for any purpose other than that for which it was given unless it is received into evidence…” [footnotes removed]

The Full Court agreed (at [18]) with Judge A Humphreys in Dayal:

“…That is not the only purpose of referring the matter; as Judge A. Humphreys emphasised in Dayal, it is in the public interest that those regulating the legal profession are aware of examples of difficulties which have arisen from the use of AI in the preparation of Court documents.”

Pending the issue of guidelines and practice directions by the FCFCOA, the Full Court said that the following guidelines were abundantly clear (at [17]):

  1. “If AI is used to identify authorities for the purposes of any Court document, then it is incumbent on the author and those accepting responsibility for the document to verify that those authorities are both accurate and relevant to the proceedings. Each of the relevant legal practitioners conceded that they had not done so in respect of the original Summary of Argument and List of Authorities.
  2. Where AI is used more widely, such as to prepare the text of submissions, create footnotes or prepare a chronology, the same responsibility to ensure that the material is accurate and relevant arises.
  3. As the submissions of King’s Counsel and counsel in this matter acknowledge, the use of AI in the preparation of Summaries of Argument or other submissions does not absolve the author or person who accepts responsibility for the documents from any of their professional or ethical obligations to the Court or the administration of justice. To the contrary, it calls for careful interrogation of whether such use is appropriate, what disclosure is appropriate, and what safeguards must attend to its use.”

Jenson & Lockridge (No 2)

In Jenson & Lockridge (No 2) (2026) FLC ¶94-303, the appellant, who was self-represented, used AI to formulate the grounds of appeal, draft the Summary of Argument and draft the content of her oral arguments. The appellant explained that she had used a “professional” version of AI which was “used by law firms”.

Justice Campton referenced Helmold & Mariya and pointed out (at [7]) the difficulties which arose from the appellant’s “deployment” of AI, namely that it:

“(a) Attributed reasoning to the primary judge that does not appear in the reasons under challenge (referred to as AI producing hallucinations);

(b) Amalgamated disparate legal principles into the appellant’s Summary of Argument and oral submissions, contributing to the repetition of uncontextualised appellate phrases;

(c) Prosecuted complaints in the grounds of appeal that were either contrary to the appellant’s case at trial, or were abandoned by the end of the trial; and

(d) Obscured the remaining grounds of appeal, thwarting their forensic utility.”

Although there did not appear to have been a breach of s 114Q as occurred in Mertz & Mertz, his Honour said (at [6]):

“The conduct of this appeal illustrates the challenges presented to the Court and litigants from reliance on artificial intelligence, notwithstanding that it may present as offering expertise and efficiency....”

The appeal was dismissed. A modest costs order was made against the appellant. The use of AI did not appear to have been a factor in the making of the costs order or its quantum, emphasising that legal practitioners are held to a higher standard than lay people appearing before the court.

Tesar & Szep (No 3)

In Tesar & Szep (No 3) [2026] FedCFamC1F 21, the solicitor-advocate filed and relied upon submissions that contained hallucinated case citations. Justice Brasch said the use of AI “to generate those submissions, redolent with faulty and fictitious citations, was not the only issue where the Court was, frankly, misled by the solicitor-advocate” (at [6]).

Her Honour asked the solicitor-advocate whether he had used his own personal research and exertion and had checked each case. He responded “yes” in unequivocal terms. Those responses were incorrect.

Despite the serious questions over his conduct and integrity, the solicitor-advocate nevertheless wanted to press on representing his client and have the jurisdictional issue determined that day. Justice Brasch said (at [14]):

“How he thought he could represent his client before this Court, where he had plainly misled the Court with respect to his initial ownership of his written submissions, then his volte-face, and, his use of AI hallucinated cases, is as perplexing as it is confounding.”

Her Honour endorsed Helmold & Mariya (No 2). She considered that the solicitor-advocate’s conduct had implications wider than misleading the court and the competent and diligent delivery of legal services. Other concerns were:

  • the public confidence in the administration of justice; and
  • bringing the legal profession into disrepute.

Her Honour agreed with the views expressed in Dayal and Mertz that it was in the public interest for regulatory authorities to be aware of the problems arising from the use of AI in court documents, and referred the solicitor-advocate’s conduct to the Commissioner of the Legal Services Commission of Queensland.

Authorities in other courts

Cases decided in other courts also provide guidance to legal practitioners as to the consequences of the misuse of AI. In Re Walker [2025] VSC 714 the legal practitioner was not referred to the Victorian Legal Services Commission. Instead, the Supreme Court of Victoria imposed a reprimand, holding that it was expedient that the conduct be dealt with under the Court’s inherent jurisdiction to supervise legal practitioners.

The solicitor used AI to draft written submissions filed with the Court, which relied on non-existent or hallucinated case references. Justice Moore said (at [79]):

“The rules of professional conduct also prohibit legal practitioners from engaging in conduct which is likely, to a material degree, to be prejudicial to, or diminish, public confidence in the administration of justice (Rule 5.1.2(i) of the Legal Profession Uniform Law Australian Solicitors’ Conduct Rules 2015). I am satisfied that Ms Rizkallah’s failure to observe the requirements imposed by the Guidelines and the preparation and filing of submissions containing non-existent authorities generated by AI tools is conduct of this character. This is because, as stated by Elliott J in DPP v GR [2025] VSC 490, the accuracy of submissions made by counsel (and likewise by solicitors) is fundamental to the due administration of justice”.

The AI tools used were Court Aid and ChatGPT, neither of which the legal practitioner had previously used. The legal practitioner was unaware of their tendency to hallucinate and of the Court’s Guidelines. This is unlikely to remain an excuse on which legal practitioners can rely. The courts will expect legal practitioners to understand the risks of AI and abide by court guidelines.

Weighing considerations of general deterrence against proportionality, Justice Moore held that there were a number of significant mitigating considerations, including (at [81]):

“…Ms Rizkallah’s conduct concerned a claim which was not pressed at trial; in terms of resources, her conduct therefore did not have any significant adverse consequences for the plaintiff or the Court. Ms Rizkallah promptly and unreservedly apologised to the Court for her conduct; I accept that she is genuinely contrite. There is no evidence that Ms Rizkallah has previously been found to have engaged in unprofessional conduct. I have also taken into account the urgent circumstances of pre-trial preparation in which the conduct occurred and her reliance on counsel. I also do not overlook the fact that the publication of these reasons in which Ms Rizkallah is identified is itself a significant adverse consequence for her professional standing”.

This case does not translate easily to the family law context, as the FCFCOA does not have the same inherent jurisdiction to regulate the conduct of legal practitioners.

In Murray on behalf of the Wamba Wemba Native Title Claim Group v State of Victoria [2025] FCA 731, a junior solicitor used Google Scholar as the source of citations in court documents and failed to check them. Justice Murphy considered that the publication of the solicitor’s name, together with the reasons for ordering that the applicants’ solicitors pay the respondents’ costs, was sufficient censure of the solicitor. A costs order was made on an indemnity basis in relation to the costs incurred through the use of AI. Justice Murphy said (at [15]):

“…The error was centrally one of failing to check and verify the output of the search tool, which was contributed to by the inexperience of the junior solicitor and the failure of Mr Briggs to have systems in place to ensure that her work was appropriately supervised and checked. To censure those errors it is sufficient that these reasons be published”.

Practice directions

Whilst the FCFCOA had not, as at 15 April 2026, issued a Practice Direction regarding the use of AI, the guidelines in Mertz & Mertz (No 3) and the practice directions issued by other courts should be used as a guide. Practice Directions issued by Australian courts include:

  • Federal Court of Australia — Notice to the Profession – Artificial intelligence use in the Federal Court of Australia – 29 April 2025
  • Supreme Court of New South Wales — Supreme Court Practice Note SC Gen 23 – Use of Generative Artificial Intelligence (Gen AI) – 28 January 2025
  • Supreme Court of Victoria — Guidelines for Litigants: Responsible use of Artificial Intelligence in Litigation – May 2024
  • Supreme Court of Queensland — Practice Direction No 5 of 2025 – 24 September 2025
  • Supreme Court of Western Australia — Guidelines for the use of Generative AI

Other useful government information and resources have been published by the Department of Industry, Science and Resources and can be found at https://www.digital.gov.au/ and https://www.industry.gov.au/science-technology-and-innovation/technology/artificial-intelligence

Conclusion

Unlike the Supreme Courts of the States and Territories, the FCFCOA cannot sanction legal practitioners under its inherent jurisdiction. Where legal practitioners have used AI, and failed to check the accuracy of content produced, the FCFCOA is likely to continue to refer practitioners to the legal professional regulatory bodies.

There is no doubt that AI can make many legal tasks easier, but the dangers of using AI for researching and drafting court documents have been recognised by Australian courts. Not only can reliance on inaccurate information or citations amount to misleading the court, but inputting client information into open AI software could breach s 114Q FLA and the Harman undertaking, and amount to a waiver of legal professional privilege.

Besides the damage which can be done to a client’s case, the damage to a legal practitioner’s reputation, the risk of an indemnity costs order and referral to the relevant regulatory body are serious consequences.

The answer to the challenges of AI is not to avoid using it entirely, as clients will expect the productivity and efficiency gains to be passed on to them. Instead, AI should be used judiciously, and lawyers need to be aware of its limitations. Where it is used in the preparation of court documents, its output must be carefully checked. It is also imperative that law firms have AI policies and ensure that their staff follow them.

Jacky Campbell
Partner, Forte Family Lawyers
Jacky Campbell is Partner at Forte Family Lawyers, a leading specialist family law firm in Melbourne.