Given ChatGPT’s now-infamous capacity to generate its own legal opinions, complete with official-looking but entirely confabulated citations and quotations, it is not surprising that courts remain skeptical of its use in judicial proceedings. And as the lawyers in J.G. v. New York City Dep't of Educ. recently discovered, practitioners should expect courts to reject its use outright, especially when counsel do not explain what inputs were provided to the chatbot.
In J.G. v. New York City Dep't of Educ. (S.D.N.Y. Feb. 22, 2024), the plaintiff filed two due process complaints with the New York City Department of Education (DOE), alleging that the DOE had failed to provide his son with a free appropriate public education. After the plaintiff prevailed in multiple administrative hearings, plaintiff’s attorneys, the Cuddy Law Firm, filed a motion in the Southern District of New York seeking prevailing party attorneys’ fees pursuant to the Individuals with Disabilities Education Act.
As part of their motion, the Cuddy Law Firm asserted that their hourly rates were “reasonable.” In support of this assertion, the firm cited primarily to four sources: “(1) the Real Rate Report conducted by Wolters Kluwer; (2) the 2022 Litigation Hourly Rate Survey and Report conducted by the National Association of Legal Fee Analysis (“NALFA”); (3) the 50th Annual Survey of Law Firm Economics (“ASLFE”); and (4) the Laffey Matrix [a commonly used fee matrix for lawyers who practice federal litigation in Washington D.C.].”
As a “cross-check” of these sources, the Cuddy Law Firm also cited feedback it received from ChatGPT-4. The court, it is safe to say, was unimpressed. The court found that the Cuddy Law Firm’s invocation of ChatGPT as support for its fee petition was “utterly and unusually unpersuasive” and that “treating ChatGPT's conclusions as a useful gauge of the reasonable billing rate for the work of a lawyer with a particular background carrying out a bespoke assignment for a client in a niche practice area was misbegotten at the jump.” (emphases added).
In support of its holding, the court first noted that multiple courts in the Second Circuit had recently reprimanded counsel for relying on ChatGPT where the chatbot had proved unable to distinguish between real and fictitious case citations. The court then noted that the Cuddy Law Firm (i) did not identify the inputs on which ChatGPT relied, (ii) did not reveal whether any of those inputs were imaginary, and (iii) did not reveal whether ChatGPT had considered the “uniform bloc of precedent in which courts in this District and Circuit have rejected as excessive” the billing rates that the Cuddy Law Firm urged.
The court thus “reject[ed] out of hand” ChatGPT’s conclusions as to the appropriate billing rates. The court concluded its analysis with a frank admonishment to the Cuddy Law Firm: “Barring a paradigm shift in the reliability of this tool, the Cuddy Law Firm is well advised to excise references to ChatGPT from future fee applications.”
While ChatGPT is in many ways an extraordinary resource, lawyers should not yet expect courts to place much weight on the content it generates, especially when the lawyers fail to disclose what bodies of information the chatbot relied upon to generate that content.