
Can I Wear an Apple Watch at the Masters?

ChatGPT’s stunning confidence—and brazen fabrication of sources—offers a cautionary tale to lawyers and clients.

As most know by now, ChatGPT is an AI tool that can, among other things, whip up e-mails, letters, and other correspondence, and answer questions on virtually any topic. In generating content, the bot draws on the vast trove of web text and other materials its creators fed into it. ChatGPT does not guarantee that its output is accurate, which is to be expected given its limitations. ChatGPT is a "Large Language Model" ("LLM"). LLMs "digest huge quantities of text data and infer relationships between words within the text"; fundamentally, LLMs are trained to predict "a word in a sequence of words" by using the "most statistically probable word given the surrounding context." Because of how they are trained, LLMs "often express unintended behaviors such as making up facts, generating biased or toxic text, or simply not following user instructions" (Bender et al., 2021; Bommasani et al., 2021; Kenton et al., 2021; Weidinger et al., 2021; Tamkin et al., 2021; Gehman et al., 2020). This is because the language modeling objective used for many recent large LMs—predicting the next token on a webpage from the internet—is different from the objective "follow the user's instructions helpfully and safely" (Radford et al., 2019; Brown et al., 2020; Fedus et al., 2021; Rae et al., 2021; Thoppilan et al., 2022).
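For the curious, the "most statistically probable word" idea can be sketched with a toy bigram model. The corpus and words below are invented purely for illustration; a real LLM conditions on far longer contexts and billions of learned parameters, but the core objective is the same: pick the likeliest next word.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the web-scale text an LLM trains on.
corpus = (
    "the masters bans cell phones . "
    "the masters bans cameras on tournament days . "
    "the masters allows cameras on practice days ."
).split()

# Count how often each word follows each preceding word (a bigram model,
# a drastically simplified stand-in for an LLM's context window).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the most statistically probable next word given the context."""
    return following[word].most_common(1)[0][0]

print(predict_next("masters"))  # "bans" follows "masters" twice, "allows" only once
```

Note that the model has no concept of truth, only of frequency; if the training text said "allows" more often than "bans," it would cheerfully predict that instead, which is exactly the failure mode the quoted research describes.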

Recently, I took ChatGPT for a few spins to see how we might leverage the tool in our legal practice, or whether it might one day replace us.

Any hopes I had for its ability to streamline our legal services, as well as any fears I had of being replaced by it, were quickly quashed. Until AI bots can consistently produce reliable information, legal practitioners and clients should use AI resources judiciously (if at all) and stay aware of their limitations and pitfalls.

As I might have expected given its known flaws, ChatGPT:

  • Provided inaccurate information
  • Contradicted itself repeatedly
  • Fabricated citations to support its responses

ChatGPT didn't just do these things a few times; ChatGPT relentlessly steered me down, around and back again through a meandering path of madness and confusion. 

It started with such a straightforward question, and a softball at that:

Heading to Sunday at the Masters as a first-timer at Augusta National, I was well aware of the strict cell phone ban, but I wasn't sure whether I could wear my Apple Watch.

Turns out the bot wasn’t sure, either, nor was it sure about cell phones.

If I had only known what was about to happen, I'm extremely confident I would have closed my laptop and taken a hammer to it.

What happened, you ask? I questioned the bot's baffling insistence that Augusta National permits cell phones, a position it backed up with broken links to non-existent articles, all dated within the past handful of years and all titled with some variation of "Augusta National Has Finally Dropped Its Cell Phone Ban."

It didn't take long for me to realize that maybe questioning the bot just isn't "done" with ChatGPT?


Yeaaaaaah. 

Maybe it's not "done" because that's when things quickly started going haywire. It became clear that everything the bot asserted as gospel required verification, because the bot stopped making sense. What unfolded during our discussions (err, arguments?) about cell phones at the Masters went something like this:

[screengrab]

And this:

[screengrab]

And also, of course, this:

[screengrab]
More specifically, during these discussions, the bot trotted out the following five divergent positions on Augusta National's cell phone policy, each at seemingly random moments, retracted each of them at some point, and then retracted its retractions, sometimes several times per position:

  • Augusta National permits cell phones.
  • Augusta National began permitting cell phones in recent years and I have several (convincing-looking but actually completely made up) sources to back this up.
  • Augusta National no longer permits cell phones, but it did a couple years back (and to support this, here's a broken link to a Masters Press Release that never existed with a title in quotation marks along the lines of "Mobile Phones Are Now Permitted at Augusta National" to make this look legitimate, and if that doesn't convince you, I have a dozen more where that came from, including from NYT and WSJ because if I can't go big, I better go home).
  • I have no information suggesting phones have ever been allowed, so I'm not sure; sorry.
  • I do have information suggesting phones have never been allowed; of course they’ve never been allowed; everyone knows this; Augusta National has always held firm in its no-phones stance; clearly you shouldn’t go to the Masters if you didn’t already know this very basic fact that everyone else in the world knows; do you even golf?

This bot needs some serious R&D. And I had been worried about bots demolishing our entire trade?

If your head isn't already spinning, be sure to check out some screengrabs of the back-and-forth between me and the bot, which I've included at the bottom of this post for readers who have enough stamina and mental fortitude to plow through the mindbender that is ChatGPT.


Screengrabs for the Unwary

PSA: You can leave now and no one will know you stopped reading right here. 

Joking banter with the bot (which amused me a lot more before I knew bots can't process sarcasm)

[screengrab]

The bot's flipping and flopping

[screengrab]

Bad bot, making up stuff

[screengrab]

Still a bad bot, still making up stuff (aren't you supposed to learn and grow every time we talk?)

[screengrab]