What's the correct way to cite ChatGPT as a source?

I’m writing an academic paper and need to reference information generated by ChatGPT. My professor asked for proper sourcing, but I’m not sure how to do this for an AI tool. Can someone explain the right format or share an example of how to cite ChatGPT in American English style?

Oh, the joys of modern academia where you can cite literal robots now. Here’s the deal: Most formal style guides (APA, MLA, Chicago) have had to scramble to keep up because, well, AI’s moving faster than my willpower on a Monday morning. Anyway, for APA (7th edition), the suggested format is like this:

OpenAI. (2023). ChatGPT (June 13 version) [Large language model]. https://chat.openai.com/

In your text, you'd write something like: ChatGPT (OpenAI, 2023) generated the response.
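Side note if you happen to be writing the paper in LaTeX: there's no official BibTeX entry type for a language model yet, so the usual workaround is @misc (or @software if you're on biblatex). This is just my own sketch of that APA-style entry, with a made-up citation key; depending on your bibliography style, the url field may not print unless you move it into the note:

  % Rough sketch only; the key "openai2023chatgpt" is arbitrary.
  % Double braces keep "OpenAI" from being parsed as a personal name.
  @misc{openai2023chatgpt,
    author = {{OpenAI}},
    title  = {ChatGPT (June 13 version)},
    year   = {2023},
    note   = {Large language model},
    url    = {https://chat.openai.com/}
  }

Then \cite{openai2023chatgpt} wherever you lean on it in the text.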

MLA? Try this mess (note that MLA treats your prompt as the title and says not to list ChatGPT as an author):

“[Your prompt]” prompt. ChatGPT, 13 June version, OpenAI, 13 June 2023, https://chat.openai.com/chat.

Footnote in Chicago? Goes a little like this (and Chicago's own guidance says keep it in a note, not the bibliography):

Text generated by ChatGPT, OpenAI, June 13, 2023, https://chat.openai.com/chat.
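If you're doing Chicago notes in LaTeX, that's just a plain footnote. A minimal sketch; the surrounding sentence is a made-up placeholder, and \url{} assumes you've loaded the url (or hyperref) package:

  % In the preamble: \usepackage{url}
  ChatGPT flagged the same three themes.\footnote{Text generated by ChatGPT,
    OpenAI, June 13, 2023, \url{https://chat.openai.com/chat}.}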

But heads up: Some profs still refuse to accept AI as a source, full stop. So double check with your instructor before you waste hours formatting your citation like you’re deciphering the Da Vinci Code, only to have it scratched out in red pen anyway. Also, always include the prompt you gave ChatGPT in an appendix, since transparency and all that.
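On the appendix point, if you're in LaTeX anyway, something this bare-bones does the job; the section title and the bracketed prompt are placeholders, not anything a style guide mandates:

  \appendix
  \section{ChatGPT Prompt}
  On June 13, 2023, ChatGPT (OpenAI) was given the following prompt:
  \begin{quote}
    ``[your exact prompt here]''
  \end{quote}
  % Paste the full response here, or attach it as a supplementary file.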

TL;DR: Cite like it's software, use the company (OpenAI) as the "author," mention the version/date, slap that URL in, and document your prompt somewhere. Pray your professor's actually willing to accept AI sources. Welcome to academia, 2024 style.

Honestly, citing ChatGPT in an academic paper is like referencing Merlin for a science experiment—cool, but most old-schoolers aren't going to be impressed. I think @voyageurdubois nailed the main formats the major style guides tentatively suggest, but here's where I'll diverge: A lot of folks forget these guides change with the wind, and what was 'acceptable' yesterday might get you side-eyed tomorrow. For example, I've seen recent APA guidance recommending that you include not just the date/version but also the exact wording of your AI prompt, either directly in the text or at least as a supplementary file (depending on where you're submitting). Why? Because transparency is all anyone cares about now, and apparently, "AI hallucinations" aren't as cute as they sound.

Also, worth flagging: in disciplines heavy on peer review, AI-generated content citations could be sniffed out as “non-reproducible”—meaning, if someone uses your same prompt a month later, ChatGPT might give a whole different answer. That reproducibility issue could make your citation risky for anything beyond a throwaway discussion point.

As for me, I got burned last term submitting an essay with a ChatGPT citation in MLA. My instructor docked me for "inappropriate use of emergent technology as a primary source." Fun times. Anyway, always double-check with your prof and your department's policy, 'cause even if APA says "sure, try it," your school might be stuck in 1995.

Bottom line: cite like it’s weird software, full disclosure on what you asked it, and maybe prepare a backup explanation for why you’re referencing a chatbot instead of a peer-reviewed human. Academic life in the 2020s: never boring, mostly just confusing.

Here's the hard truth: citing ChatGPT is the Wild West—right now, anyway. Sure, APA and MLA are rewriting rules as they go (shout-out to @voyageurdubois for the formats and @viaggiatoresolare for the real-world tale of getting dinged for using AI), but I'll throw a different wrench in here: most style guides don't actually want you to use ChatGPT as a citable "source." Why? Because AI output is by nature non-static and non-reproducible: you could ask the same question five minutes later and get a whole new essay. So even when the format seems clear (company as author, date, prompt, etc.), you're still citing something that fundamentally lacks verifiability.

Pros of citing ChatGPT:

  • Instant info and language generation, great for inspiration.
  • Nimble for brainstorming when you’re blocked.

Cons:

  • Your evidence might disappear or change; it’s not permanent.
  • Many journals and professors see AI output as non-authoritative, more like using Wikipedia five years ago.
  • Peer reviewers could question transparency and reproducibility—major credibility hurdle.

The other answers have dropped good format notes, but neither tackled why you might skirt citing ChatGPT entirely. If you can, track down a real source the AI references and confirm it actually exists (these models are known to invent citations), or better yet, use it to guide you to scholarly material rather than make it the crux of your argument. If forced, cite it as software, include your prompt, and be ready to defend why you used it instead of a primary scholarly article. It's a tool, not an oracle. Academic life: use with caution, cite with skepticism.