https://medium.com/long-sweet-valuable/ill-instant...

Created: August 1, 2025

Answered using O3 by Chat01

Thinking

Summarizing article and detection tips

I’ll provide a quick summary of the article, focusing on how it discusses a student assignment and warnings against using ChatGPT for generic tasks. It emphasizes certain telltale signs, though we don’t have access to the full list due to the paywall. I'll refer to available external sources for detection signs like those discussed in WeAreDevelopers and YourTango. I'll also offer writing guidance to avoid detection, while citing the relevant snippets.

Thought for 46 seconds

Response

What the article is getting at (in a nutshell)
Ossai Chinedum recounts a classroom anecdote: a young lecturer warns that he can spot AI-generated assignments, prompting one student to work from scratch while another plans to “just ask ChatGPT to make it sound more human.” Chinedum calls this a naïve strategy—he’s convinced savvy readers can still sniff out the tell-tale fingerprints of an LLM, no matter how many “humanise” prompts you layer on. (Medium)


Why experienced readers still catch you

| Common giveaway | Why it trips alarms | Example cues |
| --- | --- | --- |
| Formulaic transitions & para starters | GPT models often lean on connective adverbs (“However,” “Moreover,” “Furthermore”) in a rhythm a human rarely sustains. (WeAreDevelopers) | Several consecutive paragraphs begin with near-identical sign-posts. |
| Recycled “AI vocabulary” | Certain high-frequency phrases crop up across many prompts; editors learn to recognise them. | “Treasure trove,” “Intricate tapestry,” “In conclusion,” etc. (YourTango) |
| Over-polite, under-personalised tone | ChatGPT defaults to safe, universal language that avoids specifics about lived experience. | Cover letters that gush about a company’s “innovative culture” without any personal anecdote. (WeAreDevelopers) |
| Punctuation quirks (the ‘ChatGPT hyphen’) | The model strongly prefers en- and em-dashes over simple hyphens, enough that typography geeks notice. (Talk Android) | Sentences peppered with “—” and “–” even where a short hyphen would do. |

Big picture: Automated detectors are unreliable, but humans who read a lot of AI text become remarkably accurate spotters. In lab studies, heavy LLM users correctly flagged 99 % of AI-written articles. (arXiv)
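
The first and last cues in the table are mechanical enough to approximate in a few lines of code. Below is a minimal sketch in Python that counts formulaic paragraph openers and en/em-dash density in a draft; the opener list is an illustrative assumption, not an exhaustive detector.

```python
# Minimal sketch: count two surface cues from the table above in a draft.
# The opener list is illustrative; real readers weigh far more context.
import re

FORMULAIC_OPENERS = ("However,", "Moreover,", "Furthermore,", "In addition,")

def opener_and_dash_cues(text: str) -> dict:
    # Split into paragraphs on blank lines and count formulaic openers.
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    formulaic = sum(1 for p in paragraphs if p.startswith(FORMULAIC_OPENERS))
    # Count en dashes (U+2013) and em dashes (U+2014), normalised per 100 words.
    dashes = len(re.findall(r"[\u2013\u2014]", text))
    words = max(len(text.split()), 1)
    return {
        "paragraphs": len(paragraphs),
        "formulaic_openers": formulaic,
        "dashes_per_100_words": round(100 * dashes / words, 2),
    }

if __name__ == "__main__":
    sample = (
        "However, the findings were clear.\n\n"
        "Moreover, the team\u2014after months of work\u2014agreed.\n\n"
        "Furthermore, the results held up under review."
    )
    print(opener_and_dash_cues(sample))
```

Treat the numbers as prompts for a human read-through rather than a verdict; the article’s point is that attentive readers, not scripts, do the real spotting.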


If you do rely on ChatGPT, how to minimise the footprints

  1. Break the template

    • Rewrite openings and closings from scratch.
    • Use varied paragraph lengths and the occasional one-word sentence for rhythm.
  2. Inject verifiable specifics

    • Reference a date, location, data point, or personal anecdote only you would know.
    • Cite a source or include an image of original work.
  3. Run a “find & replace” pass

    • Search for the cliché list above and prune mercilessly.
    • Swap en/em dashes for plain hyphens where appropriate (see the sketch after this list).
  4. Let it sit, then read aloud

    • Dead giveaways often surface when you hear the monotony.
  5. Own your process where ethics require

    • Academia and journalism increasingly expect disclosure. Hiding usage can breach honour codes—better to declare “AI-assisted editing” than risk accusations of misconduct.
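
Step 3 lends itself to a small script. Here is a minimal sketch, assuming an illustrative cliché list drawn from the table above, that flags phrases for manual rewriting and swaps en/em dashes for plain hyphens:

```python
# Minimal sketch of step 3's "find & replace" pass. The cliché list is an
# illustrative assumption; extend it with whatever phrases you keep noticing.
import re

CLICHES = ("treasure trove", "intricate tapestry", "in conclusion", "delve into")

def flag_cliches(text: str) -> list[str]:
    """Return the clichés that appear, so you can rewrite them by hand."""
    lowered = text.lower()
    return [phrase for phrase in CLICHES if phrase in lowered]

def normalise_dashes(text: str) -> str:
    """Replace en/em dashes (and surrounding spaces) with a spaced hyphen."""
    return re.sub(r"\s*[\u2013\u2014]\s*", " - ", text)

if __name__ == "__main__":
    draft = ("In conclusion, this treasure trove of tips\u2014while useful\u2014"
             "needs a human pass.")
    print(flag_cliches(draft))      # ['treasure trove', 'in conclusion']
    print(normalise_dashes(draft))  # hyphens instead of em dashes
```

Flagging rather than auto-replacing the clichés is deliberate: mechanical substitution tends to produce exactly the stilted phrasing the article warns about.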

Take-away

Chinedum’s piece is less a technical guide and more a wake-up call: simply asking ChatGPT to “sound human” won’t fool a reader attuned to AI cadence. If originality matters—whether to a professor, an editor, or a hiring manager—you still have to do the human work of drafting, fact-checking, and injecting real voice. Use the model as a co-writer, not a ghost-writer, and you’ll avoid most of the red flags above.
