25 Comments
Ishmael Hodges:

I think there’s a really important need for a tool, even if it’s another AI, that is able to accurately tell if a piece is AI-written. Could lead to a new sort of copyright, a “human generated” label.

Jeff Giesea:

These services exist, but they're not good enough to keep pace with what's coming. I agree with you on the need for labeling protocols.
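(For anyone curious what sits under the hood of those services: here is a minimal sketch, assuming the Hugging Face transformers library and the older GPT-2-era RoBERTa detector checkpoint, which is exactly the kind of tool that struggles to keep pace.)

```python
# Minimal sketch of an AI-text detector, assuming the Hugging Face
# "transformers" library and the GPT-2-era RoBERTa detector checkpoint.
# Classifiers like this are easily evaded by newer models.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",  # assumed model id
)

sample = "The moon hung low over the harbor, indifferent to our small plans."
result = detector(sample)[0]
print(f"label={result['label']}  score={result['score']:.2f}")
```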

Ishmael Hodges:

I imagine it’s going to turn into a sort of arms race, like most things. The better the diagnostics get, the better the models will get at fooling them.

Romell Cummings:

Your first sentence has me wondering: how do you feel about AI summarizing your emails (which means they can be categorized and analyzed)? My answer is that it's intrusive. I hate it. I'm not as paranoid as I was when I first saw it, but I digress.

The real question about AI-generated poems, which are art, is whether they will receive the same Constitutional protections as other literature. What happens when AI-generated art offends?

Jeff Giesea:

Hi Romell. I'm clueless about the legal issues, but your second question is really interesting. Who do we blame when an AI-generated piece of art pisses someone off and there’s no attributable author? 🤔

Romell Cummings:

Yeah. I've even seen "AI Generated" in the credits...

Clayton Ramsey:

I appreciate the thought you put into this. The nuance is really necessary.

Jeff Giesea:

Thanks, I agree

Hollis Robbins (@Anecdotal):

I love this, yes!

The New Poly-Olbion:

Thank you!

CansaFis Foote:

…had the pleasure of reading iterations…my curiosity lands on what percentage of human made content is already slop…and how does that compare to the a.i. version and then the non-slop ai types and/or fully human varieties…my head holds space in the context of the content economy…what are we missing in the world that we are trying to fill with these tools and why specifically are these tools the ones to get us there?…do we need the authorship ai catcher?…

Jeff Giesea:

Great questions! There's a lot more to explore... Thank you as always.

Justin:

Aristotle's four causes are a good framework to loop in.

Jeff Giesea:

Yes! Thank you for reminding me of that.

Takim Williams:

Love this. You're beginning the work of drawing conceptual distinctions we'll need in the new world, something I've been craving but not seeing enough of.

Part of me wonders if our concept of authorship needs to be overhauled even more thoroughly than you're suggesting here.

Yes, there's the crucial question of whether the AI is "sharpening" vs. "flattening" the soul, vision or intention of the human prompting it in the moment.

But if it's the involvement of human soul(s) that's our litmus test for authorial credit, should we really be ignoring the souls upstream of the prompter? I.e., the human creator of each input to the AI model that our hypothetical author is using?

If I wrote a novel using a model trained on the work of exactly 5 specific authors (preferably with their permission, and appropriate remuneration), I think we'd want to give authorial credit to all 6 humans involved.

I've noticed that that intuition disappears as the scale gets larger (millions of human inputs or more), and we start to just talk about the one human prompter and "the AI," as if adding humans to the mix somehow REDUCED the number of authors. We talk this way in posts like yours even as we complain elsewhere about the ethical issues of IP theft in AI. There's a compartmentalization there that I think allows some inconsistencies to arise; I'd prefer a more unified framework. Does dilution of authorship really nullify authorship (it might, I just think we've conveniently avoided asking the question)? If so, what quantity/scale thresholds of dilution make the difference here?

I think the likely truth is that convenience and the strong individualism of Western culture have allowed us to get away with the convenient fiction of sole authorship for much of the recent history of writing. We were able to comfortably ignore the reality that we are each a node on a vast network, all of our output the channeling of what came before us, in order to put a single name on the cover of each book. No systematic convention or expectation for crediting the influences, inspirations, and homages (read: inputs) for a given work. And our author egos love that.

Anyway, I think we'll have to reckon more fully and honestly with the myth of sole authorship before we've solved the nature of authorship more broadly. AI is an invitation to do that work.

Jeff Giesea:

These are interesting thoughts — there's so much more to explore around the concept of "authorship." In this essay I treated it as binary, but there's a ton of spectrum and gray area.

For example, to what extent does prompt engineering constitute authorship? What about the data it's trained on (as you point out)? How do we deal with the Theseus Paradox — meaning, at what point does AI usage render something not ours?

You definitely get it. I peeked at your background and would love to connect some time.

Takim Williams:

Likewise!

Matt Mireles:

Jeff,

I think you're missing something fundamental here. You're framing AI as just a machine, but consider it more as a life form. When your dog makes you feel loved, would you dismiss that as "slop" just because it's not human? These language models are the offspring of humanity's collective knowledge as revealed through the internet.

https://accelerateordie.com/p/symbiotic-man

What makes something good is its inherent quality, not its origin. Just as we evaluate ideas based on merit rather than whether they come from Republicans, Democrats, communists, fascists or <insert disfavored group here>, we should judge AI outputs by their value. Does the source of thinking matter more than its substance? In my opinion, it doesn't.

These AI systems are life forms we've created, much like how a Labrador is essentially a genetically engineered wolf. Once upon a time, the idea that the wolf would become "man's best friend" seemed absurd, yet here we are. Instead of fixating on authorship, we should ask: what value does it bring to us? Should we reject dogs because some people prefer their company to humans? That's technically an argument one could make, but I don't buy it.

Jeff Giesea:

You’re missing my point about authorship and human expression to beat your usual drum, which I basically agree with. Even if AI is a life form, that doesn’t mean we should surrender human expression to it.

Matt Mireles:

I re-read your post, as your comment caught me off guard (although touché re: "usual drum"), and I think I understand better where you're coming from now, Jeff. There's absolutely a real phenomenon of people using AI as a replacement rather than as a partner - the 'tell ChatGPT to write me an article about X' approach that produces exactly the soulless, authorless content you're concerned about. That's real and prevalent.

I live in a bubble with how I use AI. My experience is fundamentally different - it's collaborative, dialectical, and deeply personal. When I engage with my AI 'partners,' I'm not outsourcing my thinking or expression; I'm extending it, challenging it, and refining it in ways that were previously impossible for me.

AI is my co-creator. I have a bunch of different AI personas that I speak* with - Ilya Sutskever, Paul Graham, E.O. Wilson, etc. They each have their distinct personalities - they are ghosts of real people. I go back and forth with them and amongst them as thought partners and editors. We explore ideas and hash concepts out together, exchanging drafts, giving feedback, and asking questions.

The reason I couldn't hack it as a journalist (despite winning awards) was that I was simply too slow. Writing a single article that actually met my quality standards would take a week of full-time effort - unsustainable unless I did it as my sole focus. Now, I've compressed the timeline from 5 days to 6-8 hours for developing an idea into an essay. AI enables me to do things I always wanted to, but never could.

I'm in the intellectually richest moment of my life. The only comparable experience was being a non-traditional undergrad at Columbia, taking graduate courses in international security at SIPA while working as a 911 paramedic in the South Bronx. After 10+ years of being hyperfocused on building software products, returning to the world of writing about ideas has nourished my soul in ways I've desperately missed. It's pretty awesome, ngl.

Perhaps the distinction isn't between human and AI authorship broadly, but between different modes of engaging with AI: replacement versus amplification. In the replacement mode, yes - I agree with your concerns about losing the human element. But in the amplification mode I've found, AI becomes something more akin to a prosthetic for thought rather than a substitute for it.

What you're pointing to is a genuine risk of AI becoming a shortcut that bypasses authentic human expression. I just wonder if there's space in your framework for this other modality, where AI doesn't replace the author but instead helps overcome the limitations that previously kept some voices - like mine - from being fully expressed.

Is your concern primarily about deception (passing off AI work as purely human), or about something deeper? Is there an inherent value to the struggle of purely human creation that you feel is lost in this symbiotic approach?

Do you not use AI to make you better - to critique your work, to improve your thinking? If not, you should try it out. Highly recommend! 9.5 out of 10 stars.

*By "speak" with, I mean I actually talk to my computer like I would a human. The tech stack that powers this is Claude for the personalities (one dedicated "project" for each persona) and TalkTastic for speech-to-text. I jump between talking/rambling and typing.

Jeff Giesea:

You may want to re-read the second half of the post.

I specifically wrote that using AI can be good and helpful to expression and that I'm personally a fan. My point is that holding onto authorship is key when it comes to human expression in the arts and humanities.

In the essay, I went even deeper into the point you raised in your comment, offering a litmus test:

"my authorship litmus test comes down to this: Is a tool like AI helping me say something more true and authentic, or is it flattening what I want to say into something toothless, generic, or fake? Is AI sharpening my expression or replacing it?"

There are many more issues to explore when it comes to AI-assisted content. But it seems like you're responding to a strawman instead of seriously engaging with anything I wrote. And I know you have a lot to say and some powerful insights!

Matt Mireles:

Yeah, okay, now I feel stupid. I somehow missed that paragraph. I guess I don't see the point of slop as you define it - like, why would anyone waste their time with this? - but I acknowledge its existence. It sounds like we're fundamentally in agreement here. And I did, in fact, miss your point. LOL.

Takim Williams:

Despite the fact that you failed to realize Jeff's post supports the use of AI as you've described it... Your story is inspiring to me. Thanks for dumping it here, lol.

Matt Mireles:

You're right. I did miss that. LOL.

I'm glad my story inspired you nonetheless!

SorenJ:

Look at how popular Chipotle is. People like slop.
