It'll be the usual conflating of climate with weather. Courtesy of the sage Adams via Deep Thought: 42
I’m not claiming “write a great novel” as a prompt in ChatGPT will turn out a bestseller, or any of those other things. Nor am I arguing it’s a sentient person. No, it’s AI, which we’ve had for decades, going back to the earliest computer games. Don’t strawman me. My argument is just that not all AI currently under discussion is plagiarism - unless you go with the broadest, most general, least useful definition which means nearly every writer ever is guilty of plagiarism in some way.
This gets a bit philosophical, but I'd argue that human creativity is nothing but rearranging previously collected information in different ways as well, so I'm no longer sure we can really claim that what machine learning algorithms do is significantly different from what we do. The main difference between the cases where programs succeed and the ones where they fail seems to be simply the amount and complexity of the information that needs to be digested, and the structure of the patterns that need to be recognized.
I’d argue that not only is that not, in my opinion, how human creativity works but that no major theory of creative cognition actually posits a mechanism that would allow the way machine learning “creates” to truly be classified as such. Even the theories that are the least generous to human ingenuity include a subconscious or unconscious recombination of elements in a way that machine learning literally cannot do, to say nothing of theories which allow for outright novel ideas. I submit to you that on the contrary, we are inclined to be charitable to a purely mechanistic view of creativity that facilitates this tech grift - and as Myron notes, this is transparently tech grift - purely because the mechanics of capital always already imply that man and machine are interchangeable.
Are we missing the forest for the trees here? If the complaint is that machines are putting creatives out of work, is the best retort to criticize the machines for not actually being creative? To me that's kind of like criticizing a mechanized automotive manufacturing plant for not actually caring about the cars it's putting together: it's irrelevant whether the computers are able to approximate human performance in the areas they're specialized in. Call it a tech grift or whatever, I don't see how calling out the tactics of the corporate overlords actually… you know, gets anywhere near a solution for the people being put out of work. It hasn't worked, to my knowledge, in any of the other fields where the presence of machines has eliminated human jobs.
I’m focusing on what I care about from a “shooting the breeze while at work” perspective, which is the cognition question. If I had a solution to the bigger problem I wouldn’t be wasting my time on a debate thread about machine learning on a Star Wars forum.
@Ramza why would it matter whether it's conscious or subconscious? It's all still information being processed, in one form or another. A machine learning creative process would never be driven by the same emotions while making the piece of art - so the path to the output would surely be far from that of humans - but that doesn't necessarily mean the end result would be distinguishable. The same reasoning applies to chess and Starcraft; I don't see why novels and paintings should be an exception.
Point. Isn't this the big science (or science fiction) question of whether machines are actually creative versus merely able to simulate creativity convincingly, and how much the difference matters?
And if we really want to get philosophical about minds… https://en.m.wikipedia.org/wiki/Problem_of_other_minds

"The problem of other minds is a philosophical problem traditionally stated as the following epistemological question: Given that I can only observe the behavior of others, how can I know that others have minds? The problem is that knowledge of other minds is always indirect."
All you pro-AI people need to understand that these machine learning things actually DON'T do as well as creative people in making creative things. Hell, the link that started this discussion was about how it outpaces "creative thinkers" in problem solving… but problem solving isn't the same thing as creative writing. At all. And we don't get moved by a book or film or show because it creates the most original plot we've never seen anywhere else. Plot is easy. It's the characters, the journeys, the relationships that make us feel something. And that's something these machine learning models don't and can't understand.
And I rather believe they can understand it, and they likely will, in a matter of a few decades. It's just more complicated than chess, because a "move", as well as "winning", is harder to define for artworks. But ultimately it can probably be all reduced to some complex learning task, like everything else.
A genuine artificial intelligence would produce material that would arguably not be understandable by humanity, but I digress - again, the term is an utter misnomer here. This reductive description of the creative process is so utterly ridiculous, I have to follow in Dr. Johnson's footsteps and kick a rock. In my lifetime, business and other non-creatives have always tried to "crack the code" on the artistic process, and this is the latest salvo in that endless war: a Theranos-level grift where techbros make outlandish promises to get the funding to make the promises actually come true, except it will never happen. Nor, like I said, will it actually "solve" problems. It can't even do basic math.
The issue is spontaneity. If you reject the premise that original thought is possible (something I think is absurd) then you have to account for spontaneous creative impulses. Machine learning is not spontaneous by definition. You mention art as a reproducible process, but that's not novel to the machine learning age in the first place, see eg… I dunno, basically everything Walter Benjamin ever wrote.

I think there's a genuine question as to whether novels written by ML could ever be great art, because on some level there is an absence of creative and emotional impulse. Though at that point you're veering less into creativity and more into "what is good art?" which… I guess people can do, I never liked aesthetics too much.

That's just the cognition equivalent of shutting down discussions about our perceptions with brain-in-a-vat arguments, consistently invoked about as disingenuously. You don't care about p-zombies, you care about making whatever you want to be an AI. Tellingly the reverse necessary implication - that someone might misinterpret a machine p-zombie as an AI - is never invoked. Because this is always the game: make the world safe for me to call something an AI, and **** you if you question it.
It's not about pro-AI versus anti-AI. It's about ethical use of AI as a tool, versus unethical use of AI as a tool, versus trying to blanket-ban all AI. Yes, actual human writers do a lot better than most of these programs. At least for now. And I'd personally prefer a story written by a human over one by an AI right now, even if all else is equal. But that's moving the goalposts from the original argument, which is that not all AI models / language programs / whatever you want to call them are pure plagiarism. Also, it doesn't matter if the machine models can't or don't understand; they can still be useful tools right now, with human oversight, for creating compelling characters, journeys, and relationships that make us feel something. They don't need to understand in order to be a good tool in service of those goals. Not every model is created equal. Some have already been used by scientists who found them helpful.
In my understanding it's mostly just sleight of hand based on the sheer quantity of data scraped. The chances of getting an output you haven't seen before are very, very high, but none of it is actually new. The chatbot can't iterate because it's a crappy Chinese Room. You'll never get Tolkien from one of these things by feeding it a bunch of folklore without Tolkien actually being one of the inputs, because it's incapable of writing something new. You can get a jumbled, mishmashed "something in the style of Tolkien" as an output now because the bot has scraped an almost unfathomable number of knockoffs and fanfics, and it will use that data to fart out something you haven't seen before. People mistake this for creativity when really the bot is just pulling from inputs they are unfamiliar with.
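To make the recombination point concrete, here's a deliberately crude toy: a word-level Markov chain. This is not how actual chatbots work (they're vastly more sophisticated), but it illustrates the same property people mistake for creativity - the output ordering can be something you've never seen, yet every single transition in it was literally observed in the input corpus. The corpus line below is just a made-up example.

```python
import random

def build_chain(text):
    """Map each word to the list of words observed following it."""
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain: every step reuses a transition seen in the input."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no observed continuation
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the road goes ever on and on down from the door where it began"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Scale the corpus up by a few billion words and the chance of recognizing which inputs a given output was stitched from drops to nearly zero - which is the "sleight of hand" above.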
The problem is that, given how these bots are known to work, you will have less open, free-flowing data online - anything published will be protected against acquisition by bots, because that's the logical response. Just as malware provoked an anti-malware response, the same applies here. Where there is an info resource to draw on to synthesise an answer, the bots can problem-solve to that limited degree. Going further and innovating a new solution using creativity? That's some way away. As to ethical vs unethical, US capitalism sees no value in the former. It's the Silicon Valley outlook of move fast, break stuff, and leg it to get away with it that is driving this - that's the biggest problem, but it's one way beyond any of us posting here.
I believe that, as sad as it might sound, the way we create art is either (a) by adding personal experiences to the stuff that is already available (for example, combining the romance novels we previously read with the new information provided by, say, the personal experience of getting married ourselves) or (b) by taking some established premises and bringing them to their logical consequences, which is just a form of computation after all. The way we are insightful is by using information we personally collected that the listener doesn't have. This makes me kind of sad, but I don't think we humans are really able to produce genuinely original information that is not just an elaboration of inputs we collected.
Isn't it like the Turing test? I mean, if some AI writes a novel and people are convinced it was written by a person, then what's the difference?
My point is that even if we accept your premise that humans are not able to produce genuinely original information - you are not the first person to theorize this and there is no consensus - there is a spontaneous and unconscious element to the recombination process of creativity that machine learning can’t truly replicate. An AI could, but there is no such thing, at least at the moment (I think maybe never but the theories supporting “never” are often unsatisfying). I’d settle for people studying the right mathematics at this point. “Computers are already creative!” Me, who has seen the limitations of automated proof technology outside of pure foundations: “Er…”
Apparently none of you have heard the story of John Henry. He drove his steel 15 feet, whereas the steam drill could only do 9. If you want to pump out a lot of crap, computers can do that for you. But superior performance will always be reserved for humanity. What's more, it's highly reasonable to suspect that some future individual acting in the same mold will again break the sway that machines have over humans. If you are truly worried about this, all of your efforts should go towards producing this John Henry II.
The difference is that if you don't agree with Turing, it's probably because you don't think the alleged novel that fools all of the people all of the time is possible in the first place. Turing was a great mathematician, but much of his philosophical work is rooted in an underlying assumption that man is indistinguishable from a package of rote mechanics, which IMO has caught on mostly because better theories necessarily raise questions about the structure of our society in a way that pop culture finds less palatable.