Human Creativity and the Latest AI
Some in the commentariat are proclaiming that the biggest story of 2022 was the explosion of incredible new generative AI products from OpenAI, Stability AI, and other companies, built on diffusion models, injected noise, and huge training datasets. Indeed, I tried writing this Warp & Woof post with OpenAI’s ChatGPT. It was perfect in every way, except that it was nonsense! Or, at least, banal.
Perhaps I need greater skill and experience with prompting the AI. But ChatGPT’s response to my original directive, “Write 1000 words about creativity and AI in the style of Warp & Woof blog,” faltered only in not recognizing my Warp & Woof style in its training data. A failing which undoubtedly would be remedied were I a more prolific author! I also noticed a total lack of sources or attribution; I guess I needed to ask for that. Still, an issue to ponder (copyright?). In addition, the sophistication of its language, despite flawless grammar and sentence mechanics, was probably no greater than that of a typical college freshman. Take this sample paragraph:
“One of the most significant ways in which AI is impacting creativity is through the use of machine learning algorithms. These algorithms can be trained to identify patterns and make predictions based on large amounts of data, which is incredibly useful for tasks such as image and speech recognition. This capability is also being used to generate new and unique content, such as music and art. For example, AI-powered music composition software can analyze a large corpus of music to understand common chord progressions, melody lines, and other elements of music theory. It can then use this knowledge to generate new pieces of music that are similar in style to the training data but still unique and original.”
If I were a college English instructor, I’d be hard-pressed to give this even a “B.” So, is this the best we can do for creative writing? The answer is clearly no. But it goes far beyond where we were two years ago – before the public release of these large language models. There is a rich literature of dreams of artificial intelligence. From the medieval homunculus, which alchemists claimed they could create artificially from “putrefied sperm,” to the science fiction of the 20th century – think HAL 9000 from 2001: A Space Odyssey or Isaac Asimov’s “Three Laws of Robotics” (1942) – the entire concept of thinking machines has always been fraught with concern about the potential danger of such technology. Could the robots wipe out humanity? Or make us their slaves? To say nothing of simply taking away our jobs!
Undeterred, the promoters at OpenAI are busy marketing their products. ChatGPT is the natural language interface to their large language model GPT-3, which will soon be replaced by GPT-4, presumably even more capable. And OpenAI is also in the image creation business with DALL-E 2. It is what is known as a diffusion model, trained by injecting noise into images and learning to reverse the process (it’s stochastic, I guess). Likewise, as my ChatGPT blog writer asserted in that sample paragraph above, music composition, code generation, and many other tasks can easily be accommodated in these models.
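For the curious, the “noise” idea at the heart of these diffusion models can be sketched in a few lines of Python. This is only a toy illustration of the forward (noising) half of the process, with a made-up linear schedule; it is not how DALL-E 2 is actually implemented:

```python
import math
import random

def add_noise(pixel, t, num_steps=1000):
    """Toy forward-diffusion step: blend a clean pixel value with
    Gaussian noise. The later the step t, the less of the original
    signal survives. (The linear schedule here is illustrative only.)
    """
    alpha = 1.0 - t / num_steps           # fraction of signal remaining at step t
    noise = random.gauss(0.0, 1.0)        # fresh standard Gaussian noise
    return math.sqrt(alpha) * pixel + math.sqrt(1.0 - alpha) * noise
```

At step 0 the pixel comes back unchanged; by the final step it is pure noise. Training a real diffusion model amounts to learning to run this corruption in reverse, denoising step by step until an image emerges.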
So, what is human creativity anyway? What we call “imagination” in human beings is not just a stimulus/response process of interaction with large quantities of data. What philosophers of aesthetics call “frisson” must be felt, on an emotional level, by both artist and audience. There may be mathematical models for this, but the current flurry of generative AI models notably lack any semblance of this frisson. Their music or art may be pleasant to experience, their writing easy or informative to read, perhaps even funny or entertaining. But can it surprise? Can it excite? Can it generate anger or sadness or joy?
Until it crosses that line, AI will have no hope of replacing human creativity. Yes, it can augment that creativity. It can be the servant, but not the master, of the creative person. Of course, it may only be a matter of time before that threshold is crossed. We’ll see what GPT-4 is like later this year. And Microsoft, Meta, Alphabet, and other tech giants are apparently “betting the ranch” on something huge coming soon. (OpenAI, itself, is on the verge of becoming a de facto Microsoft subsidiary after the just-announced $10B investment from the Redmond megalith.) In the U.S. Congress, there is an “AI Caucus” (with new members and leadership in the House for the 118th Congress, but likely including my Congressman, Don Beyer of VA). There are, of course, big economic concerns here regarding employment, investment, and national security. Indeed, something like Asimov’s 80-year-old Three Laws may need an enforcement mechanism as we go forward. The laws:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In the near term, it’s easy to see ChatGPT replacing Google search due to its superior natural language interface. But asking it to write a blog post or a novel seems, in January 2023, to remain a stretch. And be sure to specify in your prompt that you want sources cited! I even tried asking it to “rank 10 best published sources on the issue of ‘creativity’ in generative AI” – it dutifully gave me 10 sources numbered 1 through 10. But how did it rank them?
My experience with ChatGPT left me with one sobering thought. Hidden under the hood of these large language models and diffusion models (or, for that matter, even relatively primitive algorithms like those driving Facebook or Twitter), is there anything to hold the generators of these models accountable? This problem goes even further than Asimov’s laws. We must ask: who is the arbiter of the quality of this machine output? Of truth versus falsehood? Are we forced to judge quality ourselves? It seems the roles of reviewers and academics, always subjective, can never be diminished. We’re stuck with them! That means we’re stuck with truth which will ultimately be … subjective!
— William Sundwick