Quick wit, slang, and wordplay are the 0s and 1s of the grand language that is humor. Historically, bards, jesters, and arguably the greatest writer of all time, Shakespeare, each played a part in making their communities chuckle. Now, software developers are trying to teach artificial intelligence how to crack a joke.
Because it can’t think for itself, AI learns by applying formulas (algorithms) to make sense of the data that humans give it. AI has no consciousness of its own, and its developers build in deliberate safeguards to avoid a science-fiction-like end to the world as we know it. These safety buffers, on top of AI’s lack of lived experience, mean that anything it produces is, by nature, detached, making humor difficult to teach.
The complexity of humor is outlined in a research article titled “The First Joke: Exploring the Evolutionary Origins of Humor.” In their article, Joseph Polimeni and Jeffrey P. Reiss write, “Whether something is funny or not is often dependent on nuanced verbal phrasing in combination with a full appreciation of prevailing social dynamics [experience].”
Experience lends itself to context, and context lends itself to good humor.
Take pizza: skip the bread and it’s a dip, skip the sauce and it’s cheese bread, skip the cheese and it’s WeightWatchers. For a bot (devoid of human experiences like pizza or diet culture), this pizza joke just isn’t funny. There needs to be a middleman between joke and bot to explain the significance of the themes discussed in the joke. That middleman is natural language processing (NLP). NLP helps AI develop a human-like understanding of text and speech. For the purpose of understanding humor, there is a subdivision of NLP called sentiment analysis, a study of the ambiguities of language.
The technology corporation IBM describes sentiment analysis as “attempts [by AI] to extract subjective qualities—attitudes, emotions, sarcasm—from text.”
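At its simplest, sentiment analysis can be sketched as a lexicon lookup: count emotionally loaded words and score the text. Real systems (such as VADER or transformer-based classifiers) use far richer features; the tiny word lists below are invented for illustration, not drawn from any standard lexicon.

```python
# Minimal sketch of lexicon-based sentiment analysis.
# The POSITIVE and NEGATIVE word sets are illustrative stand-ins.
POSITIVE = {"funny", "great", "love", "delicious", "good"}
NEGATIVE = {"accident", "terrible", "sad", "broke", "diet"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: above 0 reads positive, below 0 negative."""
    words = [w.strip(".,!?'\"").lower() for w in text.split()]
    hits = (sum(1 for w in words if w in POSITIVE)
            - sum(1 for w in words if w in NEGATIVE))
    return hits / len(words) if words else 0.0

print(sentiment_score("I love a good, funny joke"))  # positive score
print(sentiment_score("A terrible, sad accident"))   # negative score
```

A scorer like this instantly fails on sarcasm (“Oh, great, another flat tire”), which is exactly why the field needs the subtler, context-aware methods IBM alludes to.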
Natural language processing is the knight in shining armor that’s come to save AI from its own monotony. Its noble steed is transfer learning, a process of feeding comically large amounts of data (in the form of things like scripts from late-night shows) to AI training models. The idea is that with enough exposure to experts in the field, AI will be able to recognize patterns and emulate them.
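The transfer-learning idea can be made concrete with a toy model: start from a model “pretrained” on general text, then continue training on a small comedy corpus so domain-specific patterns get folded in. Real transfer learning fine-tunes neural-network weights on large corpora; the unigram frequencies and blending weight below are a deliberately simplified stand-in for the concept.

```python
from collections import Counter

# Toy sketch of transfer learning: pretrain on general text,
# then fine-tune on a small comedy corpus. The corpora and the
# blending weight are illustrative only.
general_corpus = "the cat sat on the mat the dog ran in the park".split()
comedy_corpus = "why did the chicken cross the road to get the punchline".split()

def train(tokens):
    """Build a unigram probability distribution from tokens."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def fine_tune(base, tokens, weight=0.5):
    """Blend the pretrained distribution with the new domain's."""
    tuned = train(tokens)
    vocab = set(base) | set(tuned)
    return {w: (1 - weight) * base.get(w, 0.0) + weight * tuned.get(w, 0.0)
            for w in vocab}

pretrained = train(general_corpus)
adapted = fine_tune(pretrained, comedy_corpus)
```

After fine-tuning, the adapted model assigns probability to comedy-specific words (“punchline”) it never saw in pretraining, while still retaining its general vocabulary (“cat”), which is the essential bargain of transfer learning.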
Joe Toplyn, a comedy writer for late-night TV, has developed Witscript, a bot that utilizes the GPT API, NLP, and prompt chaining to generate original jokes. Toplyn crafted six formulas that anyone can follow to write a joke. Those six were translated into algorithms for Witscript, and correspond to the A–F joke outputs for every user-entered topic sentence.
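Prompt chaining means feeding one model call’s output into the next call’s prompt. Witscript’s actual formulas and prompts are Toplyn’s own; the three-step chain and the stubbed model below are hypothetical stand-ins that only illustrate the plumbing.

```python
# Hedged sketch of prompt chaining. call_model() is a stub standing in
# for a real GPT API call; its canned replies are invented for this demo.
def call_model(prompt: str) -> str:
    if prompt.startswith("List the topics"):
        return "insurance, car accident"
    if prompt.startswith("List associations"):
        return "Allstate slogan: 'you're in good hands'"
    return "Well, I guess she won't be in good hands after all."

def joke_chain(topic_sentence: str) -> str:
    # Step 1: extract the key topics from the user's sentence.
    topics = call_model(f"List the topics in: {topic_sentence}")
    # Step 2: gather cultural associations for those topics.
    assoc = call_model(f"List associations for these topics: {topics}")
    # Step 3: assemble a joke from the sentence plus the associations.
    return call_model(f"Write a joke about '{topic_sentence}' using: {assoc}")

print(joke_chain("My friend got in a car accident and the "
                 "insurance company won't pay her."))
```

Chaining lets each step stay small and checkable: if the punchline falls flat, a developer can inspect which intermediate output (topics or associations) went wrong, rather than debugging one opaque mega-prompt.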
In my trial of Witscript (arranged through an inquiry on the official website), I had Toplyn enter the prompt, “my friend just got in a car accident and the insurance company says it won’t pay her.” The most notable response was: “Well, I guess she won’t be in good hands after all” (playing on Allstate insurance’s company slogan, “you’re in good hands”). Witscript’s response demonstrates a nuanced understanding of human emotions in context and insurance company slogans, a testament to its comic potential.
An ethical problem that stems from successful, publicly available chatbots (like ChatGPT and Witscript), however, is that of regulating their use in academic and professional settings. The central question for establishing guidelines has been: should this technology be embraced or discouraged? The resulting discussions center on copyright law, potential infringement, and the nature of AI training data (scraped from the internet), steering the general consensus on AI use toward minimal or none at all.
A funny bot, though unassuming, is indicative of the vast potential that AI has in commercial use. For example, in the age of a dwindling health care workforce, AI chatbots can take over in online patient-provider-like interactions that can be hard to distinguish from their human counterparts. Though no Adam Sandler (yet), robot comedians have set the bar high for future chatbots to come.