Best humans still outperform artificial intelligence in a creative divergent thinking task - Scientific Reports
Creativity has traditionally been considered an ability exclusive to human beings. However, the rapid development of artificial intelligence (AI) has resulted in generative AI chatbots that can produce high-quality artworks, raising questions about the differences between human and machine creativity. In this study, we compared the creativity of humans (n = 256) with that of three current AI chatbots using the alternate uses task (AUT), the most widely used divergent thinking task. Participants were asked to generate uncommon and creative uses for everyday objects. On average, the AI chatbots outperformed human participants. While human responses included poor-quality ideas, the chatbots generally produced more creative responses. However, the best human ideas still matched or exceeded those of the chatbots. While this study highlights the potential of AI as a tool to enhance creativity, it also underscores the unique and complex nature of human creativity that may be difficult to fully replicate or surpass with AI technology. The study provides insights into the relationship between human and machine creativity, which is related to important questions about the future of creative work in the age of AI.
Calling modern AI a “spellchecker” suggests a significant misunderstanding of how these systems actually operate and what they do.
A spellchecker takes input from a human and matches it against a dictionary of known words, suggesting corrections based on how close the typed word is to each known word. Modern spellcheckers can also tokenize a corpus of the device owner’s writing and use it to predict which word is likely to follow the previous one. Most phones do this these days.
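For illustration, here is a minimal sketch of both behaviors in Python using only the standard library; the corpus and dictionary are made-up stand-ins for a real user's typing history:

```python
# Toy spellchecker: proximity-based correction plus bigram next-word
# prediction over the owner's (here, invented) writing history.
import difflib
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept"
words = corpus.split()
dictionary = sorted(set(words))

# 1) Correction by proximity to known words (difflib's similarity ratio
#    stands in for a real edit-distance measure).
print(difflib.get_close_matches("caat", dictionary, n=1))  # ['cat']

# 2) Next-word prediction: count which word follows which in the corpus.
followers = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1

def suggest(prev_word, k=3):
    """Return up to k most frequent followers of prev_word."""
    return [w for w, _ in followers[prev_word].most_common(k)]

print(suggest("the"))  # ['cat', 'mat'] -- 'cat' follows 'the' twice
```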
Modern AI takes a corpus of data, tokenizes it, and feeds those tokens into a neural network trained to determine which token is likely to follow the preceding ones.
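A toy sketch of that next-token loop, with an untrained single-layer “network” standing in for a real model; the weights are random, so the output is gibberish, and only the shape of the computation (tokenize, run the network, get a distribution over the vocabulary, pick a token, repeat) is the point:

```python
# Toy autoregressive next-token loop with random, untrained weights.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]
token_id = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8  # vocabulary size, hidden width

rng = np.random.default_rng(0)
embed = rng.normal(size=(V, D))    # token id -> vector
unembed = rng.normal(size=(D, V))  # vector -> logits over vocabulary

def next_token(ids):
    """Predict a next-token id from the mean of the context embeddings."""
    h = embed[ids].mean(axis=0)           # crude summary of the context
    logits = h @ unembed
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                  # softmax -> probabilities
    return int(probs.argmax())            # greedy pick

ids = [token_id["the"], token_id["cat"]]
for _ in range(4):                        # generate four more tokens
    ids.append(next_token(ids))
print(" ".join(vocab[i] for i in ids))
```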
Graphical AIs do similar work, but there are more variables to adjust when “weighing” what value a pixel is likely to take based on the surrounding pixel values, the noise present in that seed, and the other inputs. The corpus in this case is a library of digital graphical works interpreted as graphical data (e.g., a matrix of pixel color values). Sound AIs work similarly, but with digitized sound as the data.
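A loose sketch of one denoising-style step in that spirit; `predict_noise` here is a hypothetical placeholder for a trained network, crudely treating each pixel's deviation from its local neighborhood mean as the “noise” to remove:

```python
# Toy diffusion-flavored loop: start from random noise and repeatedly
# subtract a fraction of the "predicted" noise. The placeholder predictor
# just smooths toward the 3x3 neighborhood mean, so this only illustrates
# the mechanics, not a real generative model.
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))  # pure noise: the "seed"

def predict_noise(x):
    """Hypothetical stand-in for a trained noise-prediction network:
    estimate each pixel's noise as its deviation from the 3x3 local mean."""
    n = x.shape[0]
    padded = np.pad(x, 1, mode="edge")
    local_mean = sum(
        padded[dy:dy + n, dx:dx + n] for dy in range(3) for dx in range(3)
    ) / 9.0
    return x - local_mean

for step in range(10):               # remove a bit of predicted noise each step
    image = image - 0.5 * predict_noise(image)

print(image.round(2))
```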
What do I misunderstand?
You misunderstand how the sheer magnitude and scale of the process makes it different.
Sorry. BIG spellchecker.
LARGE language model.
YUGE AI boi.
Better words. Better pictures. Better sound.
Totally different.
(/S)