People who know more about AI find AI art less ethical
When people understand the systems and processes behind AI art, its ethical implications become harder to accept

A year ago, at Christie’s auction house in New York City, auctioneers sold an unusual collection of artworks: surreal paintings, photorealistic images and cartoon-inspired compositions, all generated by artificial intelligence. The sale, the first of its kind, drew a strong reaction. More than 6,000 artists protested that the AI models used to create these works had been trained on copyrighted images without the creators’ consent. The auction house argued that the works demonstrated “human agency in the age of AI”, while critics saw the event as an example of an industry rushing to commercialize technology built on uncompensated creative labour.
Other artistic and professional communities are concerned as well. A report released last November found that more than half of the novelists surveyed in Britain thought AI could destroy their careers. And it seems that audiences have complicated feelings about the technology, too. One survey found that many Americans accept AI as a tool for creative professionals, but not as a replacement for their work.
However, a viewer’s comfort with AI art may depend on how much they know about how it is created. I study neuroaesthetics, a field that connects neuroscience, psychology, and our perception of beauty and art. My colleagues and I have found that the more people learn about the back end of AI – the datasets, the training process, the prompts – the less comfortable they are with the ethics of these creations and the value of AI-generated pieces.
I became curious about AI because its rapid spread through the art world began to highlight the gap between what the technology is and what people know about it. Previous research has shown that people rate AI art lower on creativity, monetary value and emotional depth. And in my own work, I had studied how knowledge about art changes the way we see it. This led me to wonder whether knowledge about AI shapes people’s judgments of AI-generated art and might help explain the bias often seen against it. To investigate, my colleagues and I ran three experiments, each with 100 participants. We started by presenting people with AI-generated art images and asking questions about their ethical and aesthetic value. For example, participants in two of these experiments had to evaluate how ethically acceptable it was to use AI to produce such art, to earn money or reputation from these works, and to label them as traditional art. People also had to rate how much they aesthetically appreciated the images we presented.
In the first experiment, we showed our participants 20 seascape and 20 landscape paintings generated with DALL-E 3, using prompts based on the Impressionist art of Spanish painter Joaquín Sorolla. Half of the participants viewed this AI art without any additional context. The other half received a brief lesson that gave them more information. It read:
“This image was generated by an AI algorithm that produces images from text descriptions. To accomplish this, several steps are required. First, the AI algorithm is trained on large datasets of art images and their associated text descriptors, such as the artist’s name. Then the AI algorithm is able to generate new images based on various textual cues (for example, the artist’s name, the artistic style, or whether the image depicts seascapes, landscapes or people).”
The additional information made a difference. When people learned how AI systems operate, they viewed the AI art images as less ethically acceptable, especially when financial gain or artistic recognition was involved. But their ratings of the images’ aesthetic appeal did not change, which suggests that learning how AI works shifts people’s ethical judgments rather than their aesthetic ones.
Psychologists have found that people’s judgments about what is good or valuable can change when they learn that something has earned awards or praise from experts. Authority bias, for example, makes us more inclined to agree with those who appear to be in charge or knowledgeable. And markers of success, such as reputation, can lead people to see something as more morally good. In our second experiment, we told a group of participants that certain AI art images were being displayed, sold or admired. But we were surprised to find that sharing a work’s success did not improve the moral acceptability of these images in the eyes of participants who had learned how the works were created.
In the final experiment, we tested people’s automatic judgments about AI-made versus human-made art. We used a tool from psychology called the go/no-go association task, in which people are asked to very quickly associate one type of cue, such as an image, with another, such as the word “good” or “bad.” In this experiment, we showed participants images (either AI-generated or human-made Impressionist paintings) with category labels (“AI art” or “human art”) on the left and attribute labels (such as “good” or “bad”) on the right. Participants had to click a button if the image and label were aligned and refrain from responding when they were not. The task is completed quickly, over many trials, to capture people’s most immediate associations. We worked with people who had received no additional information about AI, to find out what the average person might think.
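For readers curious about the mechanics, the scoring logic of a go/no-go task can be sketched in a few lines of code. This is a simplified, hypothetical illustration of the general technique, not the authors’ actual experimental software: a trial counts as a “go” trial only when its labels match the block’s target pairing, and a response is correct when the participant presses on go trials and withholds otherwise.

```python
# Minimal sketch of scoring one block of a go/no-go association task.
# The data format and target pairing here are illustrative assumptions,
# not taken from the study described above.
from dataclasses import dataclass

@dataclass
class Trial:
    category: str    # "AI art" or "human art"
    attribute: str   # "good" or "bad"
    responded: bool  # True if the participant pressed the button

def is_correct(trial: Trial, target_category: str, target_attribute: str) -> bool:
    """A 'go' trial is one whose labels match the block's target pairing;
    the correct response is to press on go trials and withhold otherwise."""
    is_go = (trial.category == target_category
             and trial.attribute == target_attribute)
    return trial.responded == is_go

# Example block pairing "AI art" with "good":
trials = [
    Trial("AI art", "good", True),     # go trial, pressed -> correct (hit)
    Trial("AI art", "bad", False),     # no-go trial, withheld -> correct
    Trial("human art", "good", True),  # no-go trial, pressed -> false alarm
]
accuracy = sum(is_correct(t, "AI art", "good") for t in trials) / len(trials)
```

Comparing accuracy (and response speed) between blocks that pair “AI art” with “good” and blocks that pair it with “bad” is what reveals whether people hold an automatic positive or negative association.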
We found no strong automatic tendency to view AI art as inherently better or worse than human art. This finding tells us that people do not yet have the kind of instant, deeply held reactions to AI art that they have to human art. It also suggests that, as our earlier experiments indicated, moral resistance to AI art is something people learn over time.
Overall, when people know how AI works, they assess its ethical implications more carefully. This suggests that educating audiences, artists, curators and policymakers about how the technology works could shape its future in the art world. Artists working with AI tools can help this effort by sharing information about the models, data or prompts they used, and by making clear where their own human hand guided the process. Although such transparency may draw criticism, it can also build credibility and equip people to think critically about the technology.
