You may have noticed the social media spark. AI painting technology, which has been around for many years, has been back in the hot seat these past few months.
It is indeed an eventful time in the field of AI painting. Two events have triggered the craze: a painting generated with the AI tool Midjourney won the top prize at the Colorado State Fair, and the Japanese AI painting website mimic, fiercely boycotted by local artists just after its launch was announced, was shut down.
However, the controversy seems to trace back to 2021, when a Stanford University paper proposed a model that improved the algorithms behind today’s AI painting tools, raising both the quality and the efficiency of their output.
Alongside the algorithmic advances, the active open-sourcing of AI painting models has significantly lowered the barrier to participation. For example, Stable Diffusion, built by Stability AI, is now completely open to users, and OpenAI released Dall-E 2 on 28 September with a free trial. A niche art form that not long ago was on the radar only of auction houses and collectors can now be practised by anyone.
The high efficiency of AI tools, the exquisite results they produce, and the explosion of the AI painting craze have drawn artists into this round of discussion. Many are expressing concern about the job losses it may cause.
“Younger artists may decide to use these systems rather than getting guidance from someone more experienced. This pulls them away from their industry predecessors,” Polish game illustrator Grzegorz Rutkowski told TechTarget.
“At the same time, the technology may prompt organisations to stop hiring junior artists and illustrators to create emotionally stimulating visuals,” he added.
Is AI-generated art a threat to the career path of budding artists? The Voice of London sought out several young artists and asked their views on AI painting techniques:
As tricky as it is for young artists, can we currently rely on special rules and laws for digital works to protect the intellectual property (IP) of the original creator?
The status quo is disappointing. The various open-source communities have published clear rules on IP, prohibiting adult, hateful or violent content and requiring users to steer clear of copyrighted material. But as the number of users grows and the barrier to entry drops, violations seem inevitable.
Legislation is even less able to keep pace with rapidly advancing AI technology. Do AI-generated works carry intellectual property rights or copyright? Can images created by AI be used commercially? And how should AI-generated images that break the law be dealt with? Worldwide, the relevant laws and regulations are still at the discussion stage.
Despite the controversy, the organisers of the Colorado State Fair did not penalise the author of the winning entry, as the competition rules did not prohibit the use of AI. They plan to discuss with the art world how entries should be judged next year.
Meanwhile, Pixiv, Japan’s largest online art community, announced a dedicated section for AI painting on the site in response to domestic painters’ backlash against mimic. But the experience of various open-source AI applications suggests that community rules alone are far from enough.
The lack of legal protection for AI-generated content adds to the difficulty of defending intellectual property rights, and governments and institutions should act to fill this gap. With legal guarantees in place, human artists might be able to cooperate with AI within clear boundaries, rather than fall into fragmentation and conflict.
Words: Rui Liang | Subbing: Anna Kamocsai | Video: Rui Liang