- The AI Art Society And Its Future
- [Writer undefined]
- Sections
- i. Intro and purpose
- ii. Why AI has no place in creative fields
- iii. How to be a compelling activist
- iv. Preventing AI from scraping your data
- v. Disrupting the image data set as a whole
- i. Intro and purpose
- In the anime Serial Experiments Lain, the protagonist Lain Iwakura is drawn in by a larger-than-life entity, similar to the internet, known as "The Wired". Because of her rough life before her interest in computers, she becomes attached to The Wired after being urged to join it, and comes to believe this place is worth more than the real world. However, after she begins acting unlike herself, it is revealed that a duplicate of her is being controlled by those who hold power over The Wired.
- Eventually she questions what is actually real, and wonders whether she may never have existed at all, if our physical existence is only data stored as memories.
- Modern AI image and video generators, along with the tech entrepreneurs who advertise them everywhere, can be compared to the series' villains and to a more dystopian microcosm of The Wired. In the scenario where they reach maximum profit, art is changed forever and the real can no longer be distinguished from what was only assumed to be real.
- And in that future scenario, who is Lain Iwakura?
- There is no realistic answer. Much of the anime is left up to the viewers' interpretations for a reason. And it is absolutely not me, although I feel inspired on a personal level to help creativity retain the position it has always had in society before AI gets advanced enough to rewrite it.
- Having experience with web design and graphic design, and an appreciation of how art has inspired people throughout history, I believe we can set aside the AI vacuum before it is advanced enough to overtake the market for creativity.
- ii. Why AI has no place in creative fields
- The "uncanny valley" effect describes how inhuman entities, such as robots or AI, become unsettling as they begin to closely resemble a human. This suggests humans are subconsciously aware of what is and is not the result of human life. It is why we have not developed household robots that look like real people. Everything from a Roomba to Amazon's Alexa could easily be designed to look human, but their makers have chosen to avoid that. Entrepreneurs have figured out, in AI terms, that a human is less likely to purchase what appears uncanny.
- Art made by AI is the same. The concept applies to everyday objects too, up to a point, once AI begins to depict them. The uncanny valley is reached once AI produces hyper-realistic art that imitates a human's ability to make vivid creations.
- When AI is as developed as it is today, that hyper-realism begins to look like this:
- [ https://x.com/MatthewTheStoat/status/1802297015721738536 ]
- Here, DALL-E 2 created a messy art piece in the oil painting style, while DALL-E 3 went so far beyond realism that its output can no longer be identified as any kind of painting.
- As many observe, the realistic details and vivid background are far ahead of what the previous DALL-E model was creating. However, these same observers agree that the updated version looks corporate and uninspiring.
- This easy method for "producing art" could someday gain enough of a presence to alter the typical art that people see in their daily lives. Some businesses and shops have already used AI in promotional materials.
- If AI art plays out like this, what is the end result? As we already know, many artists who earn their living from commissions would lose clients, since a client who wants a specific piece can now enter a prompt and have AI make it. Typically, someone will not care about the uncanny aspect if the process is free. Traditional artists cannot simply compete in the AI market, because prompting tools were designed for tech newcomers and art newcomers alike.
- Grade-school classrooms may do the same, as the US and various other nations currently face a teacher shortage. If schools need to cut funding, they can easily replace art instruction, among many other subjects, with a classroom in which AI generates the lecture material and students take notes from AI lectures. AI is already advanced enough to carry out lessons with words and visuals.
- This could change how developing minds perceive human life and structure. In and out of school, kids may associate many other things with AI-created visuals depending on how extensive it becomes.
- [ https://en.wikipedia.org/wiki/Supernormal_stimulus#In_psychology ]
- A supernormal stimulus is an exaggerated form of a stimulus that causes a stronger response in the observer.
- Art with exaggerated features has demonstrated this effect throughout history. If the creator were not a human but a machine learning tool whose default setting is "utopian realism, as polished as possible," you would get more of these still life "paintings" [like the example from the tweet] that are a mess of colors with an overly defined subject at the center.
- Evidence suggests junk food is "an exaggerated stimulus to cravings for salt, sugar, and fats," and this engineered food has become more convenient than food with normal proportions of salt and sugar.
- We can see this when people find the natural forms of brand-name foods tasteless after growing accustomed to the corn syrup and canola oil in typical ingredients. Who knows how our visual senses would change in a future dominated by AI art? Visuals have far more effect on the human mind than taste does, so a common fear is that people might lose motivation if they are surrounded by imagery that does not reflect the human mindset.
- iii. How to be a compelling activist
- There is a clear and growing division between "pro-AI" and "anti-AI" users; even half a decade ago, neither side had such strong opinions. Both camps are still in the minority, as many people do not know much about genAI [the image- and video-generating category of artificial intelligence]. Earlier models were less effective at producing prompted image or video material, and the recent advances have only just begun. The mainstream websites have only recently pushed to scrape user-generated content for training AI models.
- From the previous section, one can infer that the anti-AI user is on the human side of history. But how does this help convince the majority of people who do not care about AI and might end up commissioning AI art at some point?
- If this becomes a political topic, the division may become stark enough that politicians start touting genAI as either a solution or a failure. As real people who do not care about winning votes, we have the opportunity to approach people ahead of time and lead conversations about what AI is capable of destroying if it becomes a creative standard. But there must be a way to have these conversations that is effective and does not make us appear offended by the mere existence of AI.
- An experiment was carried out on the anti-AI art website Cara, in which two different profiles were made, one posting memes with pro-AI slogans and the other with anti-AI slogans. The two profiles would often stick to the same large accounts and follow many of the same people over time.
- Most people were angry with the pro-AI account Lainiwakura1 and would often declare they were blocking or reporting it. The other typical response was warning others that an account promoting AI existed, even if said account never posted anything that used AI in any form.
- Then the anti-AI account Lainiwakura0 became a heroic figure, because people were relieved to see the opposing meme account fight back, even if most people did not want meme accounts on Cara, an art site.
- We have seen other forms of activism fail for two reasons. First, people refuse to stand up to the other side and only ask to be left alone. Second, if we stand up by getting offended and trying to "cancel" the other side, we appear to the rest of the world as the oversensitive viewpoint that is wrong because it puts emotions over logic.
- There may not be much hard logic yet to convince neutrals that AI in art is a bad thing, but my previous section has some information, as does this good article that people have been referring to lately:
- https://ludic.mataroa.blog -> "I Will F*cking Piledrive You If You Mention AI Again" [Section I and Section IV]
- In this article's Section I, a data scientist explains how he left the field due to AI proposals not making real-world progress.
- Section IV then compares time wasted building AI models with real-world solutions that can be made by the same intelligent people.
- Another common concern is that genAI can not only create artwork but also simplify the creation of visuals such as illicit content involving humans or animals, content that never deserves to exist even when AI can take the real person out of it.
- https://hsls.libguides.com/c.php?g=1333609&p=9828739
- This resource goes into depth on the other bad potential genAI has: the spread of misinformation on yet another scale that would not be possible without its existence.
- If genAI is not capable of generating a concept that it has never scraped data from before, why should the good outweigh the bad? We already have creative humans to extrapolate and make new ideas, so we will get along fine. Even upgraded AI is not extrapolating and making new concepts yet; it mostly functions as an omniscient pattern-saving machine.
- Censorship is dystopian as well, so neutrals may be concerned about what lawmakers' attempts to control AI might bring. AI fully capable of fabricating false medical statements, videos of speeches, and much more will be noticed by lawmakers, and the resulting censorship will also affect information coming from humans if AI can blend in with human sources.
- It is difficult to find a solution that keeps genAI away from these sorts of creations, because it is really the human behind the prompt who has the bad intentions. Nobody should look forward to a future in which too many of these "bad intentions" got through and laws now censor how quickly any media can enter our daily lives.
- iv. Preventing AI from scraping your data
- A video that went viral on TikTok best explains the process for preventing Meta, the biggest player, from scraping your data: a form to fill out. However, that process is unlikely to work for everyone, because few people know what to say in the form. In your profile settings on Instagram, a form under Privacy Policy called "right to object" is where you can argue why posts from your account should be opted out of AI training.
- Many people have already complained that they heard back and their "right to object" was denied. Others have had trouble accessing the form because it kept asking them to verify even when they were already logged in. The method may still work for some, so it is worth trying for yourself. The way to the form is: Settings > About > Privacy Policy > click "right to object" > fill out the form > confirm with the same email address that is used for that specific Instagram profile. Repeat for Facebook if you have artwork there as well.
- Keep in mind this AI scraping will not begin until the new update on June 26, so there is still time to do this.
- If you still do not trust these sites, or you need a public profile for your work, Cara currently does not allow scraping or AI content to be posted on the site. Inform your audience about what AI is doing and how it hurts your income as an artist, and more people should be willing to try Cara even as observers. If the web hosting is worked out, Cara should be able to handle a continued influx of visitors, as there have already been changes to fix the problem with its original web hosting service.
- Adobe, another important player in AI training, can also be dropped if Photoshop users switch to other editors that have not implemented AI. One such editor is paint.net [getpaint.net], which has been running for two decades and has never turned to AI generative fill, even as many other editing programs moved to it.
- v. Disrupting the image data set as a whole
- Nightshade and Glaze are two programs that alter digital images with slight pixel-level changes so that a human viewer will not see anything different, but an AI that learns on a pixel-by-pixel basis will supposedly be "confused" and scrape a very different set of data from each image.
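- To make the pixel-level idea concrete, here is a toy sketch in Python using Pillow and numpy. It only adds small random noise, which is NOT how Glaze or Nightshade actually work (their perturbations are computed against specific model features); it just shows how an image's raw data can change while it stays visually identical.

    # Toy illustration only: nudge every pixel by a barely visible random amount.
    # This is not Glaze's or Nightshade's algorithm; it just demonstrates the idea
    # of a change that humans cannot see but that alters the raw data AI reads.
    import numpy as np
    from PIL import Image

    def add_small_perturbation(in_path, out_path, strength=2):
        img = np.array(Image.open(in_path).convert("RGB"), dtype=np.int16)
        noise = np.random.randint(-strength, strength + 1, size=img.shape, dtype=np.int16)
        perturbed = np.clip(img + noise, 0, 255).astype(np.uint8)
        # Save as PNG so lossy compression does not wipe out the tiny changes.
        Image.fromarray(perturbed).save(out_path, format="PNG")

    add_small_perturbation("artwork.png", "artwork_perturbed.png")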
- However, some have criticized both programs as "wishful thinking" that cannot change how AI perceives the uncorrupted majority of the image's pixels. Having spoken to Ben Zhao, the lead developer of Glaze, I believe there is sufficient evidence that Glaze can work, although there is no confirmed solution that will last through a future in which genAI keeps upgrading. Taking this from the website itself:
- [ https://glaze.cs.uchicago.edu/what-is-glaze.html ]
- "Unfortunately, Glaze is not a permanent solution against AI mimicry. Systems like Glaze face an inherent challenge of being future-proof (Radiya et al). It is always possible for techniques we use today to be overcome by a future algorithm, possibly rendering previously protected art vulnerable."
- So, what do we do to help bring about a future where AI has no choice but to scrape from a corrupted field of data? For starters, on websites that host many artworks, such as ArtStation, DeviantArt, and Instagram, we can use an approach that corrupts the data on our own terms while Glaze keeps searching for solutions that save us in the long term.
- The solution is as follows:
- First, use any image editor and "mess up" your artworks with distortion and other tools to make them as messy as possible, impossible to compare with anything in your real collection. A formless appearance is key. Save these bad artworks in a separate folder from your real artwork. Then make a new profile on whatever sites [such as those listed above] you wish to use to protest genAI, and post the bad artworks under that profile.
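- If you would rather script this step than click around in an editor, here is a minimal sketch in Python using Pillow. The folder names ("real_art", "decoy_art") and the exact distortions are my own assumptions; swap in whatever makes your decoys look least like your real work.

    # Minimal sketch: batch-distort copies of your art into a separate decoy folder.
    # Folder names and filter choices are illustrative, not a recommended recipe.
    import os
    from PIL import Image, ImageFilter, ImageOps

    SRC, DST = "real_art", "decoy_art"
    os.makedirs(DST, exist_ok=True)

    for name in os.listdir(SRC):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        img = Image.open(os.path.join(SRC, name)).convert("RGB")
        w, h = img.size
        img = img.resize((max(1, w // 8), max(1, h // 8))).resize((w, h))  # heavy pixelation
        img = img.filter(ImageFilter.GaussianBlur(radius=6))  # smear remaining detail
        img = ImageOps.posterize(img, 2)                      # crush the color palette
        img = ImageOps.invert(img)                            # flip the colors entirely
        img.save(os.path.join(DST, "decoy_" + name))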
- For as long as you can maintain it, be more active on the messy art profile, and only use the main art profile for communicating with your clients if needed. This is because the sites notice active users first and hand over to AI training the same data they already collect for advertising and other typical reasons for storing user data.
- To further confuse AI as it trains with data, caption your messy art posts as real objects that have nothing to do with what the post is. Just don't flood tags that real users need to find a specific style of artwork. For example, if you have a highly distorted oil painting of an apple, captioning it as "#fish swimming through ocean" will add a small piece of incorrect data to the knowledge field built up in the AI without the programmer's direct oversight.
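- As a small, hypothetical way to keep those mismatched captions organized before you post (the caption list and the sidecar-file layout here are made up for illustration):

    # Hypothetical sketch: pair each decoy image with a caption that has nothing
    # to do with its contents, so any scraper recording image-text pairs stores
    # a mismatched label. Paste the saved caption in when you upload the post.
    import os
    import random

    DECOY_DIR = "decoy_art"
    UNRELATED_CAPTIONS = [
        "fish swimming through ocean",
        "vintage bicycle leaning on a fence",
        "bowl of ramen on a wooden table",
        "lighthouse at dusk",
    ]

    for name in sorted(os.listdir(DECOY_DIR)):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        caption = random.choice(UNRELATED_CAPTIONS)
        with open(os.path.join(DECOY_DIR, name + ".txt"), "w") as f:
            f.write(caption)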
- If thousands of users did this across many posts, the data set may absorb enough irregular data for the AI to produce some improper art pieces. That would take away from the interest of clients, who would then return to real artists who take commissions.
- -
- Message me on Cara at lainiwakura1, or join the Cara Discord and send me a message at username: anarchistlawful if you have any questions or wish to share your thoughts.
- Lain Iwakura could never be convinced to abandon everything in the real world just because an artificial world spelled out utopia. She wanted to save people from The Wired, as great as it appeared, before it took over everything real. Would you want to follow in her footsteps and help the mission to keep creativity as it has always been? Share this and let's keep the creative process alive.