WWW.FASTCOMPANY.COM
Weirdcore 2.0's freaky aesthetic is taking over your feed
The sun is bright. A Caucasian grandma sits on the grass. She smiles at the camera. Caresses two small dogs. It's a peaceful summer day. Three seconds later, your brain notices something is off. Her face morphs slowly. Her mouth twitches. Suddenly, the dogs turn into reptiles: yellow Komodo dragons, maybe. One has two heads. They open their mouths wide. Granny starts picking their scales. And then she starts to eat them.

The lovely video is now a nightmare, one that is as real as the original candid shot. I feel uneasy. I feel a pinch of horror. The bizarre tornado doesn't stop there: The woman, now Asian, leans forward as the slimy animals start moving, transforming into a Jet Ski that granny rides into a river, leaving the scene.

I don't know what I just watched, but as I flick my finger up, I go deeper into this Instagram rabbit hole. There are more posts. Some of them are strange satires that play on the idea of the Illuminati controlling the world, featuring everyone from Donald Trump and Vladimir Putin to Kamala Harris and Elon Musk. Others show disgusting monsters that feel too close to reality. All live in the same uncanny valley, one as deep as the Mariana Trench. Suddenly I'm trapped in this Bermuda Triangle of stupid, freakish, and odd, and I can't help but keep looking, feeling awe and disgust at the same time.

[Instagram post by Lucas Miranda | INTELIGÊNCIA ARTIFICIAL (@lucasflame.ai)]

I'm not alone in this twisted dimension. As Oslo-based interdisciplinary visual artist Edmond Yang tells me through Instagram: "At the moment, two of my videos have over 300 million views combined, with an accumulated watch time of nearly 100 years. It's surreal to think I've consumed a century of human attention."

Yang tells me he has been working in visual design and communication for more than two decades. "When I discovered generative AI and the powerful tools it brought with it, a whole new world opened up for me," he says. "These tools allow me to visualize almost any idea in my head, all from my phone." What started with still images has evolved into videos as video generation models have quickly caught up, he says. "I've always loved creating and sharing work, but now I can do it faster and on a much bigger scale, seeing how people react and engage in almost real time."

[Instagram post by The Dor Brothers (@thedorbrothers)]

Yang says he is only one of many artists who are using video AI to explore these unnerving fantasies, all loosely grouped under the #weirdcore tag. TikTok is full of them, too.

Weirdcore 2.0

The earliest documented examples of weirdcore date back to 2016. Its origins remain unclear, but we know that YouTuber DavidCrypt first popularized the term in a now-disappeared video explaining its themes. Weirdcore was framed as a visual and emotional aesthetic that evokes feelings of confusion, nostalgia, and unease through low-quality amateur photography combined with strange phrases and other graphical elements.

It recycled the graphic style of early internet visuals from the late 1990s and early 2000s, which were a product of the technology limitations of those times: primitively shaded 2D and 3D graphics typical of software like CorelDraw, badly compressed imagery, GIFs, and lots of terrible typefaces. All of those things were hammered together with blunt tools like Microsoft Paint. The revival aesthetic became popular in places like Reddit and Tumblr, where it still lives today.
[Instagram post by Ari Kuschnir (@arikuschnir)]

Somehow, the combination of out-of-context elements resulted in compositions that live between familiarity and strangeness. The images of early weirdcore provoked an unusually visceral and personal interpretation, which made many people feel weird in response. Some people perceived these images as eerie or unsettling. Others found them nostalgic. A few experienced a sensation of comfort in the surreal presentations.

Then, with the advent of diffusion artificial intelligence and video creation platforms, artists like Yang took the weirdcore banner and evolved it into new ultrarealistic, sharp-as-vampire-fangs visualizations that can be enjoyed (or suffered through) on social media. But despite the dramatically different aesthetic, the new weirdcore 2.0 shares the same ultimate objective of triggering a visceral response in the viewer.

[Instagram post by Daryl Anselmo (@darylanselmo)]

Daryl Anselmo is a professional art director for games and new media who has worked at EA, Disney, Zynga, and Improbable Worlds. He tells me via email that he doesn't have a specific goal with his weirdcore videos, but he has always been drawn to the idea of benign violation theory, which he says describes how people can't control their laughter in situations of discomfort, or how they laugh as a safety mechanism when their worldview is being threatened. It's a place in which some comedians, like Ricky Gervais, thrive.

"[The fictional show within a show] Itchy & Scratchy in The Simpsons was benign violation theory perfected," Anselmo points out. "I have found that these generative AI tools are kind of perfect for exploring that space and creating that emotional response in the viewer, and social media is probably the best platform to share it on." His videos are purely surreal, and somehow bring me vibes of Chilean filmmaker and artist Alejandro Jodorowsky.

[Instagram post by (Insert): dial up sound effect (@junkboxai)]

For self-described JunkBoxAi artist Mike W, it's all about emotional impact, even if the medium is absurd or surreal. "I draw inspiration from pop culture, the unpredictability of weirdcore, and the humor of unexpected juxtapositions," he tells me via Instagram messaging. His weirdcore leans harder on celebrities and current news, which is another main avenue for weirdcore 2.0. Whether he's placing a celebrity in a ridiculous alternate reality or turning a mundane concept into something dreamlike and unsettling, he tells me, the primary goal of his art is to entertain, spark curiosity, and connect with people, to be a source of joy and surprise: something that makes people pause their scrolling, laugh, or wonder how it was made.

[Instagram post by Edmond Yang (@edmondyang)]

Yang says his weirdcore work helps him push boundaries and challenge himself to explore new ways of storytelling. It's about finding humor in unexpected places. "I categorize my creations into two types: The first is day-to-day, reactive videos inspired by trends, memes, or hype. These are quick, experimental pieces where I play with current cultural moments," he says. Whenever he sees something happening, he thinks about making it hilariously over-the-top or imagining an alternate outcome. For him, the goal is to surprise people and evoke a reaction. The second type is more intricate: "videos with refined concepts that take more time to produce and edit," Yang says.
[Instagram post by Edmond Yang (@edmondyang)]

Other weirdcore creators tend toward absolute horror, like Belgian artist Florian Nackaerts, who focuses on body-horror surrealism.

How the weirdcore sausage is made

Their tools and workflows are all similar. Nackaerts started using generative AI in 2021 with the arrival of DALL-E mini and quickly moved through Stable Diffusion, Midjourney, and, finally, a combination of video generators. He says he animates with Hailuo (a tool favored by most of these artists, developed by a Shanghai-based company) or Kling (another Chinese tool, made by Beijing-based Kuaishou), and sometimes Dream Machine (developed by Portland, Oregon-based Luma Labs).

[Instagram post by Niceaunties (@niceaunties)]

Like other video artists, Nackaerts uses different tools depending on the feel he wants to achieve, as each has its own aesthetic, with Hailuo producing the most realistic imagery. Each of these video tools generates very short clips, about five seconds in length, so they need to be edited into a full-length video. Nackaerts creates voice tracks using the AI text-to-speech and voice-cloning tool ElevenLabs, and the music on Udio. Once generated, his clips and tracks are edited together using a free app on his smartphone. "I make all the videos only with my smartphone from the beginning of the process to the end," Nackaerts tells me.

The artists I spoke with follow similar workflows. Mike W starts by brainstorming a concept (usually "a single absurd or thought-provoking idea, like placing a celebrity in an impossible scenario," he says). From there, he refines the idea into a series of prompts that capture the vibe, composition, and emotional tone he wants. Mike W's quiver of tools includes ComfyUI (a free tool that helps users create any diffusion AI workflow imaginable), Flux (a text-to-image model developed by Black Forest Labs in Germany), NYC-based Runway, and the granddaddy of diffusion AIs, Midjourney, which is a still-image generator used to create starting frames to animate in the other tools. The use of these keyframes is crucial in order to maintain continuity between the clips, which are roughly five to eight seconds long. "I meticulously craft each frame or animation to ensure consistency and style," he says.

[Instagram post by (Insert): dial up sound effect (@junkboxai)]

It's a similar process for Yang, who focuses on creative experimentation. Most of his videos start with a base image that he sends to a video generator, using image-to-video tools to create the clips he needs to assemble his final piece. It's a numbers game. "I often create 20 to 30 variations before landing on the right clip to move forward," he says. Like Nackaerts, he handles everything on his phone, from start to finish, which makes the workflow both efficient and portable.

[Instagram post by Ari Kuschnir (@arikuschnir)]

Anselmo follows a more traditional path that goes from preproduction to production to post. In preproduction he explores an idea with the help of generative AI, seeing what the machine imagines. "Sometimes I have a clear picture and I am trying to force an image generator to bend to my will," he says. "Other times I only just have a loose concept, so I lean into the AI more as a crutch [to] see what kind of journey it wants to take me on."
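Every workflow described above ends at the same mechanical bottleneck: a pile of five-to-eight-second generated clips and an AI-made audio track that have to be stitched into a single reel. None of the artists named a specific editor beyond "a free app" on a phone, so the following is only a minimal sketch of that assembly step, using the open-source Python library MoviePy as a stand-in; the filenames are placeholders, not anyone's actual footage.

```python
# Minimal sketch of the clip-assembly step, assuming MoviePy 1.x
# (pip install moviepy). All filenames are hypothetical placeholders.
from moviepy.editor import AudioFileClip, VideoFileClip, concatenate_videoclips

# A handful of short, separately generated clips (roughly 5-8 seconds each).
clip_files = ["clip_01.mp4", "clip_02.mp4", "clip_03.mp4"]
clips = [VideoFileClip(path) for path in clip_files]

# Stitch the clips back-to-back into one continuous reel.
reel = concatenate_videoclips(clips, method="compose")

# Lay a generated voice or music track under the footage,
# trimmed so it never runs past the final frame.
audio = AudioFileClip("narration.mp3")
audio = audio.subclip(0, min(audio.duration, reel.duration))
reel = reel.set_audio(audio)

# Render the finished reel for upload.
reel.write_videofile("weirdcore_reel.mp4", fps=24)
```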
Once Anselmo has a group of cohesive still images, he sometimes storyboards them into a rough cut before taking the ones he likes most into a video generator to produce his footage. Much like an analog director, he usually does a few takes per image, generating between five and eight minutes of footage per day that will get cut down into a 30-second reel.

[Instagram post by Daryl Anselmo (@darylanselmo)]

"Meanwhile, I'm generating a song, trying to find a sound that vibes with the look of the world," Anselmo says. Once he has all the audiovisual resources complete, he brings them back into the editing suite for a more complete edit, doing postproduction tasks such as color grading, sharpening, film grain, and other effects like pans or zooms. This is all done on his computer, where he also runs the footage through an AI image upscaler/enhancer at the very end, right before he uploads it to his phone for posting. "It usually takes me a couple of hours per day," he says, noting that his goal now is to increase the volume of his output: "I really just want to get some of these stupid ideas out of my head and onto the next one to see what else I can learn from the process."

[Instagram post by Daryl Anselmo (@darylanselmo)]

Now what I want to know is how I can get all these stupid ideas out of my head without having to flick my thumb up one more time to see more.

It may not matter, as this black hole of weird has already reached critical mass. Creators like Yang and the Dor Brothers have made the jump from social networks to regular media, sometimes getting into the news cycle, like when the latter turned politicians into bodega robbers.

[Instagram post by The Dor Brothers (@thedorbrothers)]

They have been creating videos for musicians like Snoop Dogg, as well as ads that have been playing in Times Square. While these are quite far from the disturbing material they put up on social media, you can see that the times are a-changing. A new generation of video artists is coming from the fringe of weirdcore 2.0 into the mainstream. Perhaps ironically, their uncanny-valley art and provocations are doing more than anything else to bring reality crashing down in flames.