• ASML Sticks to Long-Term Growth Targets Amid AI Frenzy
    www.wsj.com
    ASML said it would maintain its 2030 sales and margin targets, betting that booming demand for artificial intelligence will drive orders for equipment that chip makers need to make increasingly powerful semiconductors.
  • 'The Highest Calling' Review: Inside the Oval Office
    www.wsj.com
    Historians rank presidents one way, the public another. A few presidents surprise skeptics and rise to distinction.
  • Bad Sisters Season Two: What Happens After Regular Women Plot Murder
    www.wsj.com
    The show about five Irish sisters with a deep bond, co-created by Sharon Horgan, returns Wednesday.
  • Trump says Elon Musk will lead DOGE, a new Department of Government Efficiency
    arstechnica.com
    Musk's Department of Government Efficiency to target "massive waste and fraud." Jon Brodkin, Nov 13, 2024
    An image posted by Elon Musk after President-elect Donald Trump announced he will lead a new Department of Government Efficiency, or "DOGE." Credit: Elon Musk
    President-elect Donald Trump today announced that a new Department of Government Efficiency, or "DOGE," will be led by Elon Musk and former Republican presidential candidate Vivek Ramaswamy. Musk and Ramaswamy, who founded pharma company Roivant Sciences, "will pave the way for my Administration to dismantle Government Bureaucracy, slash excess regulations, cut wasteful expenditures, and restructure Federal Agencies," according to the Trump statement on Truth Social.
    DOGE apparently will not be an official federal agency, as Trump said it will provide advice "from outside" of government. But Musk, who has frequently criticized government subsidies despite seeking public money and obtaining various subsidies for his own companies, will apparently have significant influence over spending in the Trump administration. Musk has also had numerous legal disputes with regulators at the agencies that oversee his companies.
    "Republican politicians have dreamed about the objectives of 'DOGE' for a very long time," Trump said. "To drive this kind of drastic change, the Department of Government Efficiency will provide advice and guidance from outside of Government, and will partner with the White House and Office of Management & Budget to drive large scale structural reform, and create an entrepreneurial approach to Government never seen before."
    Musk, the CEO of Tesla and SpaceX and owner of X (formerly Twitter), was quoted in Trump's announcement as saying that DOGE "will send shockwaves through the system, and anyone involved in Government waste, which is a lot of people!"
    Trump's "perfect gift to America"
    Trump's statement said the department, whose name is a reference to the Doge meme, "will drive out the massive waste and fraud which exists throughout our annual $6.5 Trillion Dollars of Government Spending." Trump said DOGE will "liberate our Economy" and that its "work will conclude no later than July 4, 2026" because "a smaller Government, with more efficiency and less bureaucracy, will be the perfect gift to America on the 250th Anniversary of The Declaration of Independence."
    "I look forward to Elon and Vivek making changes to the Federal Bureaucracy with an eye on efficiency and, at the same time, making life better for all Americans," Trump said. Today, Musk wrote that the "world is suffering slow strangulation by overregulation," and that "we finally have a mandate to delete the mountain of choking regulations that do not serve the greater good."
    Musk has been expected to have influence in Trump's second term after campaigning for him. Trump previously vowed to have Musk head a government efficiency commission. "That would essentially give the world's richest man and a major government contractor the power to regulate the regulators who hold sway over his companies, amounting to a potentially enormous conflict of interest," said a New York Times article last month. The Wall Street Journal wrote today that "Musk isn't expected to become an official government employee, meaning he likely wouldn't be required to divest from his business empire."
    Jon Brodkin is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.
  • What did the snowball Earth look like?
    arstechnica.com
    Entire continents, even in the tropics, seem to have been under sheets of ice. John Timmer, Nov 13, 2024
    Artist's impression of what a snowball Earth would look like with our continents in their current configuration. Credit: MARK GARLICK/SCIENCE PHOTO LIBRARY
    By now, it has been firmly established that the Earth went through a series of global glaciations around 600 million to 700 million years ago, shortly before complex animal life exploded in the Cambrian. Climate models have confirmed that, once enough of a dark ocean is covered by reflective ice, it sets off a cooling feedback that turns the entire planet into an icehouse. And we've found glacial material that was deposited off the coasts in the tropics.
    We have an extremely incomplete picture of what these snowball periods looked like, and Antarctic terrain provides different models for what an icehouse continent might look like. But now, researchers have found deposits that they argue were formed beneath a massive ice sheet that was being melted from below by volcanic activity. And, although the deposits are currently in Colorado's Front Range, at the time they resided much closer to the equator.
    In the icehouse
    Glacial deposits can be difficult to identify in deep time. Massive sheets of ice will scour the terrain down to bare rock, leaving behind loosely consolidated bits of rubble that can easily be swept away after the ice is gone. We can spot when that rubble shows up in ocean deposits to confirm there were glaciers along the coast, but rubble can be difficult to find on land.
    That's made studying the snowball Earth periods a challenge.
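The ice-albedo feedback described above can be illustrated with a toy zero-dimensional energy-balance model. This is a minimal sketch under assumed numbers (the albedo values and the crude greenhouse factor are illustrative), not one of the climate models the article refers to:

```python
# Toy 0-D energy-balance model of the ice-albedo feedback.
# All constants below are illustrative assumptions, not values from the article.
SOLAR = 1361.0   # W/m^2, solar constant
SIGMA = 5.67e-8  # W/m^2/K^4, Stefan-Boltzmann constant

def albedo(temp_k):
    """Crude step: an ice-covered planet reflects far more sunlight than open ocean."""
    if temp_k < 260.0:
        return 0.6  # mostly ice
    if temp_k > 290.0:
        return 0.3  # mostly open ocean and land
    return 0.6 - 0.3 * (temp_k - 260.0) / 30.0  # ramp between the two states

def equilibrium_temp(start_k, greenhouse=0.58):
    """Relax toward the balance of absorbed sunlight and emitted infrared;
    `greenhouse` crudely scales down the outgoing flux."""
    t = start_k
    for _ in range(2000):
        absorbed = SOLAR / 4.0 * (1.0 - albedo(t))
        t_balance = (absorbed / (greenhouse * SIGMA)) ** 0.25
        t += 0.1 * (t_balance - t)  # small step toward balance
    return t

# Starting warm stays warm; starting cold locks into a snowball state,
# because ice raises the albedo, which cools the planet, which makes more ice.
warm = equilibrium_temp(300.0)
cold = equilibrium_temp(230.0)
```

Two different starting temperatures settle into two different stable climates; that hysteresis is what makes a snowball state hard to escape once entered.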
    We've got the offshore deposits to confirm coastal ice, and we've got climate models that say the continents should be covered in massive ice sheets, but we've got very little direct evidence. Antarctica gives off mixed messages, too. While there are clearly massive ice sheets, there are also dry valleys, where there's barely any precipitation and there's so little moisture in the air that any ice that makes its way into the valleys sublimates away into water vapor.
    All of which raises questions about what the snowball Earth might have looked like in the continental interiors. A team of US-based geologists think they've found some glacial deposits in the form of what are called the Tavakaiv sandstones in Colorado. These sandstones are found along the Front Range of the Rockies, including areas just west of Colorado Springs. And, if the authors' interpretations are correct, they formed underneath a massive sheet of glacial ice.
    There are lots of ways to form sandstone deposits, and they can be difficult to date because they're aggregates of the remains of much older rocks. But in this case, the Tavakaiv sandstone is interrupted by intrusions of dark-colored rock that contains quartz and large amounts of hematite, a form of iron oxide.
    These intrusions tell us a remarkable number of things. For one, some process must have exerted enough force to drive material into small faults in the sandstone. Hematite only gets deposited under fairly specific conditions, which tells us a bit more. And, most critically, hematite can trap uranium and the lead it decays into, providing a way of dating when the deposits formed.
    Under the snowball
    Depending on which site was being sampled, the hematite produced a range of dates, from as recent as 660 million years ago to as old as 700 million years. That means all of them were formed during what's termed the Sturtian glaciation, which ran from 715 million to 660 million years ago.
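The uranium-lead dating mentioned above boils down to one exponential relationship: radiogenic lead-206 accumulates from uranium-238 at a known rate, so a measured Pb/U ratio implies an age. A minimal sketch, assuming a closed system with no initial lead (the ratio used below is illustrative, not a value from the paper):

```python
import math

LAMBDA_U238 = 1.55125e-10  # decay constant of uranium-238, per year

def u_pb_model_age(pb206_u238):
    """Model age in years from a radiogenic 206Pb/238U atomic ratio,
    assuming no initial lead and no gain or loss since the mineral formed."""
    return math.log(1.0 + pb206_u238) / LAMBDA_U238

# A ratio near 0.115 corresponds to roughly 700 million years, the older
# end of the dates reported for the hematite.
age_years = u_pb_model_age(0.1147)
```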
    At the time, the core of what is now North America was in the equatorial region. So, the Tavakaiv sandstones can provide a window into what at least one continent experienced during the most severe global glaciation of the Cryogenian Period.
    Obviously, a sandstone could be formed from the fine powder that glaciers grind off rock as they flow. The authors argue that the intrusions that led to the hematite are the product of the massive pressure of the ice sheet acting on some liquid water at its base. That, they argue, would be enough to force the water into minor cracks in the deposit, producing the vertical bands of material that interrupt the sandstone.
    There are plenty of ways for there to be liquid water at the base of an ice sheet, including local heating due to friction, the draining of surface melt to the base of the glacier (we're seeing a lot of the latter in Greenland at present), or simply hitting the right combination of pressure and temperature. But hematite deposits are typically formed at elevated temperatures (in the area of 220 C), which isn't consistent with any of these processes.
    Instead, the researchers argue that the hematite comes from geothermal fluids. There are signs of volcanic activity in Idaho dating from this same period, and the researchers suggest that there may have been sporadic volcanism in Colorado related to it. This would create fluids warm enough to carry the iron oxides that ended up deposited as hematite in these sandstones.
    While this provides some evidence that at least one part of a continental interior was covered in ice during the snowball Earth period, that doesn't necessarily apply to all areas of all continents. As Antarctica indicates, dry valleys and massive ice sheets can coexist in close proximity when the conditions are right. But the discovery does provide a window into a key period in the Earth's history that has otherwise been quite difficult to study.
    PNAS, 2024. DOI: 10.1073/pnas.2410759121
    John Timmer is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
  • 12,000-year-old stones may be oldest example of wheel-like tools
    www.newscientist.com
    A perforated pebble from the Nahal Ein Gev II archaeological site, which may be an ancient spindle whorl. Laurent Davin
    A set of 12,000-year-old pierced pebbles excavated in northern Israel may be the oldest known hand-spinning whorls, a textile technology that may have ultimately helped inspire the invention of the wheel.
    Serving as a flywheel at the bottom of a spindle, whorls allowed people to efficiently spin natural fibres into yarns and thread to create clothing and other textiles. The newly discovered stone tools represent early axle-based rotation technology thousands of years before the first carts, says Talia Yashuv at the Hebrew University of Jerusalem. "When you look back to find the first vehicle wheels 6000 years ago, it's not like it just came out of nowhere," she says. "It's important to look at the functional evolution of how transportation and the wheel evolved."
    Yashuv and her colleague Leore Grosman, also at the Hebrew University of Jerusalem, studied 113 partially or fully perforated stones at the Nahal Ein Gev II site, an ancient village just east of the Sea of Galilee. Archaeologists have been uncovering these chalky, predominantly limestone artefacts, probably made from raw pebbles along the nearby seashore, since 1972.
    3D scanning revealed that the holes had been drilled halfway through from each side using a flint hand drill, which, unlike modern drills, leaves a narrow and twisting cone-like shape, says Yashuv. Measuring 3 to 4 centimetres in diameter, the holes generally ran through the pebbles' centre of gravity.
    Drilling from both sides would have helped balance the stone for more stable spinning, says Yashuv. Several of the partially perforated stones had holes that were off-centre, suggesting they might have been errors and thrown out.
    The team suspected that the stones, weighing 9 grams on average, were too heavy and ugly to have been beads and too light and fragile to be used as fishing weights, says Yashuv. Their size, shape and balance around the holes convinced the researchers that the artefacts were spindle whorls.
    To test their hypothesis, the researchers created replica whorls using nearby pebbles and a flint drill. Then they asked Yonit Kristal, a traditional craftsperson, to try spinning flax with them. "She was really surprised that they worked, because they weren't perfectly round," says Yashuv. "But really you just need the perforation to be located at the centre of mass, and then it's balanced and it works."
    If the stones are indeed whorls, that could make them the oldest known spinning whorls, she says. A 1991 study on bone and antler artefacts uncovered what may be 20,000-year-old whorls, she adds, but the researchers who examined them suggested the pieces were probably decorative clothing accents. Even so, it is possible that people were using whorls even earlier, made from wood or other biological materials that would have since deteriorated.
    The finding suggests that people were experimenting with rotation technology thousands of years before inventing the pottery wheel and the cart wheel about 5500 years ago, and that the whorls probably helped lead to those inventions, says Yashuv.
    Carole Cheval at Côte d'Azur University in Nice, France, is less convinced, however. Whorls work more like a top than a wheel, she explains. And while the artefacts might very well be whorls, the study lacks microscopic data that would reveal traces of use, as yarns would have marked the stones over time, Cheval says. Trace analysis was beyond the scope of the current study, says Yashuv.
    Ideally, researchers studying ancient whorls would be skilled in spinning themselves, which the study authors were not, says Cheval. "It really changes the way you think about your archaeological finds," she says.
    Journal reference: PLOS One, DOI: 10.1371/journal.pone.0312007
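The flywheel role described above can be put in rough numbers by treating a whorl as a flat annular disc. A hedged sketch; the mass and radii below are illustrative assumptions, not measurements from the study:

```python
import math

def annulus_inertia(mass_kg, r_outer_m, r_inner_m):
    """Moment of inertia of a flat annular disc about its spin axis:
    I = m/2 * (r_outer^2 + r_inner^2)."""
    return 0.5 * mass_kg * (r_outer_m**2 + r_inner_m**2)

# Assumed, illustrative dimensions loosely matching a ~9 g pebble whorl.
mass = 0.009     # kg
r_outer = 0.02   # m, outer radius of the pebble
r_inner = 0.004  # m, radius of the drilled hole

inertia = annulus_inertia(mass, r_outer, r_inner)

# Rotational energy stored when spun at ~20 revolutions per second; this
# stored rotation is what keeps the spindle turning between flicks.
omega = 20.0 * 2.0 * math.pi  # rad/s
energy_j = 0.5 * inertia * omega**2
```

The balance point the researchers highlight matters because a hole away from the centre of mass makes the spin axis and the principal axis diverge, so the whorl wobbles instead of spinning smoothly.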
  • We must use genetic technologies now to avert the coming food crisis
    www.newscientist.com
    Food production is responsible for more than a third of greenhouse gas emissions. To get everyone the food they need in a warming world, governments worldwide must invest in securing our food systems. 13 November 2024. Shutterstock/Kzenon
    There are two monumental problems with the world's food system. Firstly, hundreds of millions of people can't afford to buy enough nutritious food to stay healthy. Secondly, it is incredibly destructive. We are still razing rainforests to make way for ranches, and both conventional and organic farms produce all kinds of pollutants, with food systems generating more than a third of greenhouse gases.
    As the world soars past a 1.5°C rise in temperature (see "2024 is set to be the first year that breaches the 1.5°C warming limit"), things could get much worse. But there is plenty we can do, from eating less meat to reducing food waste (see "Is the climate change food crisis even worse than we imagined?"). With the amazing advances in genetic technologies in recent years, there is also huge scope to improve the plants and animals that provide our food. We can make them more nutritious, healthier, better able to cope with changing conditions and less susceptible to diseases that are thriving as the world warms. We should also be able to create plants that need less fertiliser and capture more of the sun's energy.
    The benefits from all this would be enormous: more food from less land, lower prices, reduced greenhouse gas emissions and less chance of viruses such as H5N1 bird flu causing another pandemic. So it is astounding that most countries aren't investing heavily in improving crops. There is some private investment, but those companies are unlikely to make their technologies freely available, slowing their adoption.
    We are also restricted by the notion that more "natural" means of farming are better, with opposition to genetically modified (GM) crops making it difficult and expensive to get them approved. This is starting to change, with many countries making it easier for gene-edited crops and animals to get to market, but we need more action, and fast.
    The idea that organic food is better for the planet and GM foods are worse for it is a false narrative that hides a much more unpalatable reality: that continuing as we are will lead to even more destruction and increased hunger.
  • The AI lab waging a guerrilla war over exploitative AI
    www.technologyreview.com
    Ben Zhao remembers well the moment he officially jumped into the fight between artists and generative AI: when one artist asked for AI bananas. A computer security researcher at the University of Chicago, Zhao had made a name for himself by building tools to protect images from facial recognition technology. It was this work that caught the attention of Kim Van Deun, a fantasy illustrator who invited him to a Zoom call in November 2022 hosted by the Concept Art Association, an advocacy organization for artists working in commercial media.
    On the call, artists shared details of how they had been hurt by the generative AI boom, which was then brand new. At that moment, AI was suddenly everywhere. The tech community was buzzing over image-generating AI models, such as Midjourney, Stable Diffusion, and OpenAI's DALL-E 2, which could follow simple word prompts to depict fantasylands or whimsical chairs made of avocados.
    But these artists saw this technological wonder as a new kind of theft. They felt the models were effectively stealing and replacing their work. Some had found that their art had been scraped off the internet and used to train the models, while others had discovered that their own names had become prompts, causing their work to be drowned out online by AI knockoffs.
    Zhao remembers being shocked by what he heard. "People are literally telling you they're losing their livelihoods," he told me one afternoon this spring, sitting in his Chicago living room. "That's something that you just can't ignore." So on the Zoom, he made a proposal: What if, hypothetically, it was possible to build a mechanism that would help mask their art to interfere with AI scraping?
    "I would love a tool that if someone wrote my name and made a prompt, like, garbage came out," responded Karla Ortiz, a prominent digital artist. "Just, like, bananas or some weird stuff." That was all the convincing Zhao needed: the moment he joined the cause.
    Fast-forward to today, and millions of artists have deployed two tools born from that Zoom: Glaze and Nightshade, which were developed by Zhao and the University of Chicago's SAND Lab (an acronym for "security, algorithms, networking, and data").
    Arguably the most prominent weapons in an artist's arsenal against nonconsensual AI scraping, Glaze and Nightshade work in similar ways: by adding what the researchers call "barely perceptible" perturbations to an image's pixels so that machine-learning models cannot read them properly. Glaze, which has been downloaded more than 6 million times since it launched in March 2023, adds what's effectively a secret cloak to images that prevents AI algorithms from picking up on and copying an artist's style. Nightshade, which I wrote about when it was released almost exactly a year ago this fall, cranks up the offensive against AI companies by adding an invisible layer of "poison" to images, which can break AI models; it has been downloaded more than 1.6 million times.
    "Thanks to the tools, I'm able to post my work online," Ortiz says, "and that's pretty huge." For artists like her, being seen online is crucial to getting more work. If they are uncomfortable about ending up in a massive for-profit AI model without compensation, the only option is to delete their work from the internet. That would mean career suicide. "It's really dire for us," adds Ortiz, who has become one of the most vocal advocates for fellow artists and is part of a class action lawsuit against AI companies, including Stability AI, over copyright infringement.
    But Zhao hopes that the tools will do more than empower individual artists. Glaze and Nightshade are part of what he sees as a battle to slowly tilt the balance of power from large corporations back to individual creators. "It is just incredibly frustrating to see human life be valued so little," he says with a disdain that I've come to see as pretty typical for him, particularly when he's talking about Big Tech.
    "And to see that repeated over and over, this prioritization of profit over humanity, it is just incredibly frustrating and maddening."
    As the tools are adopted more widely, his lofty goal is being put to the test. Can Glaze and Nightshade make genuine security accessible for creators, or will they inadvertently lull artists into believing their work is safe, even as the tools themselves become targets for haters and hackers? While experts largely agree that the approach is effective and Nightshade could prove to be powerful poison, other researchers claim they've already poked holes in the protections offered by Glaze and that trusting these tools is risky. But Neil Turkewitz, a copyright lawyer who used to work at the
    Poking the bear
    The SAND Lab is tight knit, encompassing a dozen or so researchers crammed into a corner of the University of Chicago's computer science building. That space has accumulated somewhat typical workplace detritus: a Meta Quest headset here, silly photos of dress-up from Halloween parties there. But the walls are also covered in original art pieces, including a framed painting by Ortiz.
    Years before fighting alongside artists like Ortiz against "AI bros" (to use Zhao's words), Zhao and the lab's co-leader, Heather Zheng, who is also his wife, had built a record of combating harms posed by new tech.
    When I visited the SAND Lab in Chicago, I saw how tight knit the group was. Alongside the typical workplace stuff were funny Halloween photos like this one. (Front row: Ronik Bhaskar, Josephine Passananti, Anna YJ Ha, Zhuolin Yang, Ben Zhao, Heather Zheng. Back row: Cathy Yuanchen Li, Wenxin Ding, Stanley Wu, and Shawn Shan.) COURTESY OF SAND LAB
    Though both earned spots on MIT Technology Review's 35 Innovators Under 35 list for other work nearly two decades ago, when they were at the University of California, Santa Barbara (Zheng in 2005 for cognitive radios and Zhao a year later for peer-to-peer networks), their primary research focus has become security and privacy. The pair left Santa Barbara in 2017, after they were poached by the new co-director of the University of Chicago's Data Science Institute, Michael Franklin. All eight PhD students from their UC Santa Barbara lab decided to follow them to Chicago too. Since then, the group has developed a "bracelet of silence" that jams the microphones in AI voice assistants like the Amazon Echo. It has also created a tool called Fawkes, "privacy armor," as Zhao put it in a 2020 interview with the New York Times, that people can apply to their photos to protect them from facial recognition software. They've also studied how hackers might steal sensitive information through stealth attacks on virtual-reality headsets, and how to distinguish human art from AI-generated images.
    "Ben and Heather and their group are kind of unique because they're actually trying to build technology that hits right at some key questions about AI and how it is used," Franklin tells me. "They're doing it not just by asking those questions, but by actually building technology that forces those questions to the forefront."
    It was Fawkes that intrigued Van Deun, the fantasy illustrator, two years ago; she hoped something similar might work as protection against generative AI, which is why she extended that fateful invite to the Concept Art Association's Zoom call. That call started something of a mad rush in the weeks that followed.
    Though Zhao and Zheng collaborate on all the lab's projects, they each lead individual initiatives; Zhao took on what would become Glaze, with PhD student Shawn Shan (who was on this year's Innovators Under 35 list) spearheading the development of the program's algorithm.
    In parallel to Shan's coding, PhD students Jenna Cryan and Emily Wenger sought to learn more about the views and needs of the artists themselves. They created a user survey that the team distributed to artists with the help of Ortiz. In replies from more than 1,200 artists, far more than the average number of responses to user studies in computer science, the team found that the vast majority of creators had read about art being used to train models, and 97% expected AI to decrease some artists' job security. A quarter said AI art had already affected their jobs. Almost all artists also said they posted their work online, and more than half said they anticipated reducing or removing that online work, if they hadn't already, no matter the professional and financial consequences.
    The first scrappy version of Glaze was developed in just a month, at which point Ortiz gave the team her entire catalogue of work to test the model on. At the most basic level, Glaze acts as a defensive shield. Its algorithm identifies features from the image that make up an artist's individual style and adds subtle changes to them. When an AI model is trained on images protected with Glaze, the model will not be able to reproduce styles similar to the original image.
    A painting from Ortiz later became the first image publicly released with Glaze on it: a young woman, surrounded by flying eagles, holding up a wreath. Its title is Musa Victoriosa, "victorious muse." It's the one currently hanging on the SAND Lab's walls.
    Despite many artists' initial enthusiasm, Zhao says, Glaze's launch caused significant backlash. Some artists were skeptical because they were worried this was a scam or yet another data-harvesting campaign.
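The "barely perceptible perturbations" both tools rely on can be illustrated in miniature. The sketch below shows only the budget idea, keeping every pixel change below a small threshold; Glaze's actual algorithm optimizes the perturbation against a style-feature extractor, which is not reproduced here, and the epsilon value is an arbitrary assumption:

```python
import numpy as np

def apply_bounded_perturbation(image, perturbation, epsilon=4.0):
    """Clip the perturbation so no pixel moves by more than `epsilon`
    (on a 0-255 scale), then add it and clamp back to valid pixel values."""
    bounded = np.clip(perturbation, -epsilon, epsilon)
    shifted = image.astype(np.float64) + bounded
    return np.clip(shifted, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Stand-in for a computed perturbation; a real tool would optimize this
# against a model's feature space rather than draw random noise.
delta = rng.normal(0.0, 8.0, size=img.shape)
cloaked = apply_bounded_perturbation(img, delta)

# No pixel has moved by more than the visibility budget.
max_change = int(np.abs(cloaked.astype(int) - img.astype(int)).max())
```

The tension the researchers navigate lives in that epsilon: small enough that a human sees the same picture, yet structured enough that a model's learned features are pushed somewhere else.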
The lab had to take several steps to build trust, such as offering the option to download the Glaze app so that it adds the protective layer offline, which meant no data was being transferred anywhere. (The images are then shielded when artists upload them.) Soon after Glazes launch, Shan also led the development of the second tool, Nightshade. Where Glaze is a defensive mechanism, Nightshade was designed to act as an offensive deterrent to nonconsensual training. It works by changing the pixels of images in ways that are not noticeable to the human eye but manipulate machine-learning models so they interpret the image as something different from what it actually shows. If poisoned samples are scraped into AI training sets, these samples trick the AI models: Dogs become cats, handbags become toasters. The researchers say only a relatively few examples are enough to permanently damage the way a generative AI model produces images. Currently, both tools are available as free apps or can be applied through the projects website. The lab has also recently expanded its reach by offering integration with the new artist-supported social network Cara, which was born out of a backlash to exploitative AI training and forbids AI-produced content. In dozens of conversations with Zhao and the labs researchers, as well as a handful of their artist-collaborators, its become clear that both groups now feel they are aligned in one mission. I never expected to become friends with scientists in Chicago, says Eva Toorenent, a Dutch artist who worked closely with the team on Nightshade. Im just so happy to have met these people during this collective battle. Images online of Toorenent's Belladonna have been treated with the SAND Lab's Nightshade tool.EVA TOORENENT Her painting Belladonna, which is also another name for the nightshade plant, was the first image with Nightshades poison on it. Its so symbolic, she says. 
People taking our work without our consent, and then taking our work without consent can ruin their models. Its just poetic justice. No perfect solution The reception of the SAND Labs work has been less harmonious across the AI community. After Glaze was made available to the public, Zhao tells me, someone reported it to sites like VirusTotal, which tracks malware, so that it was flagged by antivirus programs. Several people also started claiming on social media that the tool had quickly been broken. Nightshade similarly got a fair share of criticism when it launched; as TechCrunch reported in January, some called it a virus and, as the story explains, another Reddit user who inadvertently went viral on X questioned Nightshades legality, comparing it to hacking a vulnerable computer system to disrupt its operation. We had no idea what we were up against, Zhao tells me. Not knowing who or what the other side could be meant that every single new buzzing of the phone meant that maybe someone did break Glaze. Both tools, though, have gone through rigorous academic peer review and have won recognition from the computer security community. Nightshade was accepted at the IEEE Symposium on Security and Privacy, and Glaze received a distinguished paper award and the 2023 Internet Defense Prize at the Usenix Security Symposium, a top conference in the field. In my experience working with poison, I think [Nightshade is] pretty effective, says Nathalie Baracaldo, who leads the AI security and privacy solutions team at IBM and has studied data poisoning. I have not seen anything yetand the word yet is important herethat breaks that type of defense that Ben is proposing. And the fact that the team has released the source code for Nightshade for others to probe, and it hasnt been broken, also suggests its quite secure, she adds. At the same time, at least one team of researchers does claim to have penetrated the protections of Glaze, or at least an old version of it. 
As researchers from Google DeepMind and ETH Zurich detailed in a paper published in June, they found various ways Glaze (as well as similar but less popular protection tools, such as Mist and Anti-DreamBooth) could be circumvented using off-the-shelf techniques that anyone could accesssuch as image upscaling, meaning filling in pixels to increase the resolution of an image as its enlarged. The researchers write that their work shows the brittleness of existing protections and warn that artists may believe they are effective. But our experiments show they are not. Florian Tramr, an associate professor at ETH Zurich who was part of the study, acknowledges that it is very hard to come up with a strong technical solution that ends up really making a difference here. Rather than any individual tool, he ultimately advocates for an almost certainly unrealistic ideal: stronger policies and laws to help create an environment in which people commit to buying only human-created art. What happened here is common in security research, notes Baracaldo: A defense is proposed, an adversary breaks it, andideallythe defender learns from the adversary and makes the defense better. Its important to have both ethical attackers and defenders working together to make our AI systems safer, she says, adding that ideally, all defenses should be publicly available for scrutiny, which would both allow for transparency and help avoid creating a false sense of security. (Zhao, though, tells me the researchers have no intention to release Glazes source code.) Still, even as all these researchers claim to support artists and their art, such tests hit a nerve for Zhao. In Discord chats that were later leaked, he claimed that one of the researchers from the ETH ZurichGoogle DeepMind team doesnt give a shit about people. (That researcher did not respond to a request for comment, but in a blog post he said it was important to break defenses in order to know how to fix them. 
Zhao says his words were taken out of context.) Zhao also emphasizes to me that the paper's authors mainly evaluated an earlier version of Glaze; he says its new update is more resistant to tampering. Messing with images that carry current Glaze protections would harm the very style that is being copied, he says, making such an attack useless.

This back-and-forth reflects a significant tension in the computer security community and, more broadly, the often adversarial relationship between different groups in AI. Is it wrong to give people the feeling of security when the protections you've offered might break? Or is it better to have some level of protection, one that raises the threshold for an attacker to inflict harm, than nothing at all?

Yves-Alexandre de Montjoye, an associate professor of applied mathematics and computer science at Imperial College London, says there are plenty of examples where similar technical protections have failed to be bulletproof. For example, in 2023, de Montjoye and his team probed a digital mask for facial recognition algorithms, which was meant to protect the privacy of medical patients' facial images; they were able to break the protections by tweaking just one thing in the program's algorithm (which was open source).

Using such defenses is still sending a message, he says, and adding some friction to data profiling. Tools such as TrackMeNot, which protects users from data profiling, have been presented as a way to protest, a way to say "I do not consent." But at the same time, he argues, "we need to be very clear with artists that it is removable and might not protect against future algorithms."

While Zhao will admit that the researchers pointed out some of Glaze's weak spots, he unsurprisingly remains confident that Glaze and Nightshade are worth deploying, given that security tools are never perfect.
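The upscaling-based circumvention that the Google DeepMind and ETH Zurich team described can be illustrated with a minimal sketch: resampling an image interpolates new pixel values, which tends to average away the small, near-invisible perturbations that protection tools rely on. This toy grayscale example is my own illustration, under those stated assumptions; it is not code from the attack paper or from the Glaze project.

```python
# Toy illustration of "purification by resampling": downscale then upscale,
# so interpolation smooths out a small per-pixel perturbation.

def downscale_2x(img):
    """Average each 2x2 block into one pixel (a simple box filter)."""
    h, w = len(img), len(img[0])
    return [
        [
            (img[2*y][2*x] + img[2*y][2*x+1]
             + img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4
            for x in range(w // 2)
        ]
        for y in range(h // 2)
    ]

def upscale_2x(img):
    """Nearest-neighbour upscale back to the original size ("filling in" pixels)."""
    return [
        [img[y // 2][x // 2] for x in range(len(img[0]) * 2)]
        for y in range(len(img) * 2)
    ]

# A flat grayscale patch plus one small adversarial-style tweak.
clean = [[100.0] * 4 for _ in range(4)]
perturbed = [row[:] for row in clean]
perturbed[1][1] += 8.0  # near-invisible perturbation

restored = upscale_2x(downscale_2x(perturbed))
# The round trip spreads the tweak across its 2x2 block,
# shrinking it from 8 to 2 units.
print(restored[1][1])  # 102.0
```

Real attacks use far more sophisticated resampling and diffusion-based reconstruction, but the principle is the same: any transform that re-estimates pixels from their neighbors erodes a perturbation that lives in fine pixel-level detail.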
Indeed, as Baracaldo points out, the Google DeepMind and ETH Zurich researchers showed how a highly motivated and sophisticated adversary will almost certainly always find a way in. "Yet it is simplistic to think that if you have a real security problem in the wild and you're trying to design a protection tool, the answer should be it either works perfectly or don't deploy it," Zhao says, citing spam filters and firewalls as examples. Defense is a constant cat-and-mouse game. And he believes most artists are savvy enough to understand the risk.

Offering hope

The fight between creators and AI companies is fierce. The current paradigm in AI is to build bigger and bigger models, and there is, at least currently, no getting around the fact that they require vast data sets hoovered from the internet to train on. Tech companies argue that anything on the public internet is fair game, and that it is impossible to build advanced AI tools without copyrighted material; many artists argue that tech companies have stolen their intellectual property.

So far, the creatives aren't exactly winning. A number of companies have already replaced designers, copywriters, and illustrators with AI systems. In one high-profile case, Marvel Studios used AI-generated imagery instead of human-created art in the title sequence of its 2023 TV series Secret Invasion. In another, a radio station fired its human presenters and replaced them with AI. The technology has become a major bone of contention between unions and film, TV, and creative studios, most recently leading to a strike by video-game performers. There are numerous ongoing lawsuits by artists, writers, publishers, and record labels against AI companies. It will likely take years until there is a clear-cut legal resolution. But even a court ruling won't necessarily untangle the difficult ethical questions created by generative AI.
That's why Zhao and Zheng see Glaze and Nightshade as necessary interventions: tools to defend original work, attack those who would help themselves to it, and, at the very least, buy artists some time. Having a perfect solution is not really the point. The researchers need to offer something now because the AI sector's breakneck pace, Zheng says, means that companies are ignoring very real harms to humans. "This is probably the first time in our entire technology careers that we actually see this much conflict," she adds.

On a much grander scale, she and Zhao tell me they hope that Glaze and Nightshade will eventually have the power to overhaul how AI companies use art and how their products produce it. It is eye-wateringly expensive to train AI models, and it's extremely laborious for engineers to find and purge poisoned samples in a data set of billions of images. Theoretically, if there are enough Nightshaded images on the internet and tech companies see their models breaking as a result, it could push developers to the negotiating table to bargain over licensing and fair compensation.

That's, of course, still a big "if." MIT Technology Review reached out to several AI companies, such as Midjourney and Stability AI, which did not reply to requests for comment. A spokesperson for OpenAI, meanwhile, did not confirm any details about encountering data poison but said the company takes the safety of its products seriously and is continually improving its safety measures: "We are always working on how we can make our systems more robust against this type of abuse."

In the meantime, the SAND Lab is moving ahead and looking into funding from foundations and nonprofits to keep the project going. They say there has also been interest from major companies looking to protect their intellectual property (though they decline to say which), and Zhao and Zheng are exploring how the tools could be applied in other industries, such as gaming, videos, or music.
In the meantime, they plan to keep updating Glaze and Nightshade to be as robust as possible, working closely with the students in the Chicago lab, where, on another wall, hangs Toorenent's Belladonna. The painting has a heart-shaped note stuck to the bottom right corner: "Thank you! You have given hope to us artists."

This story has been updated with the latest download figures for Glaze and Nightshade.
  • Generative AI taught a robot dog to scramble around a new environment
    www.technologyreview.com
Teaching robots to navigate new environments is tough. You can train them on physical, real-world data taken from recordings made by humans, but that's scarce and expensive to collect. Digital simulations are a rapid, scalable way to teach them to do new things, but the robots often fail when they're pulled out of virtual worlds and asked to do the same tasks in the real one. Now there's a potentially better option: a new system that uses generative AI models.

Researchers used the system, called LucidSim, to train a robot dog in parkour, getting it to scramble over a box and climb stairs even though it had never seen any real-world data. The approach demonstrates how helpful generative AI could be when it comes to teaching robots to do challenging tasks. It also raises the possibility that we could ultimately train them in entirely virtual worlds. The research was presented at the Conference on Robot Learning (CoRL) last week.

"We're in the middle of an industrial revolution for robotics," says Ge Yang, a postdoc at MIT's Computer Science and Artificial Intelligence Laboratory, who worked on the project. "This is our attempt at understanding the impact of these [generative AI] models outside of their original intended purposes, with the hope that it will lead us to the next generation of tools and models."

LucidSim uses a combination of generative AI models to create the visual training data. First the researchers generated thousands of prompts for ChatGPT, getting it to create descriptions of a range of environments that represent the conditions the robot would encounter in the real world, including different types of weather, times of day, and lighting conditions. These included "an ancient alley lined with tea houses and small, quaint shops, each displaying traditional ornaments and calligraphy" and "the sun illuminates a somewhat unkempt lawn dotted with dry patches."
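A prompt-generation step like the one described can be sketched as a simple combinatorial template over environmental conditions. The condition lists and prompt wording below are my own illustrative assumptions; the article does not publish the researchers' actual prompts or templates.

```python
# Sketch: build many environment-description prompts by combining
# weather, time-of-day, and lighting conditions into one template.
from itertools import product

weathers = ["clear", "overcast", "light rain", "fog"]
times_of_day = ["dawn", "noon", "dusk", "night"]
lighting = ["harsh sunlight", "soft diffuse light", "street lamps"]

# Hypothetical template; each filled-in variant would be sent to a
# language model to expand into a full scene description.
template = (
    "Describe a scene a quadruped robot might walk through: "
    "{weather} weather, at {time}, lit by {light}."
)

prompts = [
    template.format(weather=w, time=t, light=l)
    for w, t, l in product(weathers, times_of_day, lighting)
]

print(len(prompts))  # 4 * 4 * 3 = 48 prompt variations
```

Scaling the condition lists (and adding categories such as terrain or architecture) is how a handful of templates can fan out into the thousands of prompts the article mentions.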
These descriptions were fed into a system that maps 3D geometry and physics data onto AI-generated images, creating short videos mapping a trajectory for the robot to follow. The robot draws on this information to work out the height, width, and depth of the things it has to navigate: a box or a set of stairs, for example.

The researchers tested LucidSim by instructing a four-legged robot equipped with a webcam to complete several tasks, including locating a traffic cone or soccer ball, climbing over a box, and walking up and down stairs. The robot performed consistently better than when it ran a system trained on traditional simulations. In 20 trials to locate the cone, LucidSim had a 100% success rate, versus 70% for systems trained on standard simulations. Similarly, LucidSim reached the soccer ball in another 20 trials 85% of the time, versus just 35% for the other system. Finally, when the robot was running LucidSim, it successfully completed all 10 stair-climbing trials, compared with just 50% for the other system.

From left: Phillip Isola, Ge Yang, and Alan Yu. Courtesy of MIT CSAIL.

These results are likely to improve even further in the future if LucidSim draws directly from sophisticated generative video models rather than a rigged-together combination of language, image, and physics models, says Phillip Isola, an associate professor at MIT who worked on the research.

The researchers' approach to using generative AI is a novel one that will pave the way for more interesting new research, says Mahi Shafiullah, a PhD student at New York University who is using AI models to train robots. He did not work on the project. "The more interesting direction I see personally is a mix of both real and realistic imagined data that can help our current data-hungry methods scale quicker and better," he says.
The ability to train a robot from scratch purely on AI-generated situations and scenarios is a significant achievement and could extend beyond machines to more generalized AI agents, says Zafeirios Fountas, a senior research scientist at Huawei specializing in brain-inspired AI. "The term robots here is used very generally; we're talking about some sort of AI that interacts with the real world," he says. "I can imagine this being used to control any sort of visual information, from robots and self-driving cars up to controlling your computer screen or smartphone."

In terms of next steps, the authors are interested in trying to train a humanoid robot using wholly synthetic data, which they acknowledge is an ambitious goal, as bipedal robots are typically less stable than their four-legged counterparts. They're also turning their attention to another new challenge: using LucidSim to train the kinds of robotic arms that work in factories and kitchens. The tasks they have to perform require a lot more dexterity and physical understanding than running around a landscape.

"To actually pick up a cup of coffee and pour it is a very hard, open problem," says Isola. "If we could take a simulation that's been augmented with generative AI to create a lot of diversity and train a very robust agent that can operate in a café, I think that would be very cool."
  • Geometry Nodes-based road generator in Blender

    Here's a demo of a Blender-made WIP road generator developed by 3D Generalist, Automotive Artist, and Animator Ethan Davis. Using the power of Geometry Nodes, the tool lets its user adjust the curvature of the road, automatically add guardrails, and subtly alter the landscape in real time to match the curve.


    #blender3d #b3d #blender #gamedev #indiedev #3dart #3dmodeling #proceduralart