WWW.WIRED.COM
Sephora Promo Code: 10% Off In November 2024
Plan your next trip with Hotels.com and score up to 30% off with member-exclusive deals.
-
WWW.NYTIMES.COM
F.B.I. Searches Home of Shayne Coplan, Polymarket Founder
The search involving Shayne Coplan, the founder of Polymarket, known for its presidential election odds, was part of a criminal investigation, three people said.
-
WWW.NYTIMES.COM
Crypto Industry Lobbies Trump and His Allies After Election Wins
As Bitcoin soars to record highs, cryptocurrency executives are maneuvering to influence Donald J. Trump's transition and secure their policy goals.
-
WWW.MACWORLD.COM
Carrier gaffe appears to reveal crucial iOS 18.2 launch date

The all-important iOS 18.2 software update for iPhone is set to launch on December 9, based on an apparent gaffe by a U.K. carrier.

Officially, Apple has announced only that iOS 18.2 will roll out in the month of December. But an additional clue was provided earlier this month when EE sent a notice to customers that its shared number service would no longer be available on MacBooks and iPads from December 9. As MacRumors notes, that's a change expected to take effect with the launch of iOS 18.2, because the second developer beta of the update contains a new EE carrier settings version that removes the toggle related to that feature.

This in itself might not seem entirely conclusive, but the date makes sense in other respects too. It's a Monday, which is when iOS updates are often released, and it gets the update out ahead of the holiday season. The previous Monday is Cyber Monday, which is unlikely to feature such a major release. So, unless Apple reacts to the leak by altering its plans, the chances are that this is the correct date.

And the launch of iOS 18.2 is a reasonably big deal. One of the criticisms of Apple's fall 2024 launch cycle has been that important features weren't available when the new iPhones went on sale. Apple Intelligence didn't appear at all until iOS 18.1, while many of its features have been held back until iOS 18.2. The update is expected to add Visual Intelligence, Genmoji and other AI-based image generation, ChatGPT integration, support for localized English in the U.K., Canada, and Australia, and much more.

Read our guide to the Apple Intelligence rollout for more details of the imminent new features.
-
WWW.MACWORLD.COM
Need a new iPhone? These are the best Black Friday deals you're going to find

Black Friday is the biggest shopping event of the year, and it's often the best time to get a good deal on a new iPhone, accessories, or other Apple gear.

While Black Friday is really just the day after the U.S. Thanksgiving holiday, it has grown into an entire season unto itself, with sales extending several days after and well into December. Officially, Black Friday is Friday, November 29, while Cyber Monday is Monday, December 2. But you'll find deals that start earlier and end later.

We check prices on iPhone models all year round, so at any point you can check out our best iPhone deals article for up-to-date info on the top deals. If you want one of the latest iPhone 16 models, we have separate advice for readers in the U.S. and the U.K., and don't forget our round-up of the best Apple deals, which we also update all year round.

That said, Black Friday is generally expected to bring the best deals of the year. You can find early deals below, as well as some advice on what will be on sale based on what we saw last year. We'll be updating this page imminently, as the discounts start to come in and as Apple announces its own Black Friday 2024 sale.

Black Friday 2024: Apple's shopping event

Every year Apple holds a shopping event from Black Friday (November 29) to Cyber Monday (December 2). However, since Apple rarely discounts its products, the event consists of gift card offers rather than actual savings.

In 2023 you could get gift cards for the following amounts with the following iPhones purchased from Apple.com. This year, the deals will likely apply to the iPhone 14 and iPhone 15, but not the iPhone 16.

iPhone 14: $75/£60 gift card
iPhone 13: $50/£40 gift card

iPhone deals: What to get

Apple's newest iPhones are the iPhone 16 ($799), 16 Plus ($899), 16 Pro ($999), and 16 Pro Max ($1,199). The iPhone 14 ($599) and 14 Plus ($699), and 15 ($699) and 15 Plus ($799) are also still offered for sale, as well as the iPhone SE ($429). You can save some money by buying an older iPhone, but you'll be giving up some newer features, most notably Apple Intelligence. We don't recommend buying an iPhone SE, even if it's free through a carrier, as it's several years old and is due to get an update in early 2025.

iPhone deals from U.S. carriers for Black Friday 2024

These are the deals that are available from the three major U.S. carriers, and they come with a lot of strings attached: you may need to have a specific plan, open a new line, or trade in a device. Most U.S. carrier deals require you to buy the phone on a multi-year installment plan, and then you get the discount as bill credits toward your monthly payment.

AT&T: Get up to $1,000 in bill credits when you buy a new iPhone 16 Pro or Pro Max and trade in your old phone, with a qualifying unlimited plan.
T-Mobile: Get up to $830 in bill credits toward the purchase of any iPhone 16 model when you trade in your old phone and join the Go5G plan.
Verizon: Get up to $100 in bill credits toward any iPhone 16 Pro model with a trade-in and the activation of a new line on the Ultimate Unlimited plan.
Amazon: Amazon has partnered with Boost Infinite, a new carrier owned by Dish that primarily uses T-Mobile and AT&T towers for now. $65/mo gets you a free iPhone 16 Pro and unlimited talk, text, and data.
Restrictions apply.

iPhone deals in the U.K.

This is our pick of the best U.K. iPhone deals in November 2024. For the latest deals on specific models, check the automated price comparison tables below.

John Lewis, iPhone 15 (128GB): £681 (£18 off, RRP £699) plus two-year guarantee
John Lewis, iPhone 15 Plus (128GB): £793.61 (£5.39 off, RRP £799)
Argos, iPhone 15 Pro (128GB): £899 (RRP was £999, DISCONTINUED)
Amazon, iPhone 15 Pro Max (256GB): £1,099 (RRP was £1,199, DISCONTINUED)
Amazon, iPhone 14: £499 (£100 off, RRP £599)
KRCS, iPhone SE (64GB): £375.21 (£53.79 off, RRP £429)
KRCS, iPhone SE (128GB): £424.71 (£54.29 off, RRP £479)
Amazon, iPhone 13 (128GB): £449 (RRP was £499, DISCONTINUED)
Argos, iPhone 13 mini (128GB): £499 (£150 off, RRP was £649, DISCONTINUED)

iPhone accessories are on sale this Black Friday

Many of the best deals are going to be on iPhone accessories: cases, chargers, cables, and the like. There are way too many such products to keep track of, and many small discounts come and go throughout the Black Friday sale. We highlight only the most interesting bargains: products that are rarely discounted, or exceptionally good prices on accessories we recommend.

U.S.
Amazon, MagSafe charger (2m): $34 ($15 off, MSRP $49)
Amazon, AirTag: $19 ($10 off, MSRP $29)
Amazon, AirTag (4-pack): $69 ($30 off, MSRP $99)
Amazon, Belkin BoostCharge Wireless Power Bank: $49 ($10 off)
Amazon, UGreen 2-in-1 magnetic charging station: $29 ($10 off)

U.K.
Amazon U.K., Apple MagSafe Charger: £39 (£6 off)
Amazon U.K., AirTag 4-pack: £95 (£24 off)
Currys, single AirTag: £28.99 (£6.01 off)

You can also find more Apple accessory Black Friday deals.

[Automated price comparison tables for the iPhone 15, 15 Plus, 15 Pro, 15 Pro Max, 14, and SE (third generation), new and refurbished, drawn from over 24,000 stores worldwide.]

Black Friday 2024: Best deals for Apple products

Check out these roundups for the best Apple deals:

Apple Black Friday 2024 sale
Best Black Friday 2024 Apple deals
Best Black Friday 2024 MacBook deals
Best Black Friday 2024 Mac deals
Best Black Friday 2024 AirPods deals
Best Black Friday 2024 Apple Watch deals
Best Black Friday 2024 iPad deals
Best Black Friday 2024 Mac monitor deals
Best Black Friday 2024 SSD and external hard drive deals
Best Black Friday 2024 Apple accessory deals
-
WWW.COMPUTERWORLD.COM
OpenAI's SimpleQA tool for discerning genAI accuracy: right message, wrong messenger

In the ongoing and potentially futile effort by CIOs to squeeze meaningful ROI out of their shiny, new generative AI (genAI) tools, there is no more powerful villain than hallucinations. It is what causes everyone to seriously wonder whether the analysis genAI delivers is valid and usable.

From that perspective, I applaud OpenAI for trying to create a test to determine objective accuracy for genAI tools. But that effort, called SimpleQA, fails enterprise tech decision-makers in two ways. First, OpenAI is the last business any CIO would trust to determine the accuracy of the algorithms it is selling. Would you trust an app that determines the best place to shop from Walmart, Target, or Amazon? Or perhaps a car evaluation tool from Toyota or GM?

The second problem is that SimpleQA focuses on, well, simple stuff. It looks at objective and simple questions that ostensibly have only one correct answer. More to the point, the answer to those questions is easily determined and verified.

That is just not how most enterprises want to use genAI technology. Eli Lilly and Pfizer want it to find new drug combinations to cure diseases. (Sorry, that should be "treat." Treat makes companies money forever. A cure's revenue is large, but it ends far too quickly.) Yes, they would test those treatments afterwards, but that is a lot of wasted effort if genAI is wrong. Costco and Walgreens want to use it to find the most profitable places to build new stores. Boeing wants it to come up with more efficient ways to build aircraft.

Let's delve into what OpenAI created. For starters, here's OpenAI's document. I'll put the company's comments into a better context.

"An open problem in artificial intelligence is how to train models that produce responses that are factually correct." Translation: We figured it would be nice to have it give a correct answer every now and then.

"Language models that generate more accurate responses with fewer hallucinations are more trustworthy and can be used in a broader range of applications." Translation: Call us hippies, if you must, but we brainstormed and concluded that our revenue could be improved if our product actually worked.

Those flippant comments aside, I want to acknowledge that OpenAI makes a good-faith effort here to come up with a basic way to evaluate precision where concrete answers can be ascertained. Setting aside how valuable that is in an enterprise setting, it's a good start.

But instead of creating the test itself, it would have been far more credible if OpenAI had funded a trusted third-party consulting or analyst firm to do the work, with a firm hands-off policy, so IT could trust that the testing was not biased in favor of OpenAI's offerings.

Still, something is better than nothing, so let's look at what OpenAI said.

"SimpleQA is a simple, targeted evaluation for whether models 'know what they know,'" and responses "are easy to grade because questions are created such that there exists only a single, indisputable answer. Each answer in SimpleQA is graded as either 'correct,' 'incorrect,' or 'not attempted.' A model with ideal behavior would get as many questions correct as possible while not attempting the questions for which it is not confident it knows the correct answer."

If you think through why this approach works (or seems like it would work), it becomes clear why it might not be helpful. This approach suffers from a critically flawed assumption: if the model can accurately answer these questions, then it will likely be able to answer other questions with the same accuracy.

That might work with a calculator, but the nature of genAI hallucinations makes that assumption flawed. GenAI can easily get 10,000 questions correct and then wildly hallucinate for the next 50. Hallucinations tend to happen randomly, with zero predictability. That is why spot-checking, which is pretty much what SimpleQA is trying to do, won't work here.

To be more specific, it wouldn't be meaningful if genAI tools were to get all of the SimpleQA answers right. But the reverse isn't true. If the tested model gets all or most of the SimpleQA answers wrong, that does tell IT quite a bit. From the technology's perspective, the test seems unfair: if it gets an A, it will be ignored; if it gets an F, it will be believed. As the computer said in WarGames (a great movie to watch to see what a genAI system might do at the Pentagon), "The only winning move is not to play."

OpenAI pretty much concedes this in the report: "In this work, we will sidestep the open-endedness of language models by considering only short, fact-seeking questions with a single answer. This reduction of scope is important because it makes measuring factuality much more tractable, albeit at the cost of leaving open research questions such as whether improved behavior on short-form factuality generalizes to long-form factuality."

Later in the report, OpenAI elaborates: "A main limitation with SimpleQA is that while it is accurate, it only measures factuality under the constrained setting of short, fact-seeking queries with a single, verifiable answer. Whether the ability to provide factual short answers correlates with the ability to write lengthy responses filled with numerous facts remains an open research question."

Here are the specifics: SimpleQA consists of 4,326 short, fact-seeking questions.

Another component of the SimpleQA test is that the question-writer bears much of the responsibility, rather than the answer-writer. One part of this criterion is that the question must specify the scope of the answer. For example, instead of asking "Where did Barack and Michelle Obama meet?" (which could have multiple answers, such as "Chicago" or "the law firm Sidley & Austin"), questions had to specify "which city" or "which company." Another common example is that instead of asking simply "when," questions had to ask "what year" or "what date."

That nicely articulates why this won't likely be of use in the real world. Enterprise users are going to ask questions in an imprecise way. They have been sold on the promise of "just use natural language and the system will figure out what you really mean through context." This test sidesteps that issue entirely.

So, how can the results be meaningful or reliable?

The very nature of hallucinations belies any way to quantify them. If they were predictable, IT could simply program its tools to ignore every 75th response. But they're not. Until someone figures out how to truly eliminate hallucinations, the lack of reliable answers will stay with us.
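To make the three-way grading scheme concrete, here is a minimal Python sketch of a SimpleQA-style scorer. It is not OpenAI's code: the exact-string comparison and the empty-string convention for declining to answer are simplifying assumptions (the real benchmark grades answers more flexibly), but the correct/incorrect/not-attempted bookkeeping follows the scheme quoted above.

```python
# A minimal sketch (not OpenAI's code) of SimpleQA-style grading:
# each answer is "correct," "incorrect," or "not attempted."
# Exact string matching stands in for the real, more flexible grader.

NOT_ATTEMPTED = ""  # illustrative convention: an empty reply means the model declined

def grade(prediction: str, gold: str) -> str:
    if prediction.strip() == NOT_ATTEMPTED:
        return "not_attempted"
    return "correct" if prediction.strip().lower() == gold.strip().lower() else "incorrect"

def score(results: list[tuple[str, str]]) -> dict[str, float]:
    grades = [grade(pred, gold) for pred, gold in results]
    n = len(grades)
    correct = grades.count("correct")
    attempted = n - grades.count("not_attempted")
    return {
        "accuracy_overall": correct / n,
        # "Ideal behavior" per the report: answer only when confident,
        # so accuracy-given-attempted rewards calibrated abstention.
        "accuracy_given_attempted": correct / attempted if attempted else 0.0,
    }

print(score([("1969", "1969"), ("", "Paris"), ("Berlin", "Paris")]))
# -> accuracy_overall 0.33, accuracy_given_attempted 0.5
```

Note that a scorer like this says nothing about the column's core objection: a high score on 4,326 spot-checked questions puts no bound on how the model behaves on the questions nobody sampled.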
-
WWW.COMPUTERWORLD.COM
4 ways to use your phone as a webcam on Windows

Let's be honest: Many Windows PCs don't have great webcams. The webcam is often an afterthought where manufacturers cut costs when putting together laptops. And if you have a desktop PC, you might not even have a webcam at all unless you go out and buy one.

But you almost certainly have multiple high-quality cameras built right into your smartphone of choice, whether you use an Android phone or an iPhone. And with the right bit of relatively simple setup, your smartphone's high-end camera hardware can actually function as your PC's webcam, too.

It might be just the secret to getting better video quality in your online meetings and other video calls, no extra expenses required.

Want to stay on top of the latest Windows PC features? My free Windows Intelligence newsletter delivers all the best Windows tips straight to your inbox. Plus, you'll get free in-depth Windows Field Guides as a special welcome bonus!

Use an Android phone as a webcam on Windows 11 (wirelessly)

Up first: If you have an Android phone and a Windows 11 PC, Microsoft now offers a built-in way to turn your phone into a camera. It all happens wirelessly, so you don't even need a USB cable. However, this does require Windows 11; Microsoft didn't add the new feature to Windows 10.

To set this up, open the Settings app on Windows 11, select Bluetooth & devices, and click Mobile devices. Activate the "Allow this PC to access your mobile devices" option if it isn't already enabled. Then, click the "Manage devices" button.

From here, add your Android phone if it isn't already connected to your PC. This will involve installing the Link to Windows app on your phone and signing in with the same Microsoft account you use on your PC.

Once everything is set up, ensure the "Use as a connected camera" option is activated. If you have any trouble, try toggling the Enabled switch here to turn the connection off and back on again.

Now, your Android phone will appear as a webcam in apps. (Want to test this? Try opening the Camera app built into Windows.) When you select it as a webcam, you'll see a notification on your Android phone. Tap it to allow the connection. You can then use the app on your phone, or a floating panel on your PC, to change settings, including switching between your phone's front and back cameras.

Turn a Pixel phone into a Windows webcam via USB

Do you have a Pixel phone? Google has a very convenient built-in way for your phone to function as a webcam, no extra apps necessary. Here's what you'll need:

A Pixel 6, Pixel 7, Pixel 8, Pixel 9, or newer phone.
A Windows 10 or Windows 11 PC.
A USB cable to connect your phone to your PC.

To get started, plug your phone into your Windows PC with a USB cable, as if you were going to do an Android file transfer between your phone and the PC. You'll see an Android system notification about USB connection settings on your phone. Tap it, and then tap "Webcam" under "Use USB for."

Your Pixel phone will then appear as a webcam to your Windows PC. You can select it as you'd select any other webcam device in your video-conferencing application of choice.

Set up DroidCam for iPhone or Android

You can also turn to a third-party app that'll allow your phone to double as a completely wireless Windows webcam. There are a variety of paid applications for this, but DroidCam stands out from the pack.

Despite the name, this app works with both Android phones and iPhones. And it's completely free at standard resolution. (You can get a Pro upgrade for a one-time $15 payment to enable higher-resolution video streaming; there's also a watermark unless you pay the fee.) The price is still a bargain compared to competing applications that charge higher prices or even ongoing subscription fees. As a useful professional tool, it's very reasonable.

To set up DroidCam, you'll need to install the DroidCam app on your phone (get it from Google Play for Android or the App Store for iPhone). Then install the DroidCam client app on your Windows PC. Launch the client app from the Start menu after it's installed and follow the instructions to link the phone and PC apps.

Here's another option: Reincubate Camo has a lot of good reviews, but you're looking at a $50-per-year subscription for all the features rather than a one-time $15 payment.

Try a phone manufacturer-specific Android app

While Android phones from other manufacturers may not offer the convenient webcam-over-USB feature Google offers on its Pixel phones, they sometimes have their own solutions.

Samsung, for example, offers a camera sharing feature for Galaxy phones, but it only works with specific laptops also made by Samsung. According to Samsung's website, you can only use the Galaxy camera sharing feature if you have a Galaxy Book5 Pro 360 Windows laptop from Samsung.

If you have a Motorola phone, it might support Motorola's Smart Connect platform. If so, you can install Lenovo's Smart Connect app (Lenovo owns Motorola) and use it to set up your Motorola phone as a webcam from your PC.

Overall, you're generally better off going with the more broadly applicable solutions, such as the ones I mentioned. But if your phone has a built-in option provided by the manufacturer and it works with your PC hardware (which might be a tall order, as we see with the Galaxy phone example), it could be worth considering.

Who needs Apple's Continuity Camera?

Of course, if you're using an iPhone and a Mac, you can use Apple's Continuity Camera instead. But Windows users have a lot of great options here, and the integrated solutions work well, especially with Android devices.

Oh, and there's one more simple solution worth noting: If you want to use your phone as a webcam in a video meeting with a service like Zoom, Microsoft Teams, or Google Meet, you could also just join the meeting directly from your phone. Your phone would function as your webcam, and you could participate in the meeting from your phone without even involving your computer.

While you don't get the full-screen video-meeting experience in that scenario, it can work well for a quick call and is a great option to turn to in a pinch.

Want to make the most of your PC? My free Windows Intelligence newsletter delivers all the best Windows tips straight to your inbox. Plus, you'll get free copies of Paul Thurrott's Windows 11 and Windows 10 Field Guides (a $10 value) just for subscribing.
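One quick way to confirm that any of these setups actually worked is to check whether Windows now exposes the phone as a camera device. The short Python sketch below, assuming the opencv-python package is installed, probes the first few camera indices and prints the resolution of whatever answers; the phone typically shows up as an extra index alongside any built-in webcam.

```python
# pip install opencv-python
import cv2

# Probe the first few camera indices; a phone configured as a webcam
# usually appears as an additional index once it is connected.
for index in range(5):
    cap = cv2.VideoCapture(index, cv2.CAP_DSHOW)  # CAP_DSHOW: DirectShow backend on Windows
    if cap.isOpened():
        ok, frame = cap.read()
        if ok:
            h, w = frame.shape[:2]
            print(f"Camera {index}: {w}x{h}")
    cap.release()
```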
-
WWW.TECHNOLOGYREVIEW.COM
The AI lab waging a guerrilla war over exploitative AI

Ben Zhao remembers well the moment he officially jumped into the fight between artists and generative AI: when one artist asked for AI bananas.

A computer security researcher at the University of Chicago, Zhao had made a name for himself by building tools to protect images from facial recognition technology. It was this work that caught the attention of Kim Van Deun, a fantasy illustrator who invited him to a Zoom call in November 2022 hosted by the Concept Art Association, an advocacy organization for artists working in commercial media.

On the call, artists shared details of how they had been hurt by the generative AI boom, which was then brand new. At that moment, AI was suddenly everywhere. The tech community was buzzing over image-generating AI models, such as Midjourney, Stable Diffusion, and OpenAI's DALL-E 2, which could follow simple word prompts to depict fantasylands or whimsical chairs made of avocados.

But these artists saw this technological wonder as a new kind of theft. They felt the models were effectively stealing and replacing their work. Some had found that their art had been scraped off the internet and used to train the models, while others had discovered that their own names had become prompts, causing their work to be drowned out online by AI knockoffs.

Zhao remembers being shocked by what he heard. "People are literally telling you they're losing their livelihoods," he told me one afternoon this spring, sitting in his Chicago living room. "That's something that you just can't ignore."

So on the Zoom, he made a proposal: What if, hypothetically, it was possible to build a mechanism that would help mask their art to interfere with AI scraping?

"I would love a tool that if someone wrote my name and made a prompt, like, garbage came out," responded Karla Ortiz, a prominent digital artist. "Just, like, bananas or some weird stuff."

That was all the convincing Zhao needed; it was the moment he joined the cause.

Fast-forward to today, and millions of artists have deployed two tools born from that Zoom: Glaze and Nightshade, which were developed by Zhao and the University of Chicago's SAND Lab (an acronym for security, algorithms, networking, and data).

Arguably the most prominent weapons in an artist's arsenal against nonconsensual AI scraping, Glaze and Nightshade work in similar ways: by adding what the researchers call "barely perceptible" perturbations to an image's pixels so that machine-learning models cannot read them properly. Glaze, which has been downloaded more than 6 million times since it launched in March 2023, adds what's effectively a secret cloak to images that prevents AI algorithms from picking up on and copying an artist's style. Nightshade, which I wrote about when it was released almost exactly a year ago this fall, cranks up the offensive against AI companies by adding an invisible layer of poison to images, which can break AI models; it has been downloaded more than 1.6 million times.

Thanks to the tools, "I'm able to post my work online," Ortiz says, "and that's pretty huge." For artists like her, being seen online is crucial to getting more work. If they are uncomfortable about ending up in a massive for-profit AI model without compensation, the only option is to delete their work from the internet. That would mean career suicide.

"It's really dire for us," adds Ortiz, who has become one of the most vocal advocates for fellow artists and is part of a class action lawsuit against AI companies, including Stability AI, over copyright infringement.

But Zhao hopes that the tools will do more than empower individual artists. Glaze and Nightshade are part of what he sees as a battle to slowly tilt the balance of power from large corporations back to individual creators.

"It is just incredibly frustrating to see human life be valued so little," he says, with a disdain that I've come to see as pretty typical for him, particularly when he's talking about Big Tech. "And to see that repeated over and over, this prioritization of profit over humanity, it is just incredibly frustrating and maddening."

As the tools are adopted more widely, his lofty goal is being put to the test. Can Glaze and Nightshade make genuine security accessible for creators, or will they inadvertently lull artists into believing their work is safe, even as the tools themselves become targets for haters and hackers? While experts largely agree that the approach is effective and Nightshade could prove to be powerful poison, other researchers claim they've already poked holes in the protections offered by Glaze and that trusting these tools is risky.

But Neil Turkewitz, a copyright lawyer who used to work at the Recording Industry Association of America, offers a more sweeping view of the fight the SAND Lab has joined. It's not about a single AI company or a single individual, he says: "It's about defining the rules of the world we want to inhabit."

Poking the bear

The SAND Lab is tight-knit, encompassing a dozen or so researchers crammed into a corner of the University of Chicago's computer science building. That space has accumulated somewhat typical workplace detritus, a Meta Quest headset here, silly photos of dress-up from Halloween parties there. But the walls are also covered in original art pieces, including a framed painting by Ortiz.

Years before fighting alongside artists like Ortiz against "AI bros" (to use Zhao's words), Zhao and the lab's co-leader, Heather Zheng, who is also his wife, had built a record of combating harms posed by new tech.

(Photo: the SAND Lab group in Halloween dress-up. Front row: Ronik Bhaskar, Josephine Passananti, Anna YJ Ha, Zhuolin Yang, Ben Zhao, Heather Zheng. Back row: Cathy Yuanchen Li, Wenxin Ding, Stanley Wu, and Shawn Shan. Courtesy of SAND Lab.)

Though both earned spots on MIT Technology Review's 35 Innovators Under 35 list for other work nearly two decades ago, when they were at the University of California, Santa Barbara (Zheng in 2005 for cognitive radios and Zhao a year later for peer-to-peer networks), their primary research focus has become security and privacy.

The pair left Santa Barbara in 2017, after they were poached by the new co-director of the University of Chicago's Data Science Institute, Michael Franklin. All eight PhD students from their UC Santa Barbara lab decided to follow them to Chicago too. Since then, the group has developed a "bracelet of silence" that jams the microphones in AI voice assistants like the Amazon Echo. It has also created a tool called Fawkes ("privacy armor," as Zhao put it in a 2020 interview with the New York Times) that people can apply to their photos to protect them from facial recognition software.

They've also studied how hackers might steal sensitive information through stealth attacks on virtual-reality headsets, and how to distinguish human art from AI-generated images.

"Ben and Heather and their group are kind of unique because they're actually trying to build technology that hits right at some key questions about AI and how it is used," Franklin tells me. "They're doing it not just by asking those questions, but by actually building technology that forces those questions to the forefront."

It was Fawkes that intrigued Van Deun, the fantasy illustrator, two years ago; she hoped something similar might work as protection against generative AI, which is why she extended that fateful invite to the Concept Art Association's Zoom call.

That call started something of a mad rush in the weeks that followed. Though Zhao and Zheng collaborate on all the lab's projects, they each lead individual initiatives; Zhao took on what would become Glaze, with PhD student Shawn Shan (who was on this year's Innovators Under 35 list) spearheading the development of the program's algorithm.

In parallel to Shan's coding, PhD students Jenna Cryan and Emily Wenger sought to learn more about the views and needs of the artists themselves. They created a user survey that the team distributed to artists with the help of Ortiz. In replies from more than 1,200 artists, far more than the average number of responses to user studies in computer science, the team found that the vast majority of creators had read about art being used to train models, and 97% expected AI to decrease some artists' job security. A quarter said AI art had already affected their jobs.

Almost all artists also said they posted their work online, and more than half said they anticipated reducing or removing that online work, if they hadn't already, no matter the professional and financial consequences.

The first scrappy version of Glaze was developed in just a month, at which point Ortiz gave the team her entire catalogue of work to test the model on. At the most basic level, Glaze acts as a defensive shield. Its algorithm identifies features from the image that make up an artist's individual style and adds subtle changes to them. When an AI model is trained on images protected with Glaze, the model will not be able to reproduce styles similar to the original image.

A painting from Ortiz later became the first image publicly released with Glaze on it: a young woman, surrounded by flying eagles, holding up a wreath. Its title is Musa Victoriosa, "victorious muse." It's the one currently hanging on the SAND Lab's walls.

Despite many artists' initial enthusiasm, Zhao says, Glaze's launch caused significant backlash. Some artists were skeptical because they were worried this was a scam or yet another data-harvesting campaign.

The lab had to take several steps to build trust, such as offering the option to download the Glaze app so that it adds the protective layer offline, which meant no data was being transferred anywhere. (The images are then shielded when artists upload them.)

Soon after Glaze's launch, Shan also led the development of the second tool, Nightshade. Where Glaze is a defensive mechanism, Nightshade was designed to act as an offensive deterrent to nonconsensual training. It works by changing the pixels of images in ways that are not noticeable to the human eye but manipulate machine-learning models so they interpret the image as something different from what it actually shows.
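The SAND Lab has not published Glaze's algorithm, but the general family of techniques is standard adversarial machine learning. As a rough illustration only (a generic one-step FGSM perturbation, not Glaze's or Nightshade's far more sophisticated, style- and concept-targeted optimization), the sketch below shows how a pixel change of roughly plus or minus 2 per 8-bit channel can shift a vision model's reading of an image while staying nearly invisible:

```python
# pip install torch torchvision pillow
# A generic FGSM-style perturbation, shown only to illustrate the idea of
# "barely perceptible" pixel changes; it is NOT the Glaze/Nightshade method.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),  # pixel values in [0, 1]; ImageNet normalization omitted for brevity
])

# "artwork.png" is a placeholder path for any image you want to test.
x = preprocess(Image.open("artwork.png").convert("RGB")).unsqueeze(0)
x.requires_grad_(True)

logits = model(x)
label = logits.argmax(dim=1)  # the model's current reading of the image

# Take one gradient step that increases the loss for that reading.
F.cross_entropy(logits, label).backward()
eps = 2 / 255  # roughly a +/-2 change per 8-bit channel: hard to see
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

print("before:", label.item(), "after:", model(x_adv).argmax(dim=1).item())
```

The same basic lever, imperceptible pixel shifts chosen by gradient, underlies both cloaking and poisoning; the difference lies in what objective the perturbation is optimized against.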
If poisoned samples are scraped into AI training sets, these samples trick the AI models: dogs become cats, handbags become toasters. The researchers say only a relatively few examples are enough to permanently damage the way a generative AI model produces images.

Currently, both tools are available as free apps or can be applied through the project's website. The lab has also recently expanded its reach by offering integration with the new artist-supported social network Cara, which was born out of a backlash to exploitative AI training and forbids AI-produced content.

In dozens of conversations with Zhao and the lab's researchers, as well as a handful of their artist-collaborators, it's become clear that both groups now feel they are aligned in one mission. "I never expected to become friends with scientists in Chicago," says Eva Toorenent, a Dutch artist who worked closely with the team on Nightshade. "I'm just so happy to have met these people during this collective battle."

(Photo: Images online of Toorenent's Belladonna have been treated with the SAND Lab's Nightshade tool. Credit: Eva Toorenent.)

Her painting Belladonna, which is also another name for the nightshade plant, was the first image with Nightshade's poison on it.

"It's so symbolic," she says. "People taking our work without our consent, and then taking our work without consent can ruin their models. It's just poetic justice."

No perfect solution

The reception of the SAND Lab's work has been less harmonious across the AI community.

After Glaze was made available to the public, Zhao tells me, someone reported it to sites like VirusTotal, which tracks malware, so that it was flagged by antivirus programs. Several people also started claiming on social media that the tool had quickly been broken. Nightshade similarly got a fair share of criticism when it launched; as TechCrunch reported in January, some called it a "virus," and, as the story explains, another Reddit user who inadvertently went viral on X questioned Nightshade's legality, comparing it "to hacking a vulnerable computer system to disrupt its operation."

"We had no idea what we were up against," Zhao tells me. "Not knowing who or what the other side could be meant that every single new buzzing of the phone meant that maybe someone did break Glaze."

Both tools, though, have gone through rigorous academic peer review and have won recognition from the computer security community. Nightshade was accepted at the IEEE Symposium on Security and Privacy, and Glaze received a distinguished paper award and the 2023 Internet Defense Prize at the Usenix Security Symposium, a top conference in the field.

"In my experience working with poison, I think [Nightshade is] pretty effective," says Nathalie Baracaldo, who leads the AI security and privacy solutions team at IBM and has studied data poisoning. "I have not seen anything yet (and the word yet is important here) that breaks that type of defense that Ben is proposing." And the fact that the team has released the source code for Nightshade for others to probe, and it hasn't been broken, also suggests it's quite secure, she adds.

At the same time, at least one team of researchers does claim to have penetrated the protections of Glaze, or at least an old version of it.

As researchers from Google DeepMind and ETH Zurich detailed in a paper published in June, they found various ways Glaze (as well as similar but less popular protection tools, such as Mist and Anti-DreamBooth) could be circumvented using off-the-shelf techniques that anyone could access, such as image upscaling, meaning filling in pixels to increase the resolution of an image as it's enlarged. The researchers write that their work shows "the brittleness of existing protections" and warn that "artists may believe they are effective. But our experiments show they are not."

Florian Tramèr, an associate professor at ETH Zurich who was part of the study, acknowledges that it is "very hard to come up with a strong technical solution that ends up really making a difference here." Rather than any individual tool, he ultimately advocates for an almost certainly unrealistic ideal: stronger policies and laws to help create an environment in which people commit to buying only human-created art.

What happened here is common in security research, notes Baracaldo: a defense is proposed, an adversary breaks it, and, ideally, the defender learns from the adversary and makes the defense better. "It's important to have both ethical attackers and defenders working together to make our AI systems safer," she says, adding that ideally, all defenses should be publicly available for scrutiny, which would both allow for transparency and help avoid creating a false sense of security. (Zhao, though, tells me the researchers have no intention to release Glaze's source code.)

Still, even as all these researchers claim to support artists and their art, such tests hit a nerve for Zhao. In Discord chats that were later leaked, he claimed that one of the researchers from the ETH Zurich-Google DeepMind team "doesn't give a shit" about people. (That researcher did not respond to a request for comment, but in a blog post he said it was important to break defenses in order to know how to fix them. Zhao says his words were taken out of context.)

Zhao also emphasizes to me that the paper's authors mainly evaluated an earlier version of Glaze; he says its new update is more resistant to tampering. Messing with images that have current Glaze protections would harm the very style that is being copied, he says, making such an attack useless.

This back-and-forth reflects a significant tension in the computer security community and, more broadly, the often adversarial relationship between different groups in AI. Is it wrong to give people the feeling of security when the protections you've offered might break? Or is it better to have some level of protection, one that raises the threshold for an attacker to inflict harm, than nothing at all?

Yves-Alexandre de Montjoye, an associate professor of applied mathematics and computer science at Imperial College London, says there are plenty of examples where similar technical protections have failed to be bulletproof. For example, in 2023, de Montjoye and his team probed a digital mask for facial recognition algorithms, which was meant to protect the privacy of medical patients' facial images; they were able to break the protections by tweaking just one thing in the program's algorithm (which was open source).

Using such defenses is still sending a message, he says, and adding some friction to data profiling. Tools such as TrackMeNot, which protects users from data profiling, have "been presented as a way to protest," as a way to say "I do not consent."

"But at the same time," he argues, "we need to be very clear with artists that it is removable and might not protect against future algorithms."

While Zhao will admit that the researchers pointed out some of Glaze's weak spots, he unsurprisingly remains confident that Glaze and Nightshade are worth deploying, given that security tools are never perfect. Indeed, as Baracaldo points out, the Google DeepMind and ETH Zurich researchers showed how a highly motivated and sophisticated adversary will almost certainly always find a way in.

Yet "it is simplistic to think that if you have a real security problem in the wild and you're trying to design a protection tool, the answer should be it either works perfectly or don't deploy it," Zhao says, citing spam filters and firewalls as examples. Defense is a constant cat-and-mouse game. And he believes most artists are savvy enough to understand the risk.

Offering hope

The fight between creators and AI companies is fierce. The current paradigm in AI is to build bigger and bigger models, and there is, at least currently, no getting around the fact that they require vast data sets hoovered from the internet to train on. Tech companies argue that anything on the public internet is fair game, and that it is impossible to build advanced AI tools without copyrighted material; many artists argue that tech companies have stolen their intellectual property and violated copyright law, and that they need ways to keep their individual works out of the models, or at least receive proper credit and compensation for their use.

So far, the creatives aren't exactly winning. A number of companies have already replaced designers, copywriters, and illustrators with AI systems. In one high-profile case, Marvel Studios used AI-generated imagery instead of human-created art in the title sequence of its 2023 TV series Secret Invasion. In another, a radio station fired its human presenters and replaced them with AI. The technology has become a major bone of contention between unions and film, TV, and creative studios, most recently leading to a strike by video-game performers. There are numerous ongoing lawsuits by artists, writers, publishers, and record labels against AI companies. It will likely take years until there is a clear-cut legal resolution. But even a court ruling won't necessarily untangle the difficult ethical questions created by generative AI. Any future government regulation is not likely to either, if it ever materializes.

That's why Zhao and Zheng see Glaze and Nightshade as necessary interventions: tools to defend original work, attack those who would help themselves to it, and, at the very least, buy artists some time. Having a perfect solution is not really the point. The researchers need to offer something now; the breakneck speed at which the AI sector moves, Zheng says, means that companies are ignoring very real harms to humans. "This is probably the first time in our entire technology careers that we actually see this much conflict," she adds.

On a much grander scale, she and Zhao tell me they hope that Glaze and Nightshade will eventually have the power to overhaul how AI companies use art and how their products produce it. It is eye-wateringly expensive to train AI models, and it's extremely laborious for engineers to find and purge poisoned samples in a data set of billions of images. Theoretically, if there are enough Nightshaded images on the internet and tech companies see their models breaking as a result, it could push developers to the negotiating table to bargain over licensing and fair compensation.

That's, of course, still a big "if." MIT Technology Review reached out to several AI companies, such as Midjourney and Stability AI, which did not reply to requests for comment. A spokesperson for OpenAI, meanwhile, did not confirm any details about encountering data poison but said the company takes the safety of its products seriously and is continually improving its safety measures: "We are always working on how we can make our systems more robust against this type of abuse."

In the meantime, the SAND Lab is moving ahead and looking into funding from foundations and nonprofits to keep the project going. They also say there has been interest from major companies looking to protect their intellectual property (though they decline to say which), and Zhao and Zheng are exploring how the tools could be applied in other industries, such as gaming, videos, or music. Meanwhile, they plan to keep updating Glaze and Nightshade to be as robust as possible, working closely with the students in the Chicago lab, where, on another wall, hangs Toorenent's Belladonna. The painting has a heart-shaped note stuck to the bottom right corner: "Thank you! You have given hope to us artists."

This story has been updated with the latest download figures for Glaze and Nightshade.
-
WWW.TECHNOLOGYREVIEW.COM
Generative AI taught a robot dog to scramble around a new environment

Teaching robots to navigate new environments is tough. You can train them on physical, real-world data taken from recordings made by humans, but that's scarce and expensive to collect. Digital simulations are a rapid, scalable way to teach them to do new things, but the robots often fail when they're pulled out of virtual worlds and asked to do the same tasks in the real one.

Now there's a potentially better option: a new system that uses generative AI models in conjunction with a physics simulator to develop virtual training grounds that more accurately mirror the physical world. Robots trained using this method achieved a higher success rate in real-world tests than those trained using more traditional techniques.

Researchers used the system, called LucidSim, to train a robot dog in parkour, getting it to scramble over a box and climb stairs, even though it had never seen any real-world data. The approach demonstrates how helpful generative AI could be when it comes to teaching robots to do challenging tasks. It also raises the possibility that we could ultimately train them in entirely virtual worlds. The research was presented at the Conference on Robot Learning (CoRL) last week.

"We're in the middle of an industrial revolution for robotics," says Ge Yang, a postdoc at MIT's Computer Science and Artificial Intelligence Laboratory, who worked on the project. "This is our attempt at understanding the impact of these [generative AI] models outside of their original intended purposes, with the hope that it will lead us to the next generation of tools and models."

LucidSim uses a combination of generative AI models to create the visual training data. First the researchers generated thousands of prompts for ChatGPT, getting it to create descriptions of a range of environments that represent the conditions the robot would encounter in the real world, including different types of weather, times of day, and lighting conditions. These included "an ancient alley lined with tea houses and small, quaint shops, each displaying traditional ornaments and calligraphy" and "the sun illuminates a somewhat unkempt lawn dotted with dry patches."

These descriptions were fed into a system that maps 3D geometry and physics data onto AI-generated images, creating short videos mapping a trajectory for the robot to follow. The robot draws on this information to work out the height, width, and depth of the things it has to navigate: a box or a set of stairs, for example.

The researchers tested LucidSim by instructing a four-legged robot equipped with a webcam to complete several tasks, including locating a traffic cone or soccer ball, climbing over a box, and walking up and down stairs. The robot performed consistently better than when it ran a system trained on traditional simulations. In 20 trials to locate the cone, LucidSim had a 100% success rate, versus 70% for systems trained on standard simulations. Similarly, LucidSim reached the soccer ball in another 20 trials 85% of the time, versus just 35% for the other system. Finally, when the robot was running LucidSim, it successfully completed all 10 stair-climbing trials, compared with just 50% for the other system.

(Photo, from left: Phillip Isola, Ge Yang, and Alan Yu. Courtesy of MIT CSAIL.)

These results are likely to improve even further in the future if LucidSim draws directly from sophisticated generative video models rather than a "rigged-together" combination of language, image, and physics models, says Phillip Isola, an associate professor at MIT who worked on the research.

The researchers' approach to using generative AI is a novel one that will pave the way for more interesting new research, says Mahi Shafiullah, a PhD student at New York University who is using AI models to train robots. He did not work on the project.

"The more interesting direction I see personally is a mix of both real and realistic 'imagined' data that can help our current data-hungry methods scale quicker and better," he says.

The ability to train a robot from scratch purely on AI-generated situations and scenarios is a significant achievement, and it could extend beyond machines to more generalized AI agents, says Zafeirios Fountas, a senior research scientist at Huawei specializing in brain-inspired AI.

"The term robots here is used very generally; we're talking about some sort of AI that interacts with the real world," he says. "I can imagine this being used to control any sort of visual information, from robots and self-driving cars up to controlling your computer screen or smartphone."

In terms of next steps, the authors are interested in trying to train a humanoid robot using wholly synthetic data, which they acknowledge is an ambitious goal, as bipedal robots are typically less stable than their four-legged counterparts. They're also turning their attention to another new challenge: using LucidSim to train the kinds of robotic arms that work in factories and kitchens. The tasks they have to perform require a lot more dexterity and physical understanding than running around a landscape.

"To actually pick up a cup of coffee and pour it is a very hard, open problem," says Isola. "If we could take a simulation that's been augmented with generative AI to create a lot of diversity and train a very robust agent that can operate in a café, I think that would be very cool."
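The prompt-generation step described above is straightforward to sketch in code. The snippet below, assuming the openai Python package and an OPENAI_API_KEY environment variable, batch-generates environment descriptions in the spirit of the article; the model name, prompt wording, and obstacle list are illustrative guesses, not the researchers' actual pipeline.

```python
# pip install openai
# A minimal sketch of the environment-description step: NOT the LucidSim
# code, just an illustration of prompting an LLM for varied scene text.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical obstacles, echoing the tasks mentioned in the article.
obstacles = ["a set of stairs", "a cardboard box", "a traffic cone", "a soccer ball"]

descriptions = []
for obstacle in obstacles:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": (
                "In two sentences, describe a realistic outdoor scene containing "
                f"{obstacle}. Vary the weather, time of day, and lighting."
            ),
        }],
    )
    descriptions.append(resp.choices[0].message.content)

for d in descriptions:
    print(d)
```

In the actual system, descriptions like these feed an image generator whose outputs are aligned with simulator geometry; the sketch covers only the text stage.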