• B&H Promo Codes and Deals for November 2024
    www.wired.com
    Enjoy top deals on cameras, computers, and tech essentials at B&H Photo.
  • Thousands Report Netflix Livestream Crashes During Mike Tyson-Jake Paul Fight
    www.nytimes.com
    Users across the U.S. reported being unable to load the high-profile boxing match.
  • How Elon Musk Cuts Costs at Tesla, SpaceX and X
    www.nytimes.com
    Mr. Musk dug into his companies' budgets, preferring to cut too much rather than too little and to deal with the fallout later. Under Donald Trump, he is set to apply those tactics to the U.S. government.
  • The M4 MacBook Pro has a secret display upgrade Apple didn't tell us about
    www.macworld.com
    I recently mused about how the Mac lineup has been the most compelling in a while, and one of the reasons is that Apple introduced worthwhile features beyond the M4 chip upgrade. For the M4 MacBook Pro, there's one more feature that Apple hasn't mentioned at all.
    According to display expert Ross Young, Apple now uses a quantum dot film in the MacBook Pro's Liquid Retina XDR display. This QD film replaces a KSF phosphor film (also known as a narrow-band red phosphor) placed between the backlight and the display. The result is that the MacBook Pro's display produces a more consistent color gamut and better motion performance. There are also environmental benefits, as Young points out:
    "Big Apple display news, they have adopted quantum dots for the first time. The latest MacBook Pros (M4) use a quantum dot (QD) film rather than a red KSF phosphor film. In the past, Apple went with the KSF solution due to better efficiency and lack of cadmium (Cd), but the latest Cd-free QD films are very efficient, feature as good or better color gamut and better motion performance." (Ross Young, @DSCCRoss, via X, November 14, 2024)
    QD films are often used in high-end TVs and displays. So why the switch now? Apple hasn't talked about it, so this is speculation, but based on Young's post, it appears that Apple wasn't satisfied with QD films' previous performance. Why didn't Apple mention the change? Most likely because it's the kind of under-the-hood improvement that won't factor much into a buying decision. But it's a notable change that shows how Apple continues to improve its high-end devices without raising costs.
    It'll be interesting to see whether Apple adopts a QD film for the M4 MacBook Air, which uses an LED display instead of the MacBook Pro's mini-LED display. Reports say the new Air will be revealed in spring 2025 and is expected to be nothing more than a chip upgrade.
  • Every iPad is on sale for big savings this Black Friday
    www.macworld.com
    Black Friday is nearly here, and it's a fantastic time to pick up a deal on an iPad. While technically a single day (the day after Thanksgiving in the U.S., or November 29), Black Friday has gradually expanded to encompass Cyber Monday (December 2) as well as the weeks leading up to the big weekend.
    This means you need to keep your eyes peeled all month long for killer iPad savings. Or, better still, let us keep our eyes peeled for you. This article is where we round up the best-value deals on the various iPad models available in both the U.S. and the U.K., as well as provide automated price-comparison tables that pull in the lowest prices across all the major retailers. Bookmark this page and check back for the best Black Friday bargains.
    Below you can get an idea of what to expect heading into Black Friday, and we'll be updating this page as more discounts come in and as Apple announces its Black Friday shopping event. In the meantime, you should also check out our roundup of the best Apple deals, which we keep updated all year round, plus our always-updated advice on the best iPad deals.
    Black Friday: Apple's shopping event
    Every year Apple holds a shopping event from Black Friday (November 29) to Cyber Monday (December 2). However, since Apple rarely discounts its products, the event consists of gift card offers rather than actual savings. In 2023 you could get gift cards in the following amounts with iPads purchased from Apple's site (U.S./U.K.):
    - iPad Pro: $50/£80
    - iPad Air: $75/£60
    - iPad mini: $50/£40
    - iPad (10th gen.): $50/£40
    This year, we expect the iPad Air, iPad Pro, and 10th-gen iPad to be included in the sale. Since the iPad mini was only just refreshed in October, we don't think Apple will include it.
    Best iPad deals for Black Friday 2024
    As we head into Black Friday, Amazon, Best Buy, and other retailers are already discounting iPads, and we expect even greater savings during the Black Friday weekend.
    Here are the best prices we've found so far:
    10th-gen iPad deals
    The 10th-gen iPad is the only iPad that wasn't updated in 2024. However, Apple cut the price by $100 to $349, making it a far more attractive purchase. It's the only iPad that can't run Apple Intelligence, but it still gets our full recommendation for anyone on a budget.
    U.S.
    - Amazon, 10th-gen iPad (64GB): $309 ($40 off, MSRP $349)
    - Amazon, 10th-gen iPad (256GB): $449 ($50 off, MSRP $499)
    - Amazon, 9th-gen iPad (64GB): $199 (discounted price, was $329)
    U.K.
    - Amazon, 10th-gen iPad (64GB): £308.97 (£20 off, RRP £329)
    - Amazon, 10th-gen iPad (256GB): £449 (£30 off, RRP £479)
    iPad mini deals
    The iPad mini was updated in October with an A17 Pro processor and 8GB of RAM, so it's able to run Apple Intelligence, and its base storage doubled from 64GB to 128GB. Otherwise, the display and design are the same. If you don't care about Apple Intelligence, you can find a great price on an older A15 model.
    U.S.
    - Amazon, iPad mini (A17 Pro, 128GB): $479 ($20 off, MSRP $499)
    - Amazon, iPad mini (A17 Pro, 512GB): $700 ($99 off with coupon, MSRP $799)
    - Amazon, iPad mini (A15, 64GB): $350 ($149 off, MSRP $499, clearance)
    U.K.
    - KRCS, iPad mini (2024, A17 Pro, 128GB): £489.02 (£9.98 off, RRP £499)
    - KRCS, iPad mini (2024, A17 Pro, 256GB): £587.02 (£11.98 off, RRP £599)
    - KRCS, iPad mini (2024, A17 Pro, 512GB): £783.02 (£15.98 off, RRP £799)
    - John Lewis, 9th-gen iPad (64GB): £279 (RRP was £369, clearance)
    iPad Air deals
    The newest iPad Air now comes in two sizes, 11 inches and 13 inches, like the iPad Pro. It has an M2 processor and a repositioned front camera. Otherwise, it's the same as the M1 model, which we loved.
    U.S.
    - Amazon, 11-inch M2 iPad Air (128GB): $549 ($50 off, MSRP $599)
    - Amazon, 11-inch M2 iPad Air (512GB): $824 ($75 off, MSRP $899)
    - Amazon, 13-inch M2 iPad Air (512GB): $899 ($200 off, MSRP $1,099)
    - Amazon, 13-inch M2 iPad Air (128GB): $739 ($60 off with coupon, MSRP $799)
    U.K.
    - Amazon, 11-inch iPad Air (M2, 128GB): £559.97 (£39 off, RRP £599)
    - Amazon, 11-inch iPad Air (M2, 512GB): £843.97 (£55 off, RRP £899)
    - Amazon, 13-inch iPad Air (M2, 128GB): £749 (£50 off, RRP £799)
    - Amazon, 13-inch iPad Air (M2, 512GB): £1,029.97 (£70 off, RRP £1,099)
    iPad Pro deals
    The iPad Pro was updated this spring with a new, thinner design built around an M4 processor and OLED display. It's the absolute best tablet Apple (or anyone else) makes, but it's also very expensive and probably more than most people need.
    U.S.
    - B&H Photo, 11-inch M4 iPad Pro (512GB, 8GB RAM): $1,099 ($100 off, MSRP $1,199)
    - Amazon, 11-inch M4 iPad Pro (2TB, 16GB RAM): $1,799 ($200 off, MSRP $1,999)
    - Amazon, 13-inch M4 iPad Pro (2TB, 16GB RAM): $2,099 ($200 off, MSRP $2,299)
    U.K.
    - KRCS, 11-inch M4 iPad Pro (256GB): £979.02 (£19.98 off, RRP £999)
    - Amazon, 11-inch M4 iPad Pro (512GB): £1,149 (£50 off, RRP £1,199)
    - John Lewis, 13-inch M4 iPad Pro (256GB): £1,249 (£50 off, RRP £1,299)
    - Amazon, 13-inch M4 iPad Pro (512GB): £1,419.99 (£79 off, RRP £1,499)
    iPad accessory Black Friday deals
    U.S.
    - Amazon, Magic Keyboard (11-inch): $199 ($100 off, MSRP $299)
    - Amazon, Magic Keyboard (12.9-inch): $299 ($50 off, MSRP $349)
    - Amazon, Apple Pencil (1st gen): $79 ($20 off, MSRP $99)
    - Amazon, Apple Pencil (2nd gen): $89 ($40 off, MSRP $129)
    - Amazon, Apple Pencil (USB-C): $71 ($8 off, MSRP $79)
    U.K.
    - Amazon, Magic Keyboard (11-inch): £256 (£63 off, RRP £319)
    - Amazon, Apple Pencil (2nd gen): £89 (£50 off, RRP £139)
    For a more comprehensive guide to the available deals, as well as any deals that have popped up since we last updated this article, see our automated tables below for the lowest prices currently available on each iPad model.
    If you're not sure which model is right for you, read our in-depth iPad buying guide. Our automated price-comparison tables cover the following models (MSRP $/£):
    - Latest 10th-gen iPad deals ($449/£499)
    - Latest 9th-gen iPad deals ($329/£369)
    - Latest iPad mini deals ($499/£569)
    - Latest 11-inch M2 iPad Air deals ($599/£599)
    - Latest 13-inch M2 iPad Air deals ($799/£799)
    - Latest 11-inch M4 iPad Pro deals ($999/£999)
    - Latest 13-inch M4 iPad Pro deals ($1,299/£1,299)
    Black Friday 2024: Best deals for Apple products
    Check out these roundups for the best Apple deals:
    - Best Black Friday 2024 Apple accessory deals
    - Apple Black Friday 2024 sale
    - Best Black Friday 2024 Apple deals
    - Best Black Friday 2024 Mac deals
    - Best Black Friday 2024 MacBook deals
    - Best Black Friday 2024 AirPods deals
    - Best Black Friday 2024 Apple Watch deals
    - Best Black Friday 2024 iPad deals
    - Best Black Friday 2024 iPhone deals
    - Best Black Friday 2024 Mac monitor deals
    - Best Black Friday 2024 SSD and external hard drive deals
  • Google's Gemini app is now available on iPhones
    www.computerworld.com
    Google has entered a new and more intense phase of the AI wars, introducing its own Google Gemini app for iPhones; now you can use Apple Intelligence, ChatGPT, Microsoft Copilot, and Google Gemini on one device. Only one of those services tries to give you what you need without gathering too much information about you.
    What is Gemini?
    Like most Google services, Google Gemini seems free, in that you don't need to part with any cash to use it. Open it up and you'll find a chat window that also lets you get to a list of your previous chats. Speaking to Gemini is simple: text, voice, or even pointing the camera at something to get answers. In other words, the app integrates the same features you'll find on the Gemini website, but it's an app, so that makes it cool. Probably.
    There is one more thing: access to the more conversational Gemini Live bot, which works a little like ChatGPT in voice mode. You can even assign Gemini to your iPhone's Action button for fast access to the bot, which can also access and control any Google apps you're brave enough to install on your iPhone.
    All about Google
    And that's the thing, really. Like so much coming out of Silicon Valley now, Google Gemini is self-referencing. You use Google on your iPhone to speak to a Google AI and access Google services, which gives you a more Android-like experience if you happen to have migrated to iOS from Android. You can use Gemini on your iPhone to control YouTube Music, for example, and you'll get Google Maps if you ask for directions.
    You even get supplementary privacy agreements for all those apps, some of which deliver exactly what you expect from Google the ad-sales company, which is probably a little different from the privacy-first Apple experience you thought you were using. Gemini does put some protections in place, but your location data, feedback, and usage information can be reviewed by humans. Most people won't know this. Most people don't read privacy agreements before accepting them. They should, but the agreements are long, boring, and archaically written for a reason.
    AI tribalism
    If art reflects life and tech is indeed the new creativity, then the emergence of these equal-but-different digital tribes reflects the deeper tribalism that seems to be affecting every other part of life. Is that a good thing? Perhaps that depends on which state you live in.
    At the end of the day, Gemini on iPhone is your gateway to Google world, just as Windows takes you to Microsoft planet and Apple takes you to its own distorted reality (subject to the EU). There are other tech worlds too, but this isn't intended to be a definitive list of differing digital existences, especially now that these altered states have become both cloud- and service-based. It's a battle playing out on every platform and on every device.
    After all, if your primary computing experience becomes text- and voice-based, and the processors handling your requests are in the cloud, then it matters less which platform you use, as long as you get something you need. (It's only later we'll find that we get slightly less than what we need, with the difference between the two being the profit margin.)
    Apple's approach is to support those external services while building up its own AI suite with its own unique and, if you ask me, vitally necessary selling point around privacy. Others follow a different path, but it's hard to ignore that control of your computational experience is the root of all these ambitions.
    King of the hill
    With its early-mover advantage, OpenAI is not blind to the battle. Just this week it introduced support for different applications across Windows and Mac desktops. In a Nov. 14 message on X (for whoever remains genuinely active there), OpenAI announced: "ChatGPT for macOS can now work with apps on your desktop. In this early beta for Plus and Team users, you can let ChatGPT look at coding apps to provide better answers."
    That means it will try to help when you're working in applications such as VS Code, Xcode, and Terminal. While you work, you can speak with the bot, get screenshots, share files, and more. There is, of course, also a ChatGPT app for iPhones, and the first comparative reviews of using both Gemini and ChatGPT on an Apple device show pros and cons to both. Downstream vendors, most recently including Jamf, are relying on tools provided by the larger vendors to add useful tools of their own.
    Google and OpenAI are not alone. Just last month, Microsoft introduced Copilot Vision, which it describes as autonomous agents capable of handling tasks and business functions, so you don't need to. Apple, of course, remains high on its recent introduction of Apple Intelligence.
    Things will get better before becoming worse
    It's a clash of the tech titans. And like every clash of the tech titans so far this century, you or your business are the product the titans are fighting for. That raises other questions, such as how they will monetize your experience of AI. How high will energy prices climb as a direct result of the spiraling electricity demands of these services? At what point will AI eat itself, creating emails from spoken summaries that are then in turn summarized by AI? When it comes to security and privacy, is even sovereign AI truly secure enough for use in regulated enterprises? Just how secure are Apple's own AI servers? And once the dominant players in the New AI Empire finally emerge, how, just how, will they do what Big Tech always does and follow Doctorow's orders?
    You can follow me on social media! You'll find me on BlueSky, LinkedIn, Mastodon, and MeWe.
  • O2 unleashes AI grandma on scammers
    www.computerworld.com
    Research by British telecommunications provider O2 has found that seven in ten Britons (71 percent) would like to take revenge on scammers who have tried to trick them or their loved ones. At the same time, however, one in two people does not want to waste their own time doing so.
    AI grandma against telephone scammers
    O2 now wants to remedy this with an artificial intelligence called Daisy. As "head of fraud prevention," it's the job of this state-of-the-art AI granny to keep scammers away from real people for as long as possible with human-like chatter. To activate Daisy, O2 customers simply forward a suspicious call to the number 7726.
    Daisy combines different AI models that work together: the system first listens to the caller and converts their voice to text, then generates responses appropriate to the character's personality via a custom large language model, and finally feeds these into a custom text-to-speech model to produce a natural-sounding spoken reply. This happens in real time, allowing the tool to hold a human-like conversation with a caller.
    Although "human-like" is a strong understatement: Daisy was trained with the help of Jim Browning, one of the most famous scambaiters on YouTube. With the persona of a lonely and seemingly somewhat bewildered older lady, she tricks the fraudsters into believing they have found a perfect target, while in reality she beats them at their own game.
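    The turn-by-turn loop described here (speech-to-text, a persona-conditioned language model, then text-to-speech) can be sketched as below. O2 has not published Daisy's implementation, so every function in this sketch is a hypothetical stand-in; a real system would call streaming STT, LLM, and TTS models where the stubs are.

```python
PERSONA = ("You are Daisy, a lonely, slightly bewildered elderly lady. "
           "Ramble, mishear things, and never share real personal data.")

def speech_to_text(audio: bytes) -> str:
    # Stand-in for a real streaming transcription model.
    return audio.decode("utf-8")

def generate_reply(persona: str, history: list[str], caller_text: str) -> str:
    # Stand-in for the custom LLM conditioned on the persona and call history.
    return f"Oh dear, did you say '{caller_text}'? My hearing isn't what it was."

def text_to_speech(text: str) -> bytes:
    # Stand-in for a voice-synthesis model returning audio.
    return text.encode("utf-8")

def handle_turn(audio_in: bytes, history: list[str]) -> bytes:
    """One conversational turn: transcribe, respond in character, speak."""
    caller_text = speech_to_text(audio_in)
    reply = generate_reply(PERSONA, history, caller_text)
    history.extend([caller_text, reply])   # keep context for later turns
    return text_to_speech(reply)

history: list[str] = []
audio_out = handle_turn(b"Your account has been compromised, madam", history)
```

    Accumulating the history is what would let a real model stay consistent, and consistently time-wasting, across a long call.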
  • How this grassroots effort could make AI voices more diverse
    www.technologyreview.com
    We are on the cusp of a voice AI boom, with tech companies such as Apple and OpenAI rolling out the next generation of artificial-intelligence-powered assistants. But the default voices for these assistants are often white American (or British, if you're lucky) and most definitely speak English. They represent only a tiny proportion of the many dialects and accents in the English language, which spans many regions and cultures. And if you're one of the billions of people who don't speak English, bad luck: these tools don't sound nearly as good in other languages.
    This is because the data that has gone into training these models is limited. In AI research, most data used to train models is extracted from the English-language internet, which reflects Anglo-American culture. But there is a massive grassroots effort underway to change this status quo and bring more transparency and diversity to what AI sounds like: Mozilla's Common Voice initiative.
    The data set Common Voice has created over the past seven years is one of the most useful resources for people wanting to build voice AI. It has seen a massive spike in downloads, partly thanks to the current AI boom; it recently hit the 5 million mark, up from 38,500 in 2020. Creating this data set has not been easy, mainly because the data collection relies on an army of volunteers. Their numbers have also jumped, from just under 500,000 in 2020 to over 900,000 in 2024. But by giving its data away, some members of this community argue, Mozilla is encouraging volunteers to effectively do free labor for Big Tech.
    Since 2017, volunteers for the Common Voice project have collected a total of 31,000 hours of voice data in around 180 languages as diverse as Russian, Catalan, and Marathi. If you've used a service that uses audio AI, it's likely been trained at least partly on Common Voice.
    Mozilla's cause is a noble one. As AI is integrated increasingly into our lives and the ways we communicate, it becomes more important that the tools we interact with sound like us. The technology could break down communication barriers and help convey information in a compelling way to, for example, people who can't read. But instead, an intense focus on English risks entrenching a new colonial world order and wiping out languages entirely.
    "It would be such an own goal if, rather than finally creating truly multimodal, multilingual, high-performance translation models and making a more multilingual world, we actually ended up forcing everybody to operate in, like, English or French," says EM Lewis-Jong, a director for Common Voice.
    Common Voice is open source, which means anyone can see what has gone into the data set, and users can do whatever they want with it for free. This kind of transparency is unusual in AI data governance. Most large audio data sets simply aren't publicly available, and many consist of data scraped from sites like YouTube, according to research conducted by a team from the University of Washington, Carnegie Mellon University, and Northwestern University.
    The vast majority of language data is collected by volunteers such as Bülent Özden, a researcher from Turkey. Since 2020, he has been not only donating his voice but also raising awareness around the project to get more people to donate. He recently spent two full-time months correcting data and checking for typos in Turkish. For him, improving AI models is not the only motivation to do this work.
    "I'm doing it to preserve cultures, especially low-resource [languages]," Özden says. He tells me he has recently started collecting samples of Turkey's smaller languages, such as Circassian and Zaza.
    However, as I dug into the data set, I noticed that the coverage of languages and accents is very uneven. There are only 22 hours of Finnish voices from 231 people. In comparison, the data set contains 3,554 hours of English from 94,665 speakers. Some languages, such as Korean and Punjabi, are even less well represented: even though they have tens of millions of speakers, they account for only a couple of hours of recorded data.
    This imbalance has emerged because data collection efforts are started from the bottom up by language communities themselves, says Lewis-Jong. "We're trying to give communities what they need to create their own AI training data sets. We have a particular focus on doing this for language communities where there isn't any data, or where maybe larger tech organizations might not be that interested in creating those data sets," Lewis-Jong says. They hope that with the help of volunteers and various bits of grant funding, the Common Voice data set will have close to 200 languages by the end of the year.
    Common Voice's permissive license means that many companies rely on it. One example is the Swedish startup Mabel AI, which builds translation tools for health-care providers. One of the first languages the company used was Ukrainian; it built a translation tool to help Ukrainian refugees interact with Swedish social services, says Karolina Sjöberg, Mabel AI's founder and CEO. The team has since expanded to other languages, such as Arabic and Russian.
    The problem with a lot of other audio data is that it consists of people reading from books or texts. The result is very different from how people really speak, especially when they are distressed or in pain, Sjöberg says. Because anyone can submit sentences to Common Voice for others to read aloud, Mozilla's data set also includes sentences that are more colloquial and feel more natural, she says.
    Not that it is perfectly representative.
    The Mabel AI team soon found out that most voice data in the languages it needed was donated by younger men, which is fairly typical for the data set. "The refugees that we intended to use the app with were really anything but younger men," Sjöberg says. "So that meant that the voice data that we needed did not quite match the voice data that we had." The team started collecting its own voice data from Ukrainian women, as well as from elderly people.
    Unlike other data sets, Common Voice asks participants to share their gender and details about their accent. Making sure different genders are represented is important for fighting bias in AI models, says Rebecca Ryakitimbo, a Common Voice fellow who created the project's gender action plan. More diversity leads not only to better representation but also to better models: systems trained on narrow and homogeneous data tend to spew stereotyped and harmful results. "We don't want a case where we have a chatbot that is named after a woman but does not give the same response to a woman as it would a man," she says.
    Ryakitimbo has collected voice data in Kiswahili in Tanzania, Kenya, and the Democratic Republic of Congo. She tells me she wanted to collect voices from a socioeconomically diverse set of Kiswahili speakers, and has reached out to women young and old living in rural areas, who might not always be literate or even have access to devices.
    This kind of data collection is challenging. The importance of collecting AI voice data can feel abstract to many people, especially if they aren't familiar with the technologies. Ryakitimbo and volunteers would approach women in settings where they felt safe to begin with, such as presentations on menstrual hygiene, and explain how the technology could, for example, help disseminate information about menstruation. For women who did not know how to read, the team read out sentences for them to repeat for the recording.
    The Common Voice project is bolstered by the belief that languages form a really important part of identity. "We think it's not just about language, but about transmitting culture and heritage and treasuring people's particular cultural context," says Lewis-Jong. "There are all kinds of idioms and cultural catchphrases that just don't translate," they add.
    Common Voice is the only audio data set in which English doesn't dominate, says Willie Agnew, a researcher at Carnegie Mellon University who has studied audio data sets. "I'm very impressed with how well they've done that and how well they've made this data set that is actually pretty diverse," Agnew says. "It feels like they're way far ahead of almost all the other projects we looked at."
    I spent some time verifying the recordings of other Finnish speakers on the Common Voice platform. As their voices echoed in my study, I felt surprisingly touched. We had all gathered around the same cause: making AI data more inclusive, and making sure our culture and language were properly represented in the next generation of AI tools.
    But I had some big questions about what would happen to my voice if I donated it. Once it was in the data set, I would have no control over how it might be used afterwards. The tech sector isn't exactly known for giving people proper credit, and the data is available for anyone's use.
    "As much as we want it to benefit the local communities, there's a possibility that Big Tech could also make use of the same data and build something that then comes out as a commercial product," says Ryakitimbo. Though Mozilla does not share who has downloaded Common Voice, Lewis-Jong tells me Meta and Nvidia have said that they have used it.
    Open access to this hard-won and rare language data is not something all minority groups want, says Harry H. Jiang, a researcher at Carnegie Mellon University who was part of the team conducting the audit research. For example, Indigenous groups have raised concerns.
    Extractivism is something Mozilla has been thinking about a lot over the past 18 months, says Lewis-Jong. Later this year the project will work with communities to pilot alternative licenses, including the Nwulite Obodo Open Data License, which was created by researchers at the University of Pretoria for sharing African data sets more equitably. For example, people who want to download the data might be asked to write a request with details on how they plan to use it, and they might be allowed to license it only for certain products or for a limited time. Users might also be asked to contribute to community projects that support poverty reduction, says Lewis-Jong.
    Lewis-Jong says the pilot is a learning exercise to explore whether people will want data with alternative licenses, and whether such licenses are sustainable for the communities managing them. The hope is that it could lead to something resembling "open source 2.0."
    In the end, I decided to donate my voice. I received a list of phrases to say, sat in front of my computer, and hit "Record." One day, I hope, my effort will help a company or researcher build voice AI that sounds less generic and more like me.
    This story has been updated.
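    To put the imbalance described above in perspective, here is a quick back-of-the-envelope calculation using only the figures quoted in this story (22 hours of Finnish from 231 speakers, 3,554 hours of English from 94,665 speakers):

```python
# Figures quoted in the story.
finnish_hours, finnish_speakers = 22, 231
english_hours, english_speakers = 3_554, 94_665

# The data set holds roughly 160x more English audio than Finnish overall...
total_ratio = english_hours / finnish_hours

# ...yet the average Finnish volunteer has donated more than twice as many
# minutes of speech as the average English-language contributor.
fi_minutes_each = finnish_hours * 60 / finnish_speakers
en_minutes_each = english_hours * 60 / english_speakers

print(f"{total_ratio:.0f}x more English audio in total")
print(f"per speaker: Finnish {fi_minutes_each:.1f} min, "
      f"English {en_minutes_each:.1f} min")
```

    In other words, English dominates through the sheer number of contributors, not because English volunteers are unusually generous.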
  • Google DeepMind has a new way to look inside an AI's mind
    www.technologyreview.com
    AI has led to breakthroughs in drug discovery and robotics and is in the process of entirely revolutionizing how we interact with machines and the web. The only problem is we dont know exactly how it works, or why it works so well. We have a fair idea, but the details are too complex to unpick. Thats a problem: It could lead us to deploy an AI system in a highly sensitive field like medicine without understanding that it could have critical flaws embedded in its workings.A team at Google DeepMind that studies something called mechanistic interpretability has been working on new ways to let us peer under the hood. At the end of July, it released Gemma Scope, a tool to help researchers understand what is happening when AI is generating an output. The hope is that if we have a better understanding of what is happening inside an AI model, well be able to control its outputs more effectively, leading to better AI systems in the future.I want to be able to look inside a model and see if its being deceptive, says Neel Nanda, who runs the mechanistic interpretability team at Google DeepMind. It seems like being able to read a models mind should help.Mechanistic interpretability, also known as mech interp, is a new research field that aims to understand how neural networks actually work. At the moment, very basically, we put inputs into a model in the form of a lot of data, and then we get a bunch of model weights at the end of training. These are the parameters that determine how a model makes decisions. We have some idea of whats happening between the inputs and the model weights: Essentially, the AI is finding patterns in the data and making conclusions from those patterns, but these patterns can be incredibly complex and often very hard for humans to interpret.Its like a teacher reviewing the answers to a complex math problem on a test. The studentthe AI, in this casewrote down the correct answer, but the work looks like a bunch of squiggly lines. 
    This example assumes the student is always getting the correct answer, but that's not always true; the AI student may have latched onto an irrelevant pattern that it assumes is valid. For example, some current AI systems will tell you that 9.11 is bigger than 9.8. Different methods developed in the field of mechanistic interpretability are beginning to shed a little light on what may be happening, essentially making sense of the squiggly lines.
    "A key goal of mechanistic interpretability is trying to reverse-engineer the algorithms inside these systems," says Nanda. "We give the model a prompt, like 'Write a poem,' and then it writes some rhyming lines. What is the algorithm by which it did this? We'd love to understand it."
    To find features (categories of data that represent a larger concept) in its AI model Gemma, DeepMind ran a tool known as a sparse autoencoder on each of the model's layers. You can think of a sparse autoencoder as a microscope that zooms in on those layers and lets you look at their details. For example, if you prompt Gemma about a chihuahua, it will trigger the "dogs" feature, lighting up what the model knows about dogs. The autoencoder is called "sparse" because it limits the number of neurons used, pushing for a more efficient and generalized representation of the data.
    The tricky part of sparse autoencoders is deciding how granular you want to get. Think again about the microscope. You can magnify something to an extreme degree, but that may make what you're looking at impossible for a human to interpret. Zoom too far out, though, and you may limit what interesting things you can see and discover.
    DeepMind's solution was to run sparse autoencoders of different sizes, varying the number of features each autoencoder is asked to find. The goal was not for DeepMind's researchers to thoroughly analyze the results on their own.
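The encode-then-decode loop described above can be sketched in a few lines. This is a hypothetical toy with random weights and made-up dimensions, not DeepMind's Gemma Scope code; a real sparse autoencoder learns its weights from a model's actual layer activations.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16      # width of the model layer being probed (made-up size)
d_features = 64   # overcomplete feature dictionary, wider than the layer

# A trained SAE learns these from activations; random stand-ins here.
W_enc = rng.normal(0, 0.1, (d_model, d_features))
W_dec = rng.normal(0, 0.1, (d_features, d_model))
b_enc = np.zeros(d_features)

def encode(activation):
    # ReLU zeroes out negative pre-activations; during training an L1
    # penalty pushes most of the surviving features toward zero too.
    return np.maximum(activation @ W_enc + b_enc, 0.0)

def decode(features):
    # Map the sparse feature vector back into the layer's activation space.
    return features @ W_dec

activation = rng.normal(size=d_model)   # one activation vector from a layer
features = encode(activation)
reconstruction = decode(features)

# Training objective: reconstruct the activation faithfully while keeping
# the feature vector sparse (the 0.01 L1 coefficient is arbitrary here).
loss = np.mean((reconstruction - activation) ** 2) + 0.01 * np.abs(features).sum()
```

In Gemma Scope, an autoencoder along these lines is trained per layer, and each learned feature direction ideally corresponds to a human-interpretable concept such as "dogs."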
    Gemma and the autoencoders are open-source, so this project was aimed more at spurring interested researchers to look at what the sparse autoencoders found and, hopefully, to gain new insights into the model's internal logic. Since DeepMind ran autoencoders on each layer of its model, a researcher could map the progression from input to output to a degree we haven't seen before.
    "This is really exciting for interpretability researchers," says Josh Batson, a researcher at Anthropic. "If you have this model that you've open-sourced for people to study, it means that a bunch of interpretability research can now be done on the back of those sparse autoencoders. It lowers the barrier to entry for people learning from these methods."
    Neuronpedia, a platform for mechanistic interpretability, partnered with DeepMind in July to build a demo of Gemma Scope that you can play around with right now. In the demo, you can test out different prompts and see how the model breaks up each prompt and which activations it lights up. You can also mess around with the model: for example, if you turn the feature about dogs way up and then ask the model a question about US presidents, Gemma will find some way to weave in random babble about dogs, or the model may just start barking at you.
    One interesting thing about sparse autoencoders is that they are unsupervised, meaning they find features on their own. That leads to surprising discoveries about how the models break down human concepts. "My personal favorite feature is the 'cringe' feature," says Joseph Bloom, science lead at Neuronpedia. "It seems to appear in negative criticism of text and movies. It's just a great example of tracking things that are so human on some level."
    You can search for concepts on Neuronpedia, and it will highlight which features are being activated on specific tokens, or words, and how strongly each one is activated.
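Turning a feature "way up," as in the dog demo above, can be sketched as adding a scaled copy of that feature's decoder direction to the layer's activation. This is again a toy with random, hypothetical weights, not the actual Neuronpedia or Gemma Scope steering code.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, d_features = 16, 64    # made-up sizes

# Each decoder row is one learned feature direction (random stand-ins here).
W_dec = rng.normal(0, 0.1, (d_features, d_model))

def steer(activation, feature_idx, strength):
    # Push the activation along one feature's direction (e.g. "dogs")
    # before the layer's output flows on to the rest of the model.
    return activation + strength * W_dec[feature_idx]

activation = rng.normal(size=d_model)
steered = steer(activation, feature_idx=7, strength=10.0)
```

With a large enough strength, the steered direction dominates whatever the layer was originally representing, which is why an over-steered model "may just start barking at you."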
    "If you read the text and you see what's highlighted in green, that's when the model thinks the cringe concept is most relevant. The most active example for cringe is somebody preaching at someone else," says Bloom.
    Some features are proving easier to track than others. "One of the most important features that you would want to find for a model is deception," says Johnny Lin, founder of Neuronpedia. "It's not super easy to find: 'Oh, there's the feature that fires when it's lying to us.' From what I've seen, it hasn't been the case that we can find deception and ban it."
    DeepMind's research is similar to what another AI company, Anthropic, did back in May with Golden Gate Claude. It used sparse autoencoders to find the parts of its model, Claude, that lit up when discussing the Golden Gate Bridge in San Francisco. It then amplified the activations related to the bridge to the point where Claude identified not as Claude, an AI model, but as the physical Golden Gate Bridge, and would respond to prompts as the bridge.
    Although it may seem quirky, mechanistic interpretability research could prove incredibly useful. "As a tool for understanding how the model generalizes and what level of abstraction it's working at, these features are really helpful," says Batson.
    For example, a team led by Samuel Marks, now at Anthropic, used sparse autoencoders to find features showing that a particular model was associating certain professions with a specific gender. They then turned off these gender features to reduce bias in the model. The experiment was done on a very small model, so it's unclear whether the approach will carry over to a much larger one.
    Mechanistic interpretability research can also give us insight into why AI makes errors. In the case of the assertion that 9.11 is larger than 9.8, researchers from Transluce saw that the question was triggering the parts of an AI model related to Bible verses and September 11.
    The researchers concluded that the AI could be interpreting the numbers as dates, asserting that the later date, 9/11, is greater than 9/8. In many books, including religious texts, section 9.11 also comes after section 9.8, which may be why the AI treats it as greater. Once they knew why the AI made this error, the researchers tuned down the AI's activations on Bible verses and September 11, which led the model to give the correct answer when prompted again on whether 9.11 is larger than 9.8.
    There are other potential applications as well. Currently, a system-level prompt is built into LLMs to deal with situations like users asking how to build a bomb. When you ask ChatGPT a question, the model is first secretly prompted by OpenAI to refrain from telling you how to make bombs or do other nefarious things. But it's easy for users to jailbreak AI models with clever prompts, bypassing any restrictions.
    If the creators of the models are able to see where the bomb-building knowledge sits inside an AI, they could in theory turn off those nodes permanently. Then even the most cleverly written prompt wouldn't elicit an answer about how to build a bomb, because the AI would literally have no information about the subject left in its system.
    This type of granularity and precise control is easy to imagine but extremely hard to achieve with the current state of mechanistic interpretability.
    "A limitation is that the steering [influencing a model by adjusting its parameters] is just not working that well, and so when you steer to reduce violence in a model, it ends up completely lobotomizing its knowledge in martial arts. There's a lot of refinement to be done in steering," says Lin. The knowledge of bomb making, for example, isn't just a simple on-and-off switch in an AI model. It is most likely woven into multiple parts of the model, and turning it off would probably involve hampering the AI's knowledge of chemistry.
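The "turn off those nodes" idea can be sketched, under the same toy assumptions as before, as encoding an activation into features, clamping the unwanted feature to zero, and decoding back. As the article notes, in a real model the targeted knowledge is rarely this cleanly isolated, so the sketch is an idealization.

```python
import numpy as np

rng = np.random.default_rng(2)
d_model, d_features = 16, 64    # made-up sizes

# Hypothetical trained SAE weights (random stand-ins for illustration).
W_enc = rng.normal(0, 0.1, (d_model, d_features))
W_dec = rng.normal(0, 0.1, (d_features, d_model))

def ablate(activation, feature_idx):
    # Encode into sparse features, zero out the unwanted one, decode back.
    features = np.maximum(activation @ W_enc, 0.0)
    features[feature_idx] = 0.0
    return features @ W_dec

activation = rng.normal(size=d_model)
patched = ablate(activation, feature_idx=3)
```

The steering problem Lin describes shows up here: if the ablated feature also carried nearby, legitimate knowledge (chemistry, martial arts), the patched activation loses that too.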
    Any tinkering may have benefits but also significant trade-offs.
    That said, if we are able to dig deeper and peer more clearly into the "mind" of AI, DeepMind and others are hopeful that mechanistic interpretability could represent a plausible path to alignment: the process of making sure AI is actually doing what we want it to do.
  • Save up to $250 on every M4 Mac mini, plus get M2 early Black Friday deals from $449
    appleinsider.com
    Apple's new Mac mini is eligible for promo code savings, with every M4 and M4 Pro spec up to $250 off. Plus, grab closeout deals on M2 models, with prices starting at $449. Coupon savings are in effect on every M4 Mac mini. The early Black Friday deals on Apple's M4 Mac mini come via promo code APINSIDER at Apple Authorized Reseller Adorama. In business since 1974, Adorama has issued exclusive discounts of up to $250 off every model, including M4 Pro and 10 Gigabit Ethernet configurations.