0 Comments
0 Shares
26 Views
Directory
Directory
-
Please log in to like, share and comment!
-
WWW.MACWORLD.COMHow to change your Apple Account primary email addressMacworldWhen you first register for an Apple Account (formerly Apple ID), you can use either a new email address provided by Apple (with an @icloud.com address) or one you already have. You can have multiple email addresses associated with your Apple ID, but only one primary addressthe core address youll use to log in and to which important Apple Account-related emails would be delivered.It used to be a pain to change this, involving a multi-step process that you can only do on the web. With iOS 18.1, Apple has made it a lot easier to change your primary Apple ID address, and you can do it right from your iPhone or iPad. (The web still works, too.) Heres how you do it.Change your Apple Account primary email on your iPhone or iPadFirst, ensure your devices are running iOS 18.1, iPadOS 18.1, or a later version.Open Settings, then tap your name at the top to access your Apple account settingsTap Sign-In & SecurityTap the email address marked Primary emailTurn off the Primary Email switch to remove this as your main email. Youll be prompted to select a new primary email address, either from another email associated with your Apple Account or by adding a new oneFollow the onscreen steps to enter and verify your new primary email addressIf you have a third-party email address (not an @icloud.com or @me.com address) associated with your Apple Account and you want to remove it, just select it and tap Remove from Account. If its your Primary address youll be prompted to choose or create a new primary.For Apple-supplied email address (@icloud.com or @me.com) youll instead see an option to Change Email Address, which lets you pick a different email address but not remove it from your account.Change your Apple Account primary email on the webIf you dont have an iPhone or iPad (you only use a Mac, or maybe an Apple TV or something) you can change your primary email on the web.Go toaccount.apple.comand sign inSelect Sign-In and Security on the left menu, and then Email & Phone NumbersClickthe Remove button(-) next to your primary emailFollow the onscreen steps to enter and verify your new primary email address0 Comments 0 Shares 28 Views
-
WWW.COMPUTERWORLD.COMNew Windows 11 tool can fix devices that wont boot remotelyMicrosoft is working on a new Windows feature, Quick Machine Recovery, that will allow IT administrators to use Windows Update with targeted fixes to remotely fix systems that cant boot,according to Bleeping Computer.The new feature is part of the Windows Resiliency Initiative Microsofts efforts to prevent a repeat of the outage that occurred in July 2024, when a buggy Crowdstrike update left hundreds of thousands of Windows computers unable to start, affecting hospitals, emergency services and airlines worldwide.Microsoft plans to roll out the Quick Machine Recovery feature to the Windows 11 Insider Program in early 2025.0 Comments 0 Shares 26 Views
-
WWW.COMPUTERWORLD.COMWill new Apple Pay oversight make Apple Bank a good idea?As regulation threatens totear Google apartand fundamentally both damage both Android and Apple, yet another regulatory noose is tightening around Cupertino, as its Apple Pay service will in future be regulated like a bank.All this comes as company lawyers attempt to get theinsanely flawedUS Department of Justice anti-trust case against Applequashed. and it climbs in on top ofrecent threats of further fines and challenges in Europe. Youd be forgiven if some of the leaders at Apple might feel a little as if they have been born in interesting times.Apple Pay faces tougher regulationThe latest twist of the rope comes from theUS Consumer Financial Protection Bureau(CFPB), which is about to introduce a new rule that puts Apple Pay and other digital wallet services under the same federal supervision as banks. Thats going to mean the CFPB can proactively examine Apple and other large companies in this space to ensure they are complying with consumer protection laws concerning privacy and surveillance, error and fraud, and maintaining service continuity in order to protect users against debanking.The agency in 2022 warned some Big Tech firms providing such services about their obligations under consumer protection laws when using behavioral targeting for financial products.Announcing the regulation on X,CFPB Director Rohit Chopra explainedhis organization is also concerned about how these apps can fuel surge pricing that jack up costs using your purchase history and personal data.You can read the new rules governing these companieshere(PDF). But what is interesting is that elements of them that might have impacted crypto transactions appear to have been mitigated or removed.Proactive, not reactive, oversightMost of these matters were already regulated; what really changes is how rules around them are enforced. You see, while the previous regulation meant CFPB could onlyreactto consumer complaints as they arose, it can now proactively investigate compliance. Thats the same kind of oversight banks and credit unions already face and means Apple and other payment providers covered by the rules will face deeper and, presumably, more intrusive oversight.The new rules will only affect digital wallet providers whose tech is handling 50 million or more transactions per year. Apples system is now easily the most widely used digital wallet in America, so it will most certainly face this oversight. The company also participated in the consultation process that preceded the new rules introduction. Other providers likely swooped up under the law will include Cash App, PayPal, Venmo, and Google Pay.To some degree, the rules make sense, given that digital wallets are used to handle real money and consumer protection is vital. But whats really interesting is the extent to which the new determination proves just how rapidly digital wallets have replaced real wallets across the last decade.The rise and rise of digital paymentsThats certainly what the CFPB thinks. Digital payments have gone from novelty to necessity and our oversight must reflect this reality, said Chopra. The rule will help to protect consumer privacy, guard against fraud, and prevent illegal account closures.If you think back, itwasnt terribly long agowhen the notion that Apple wanted to turn your iPhone into a wallet seemed impossibly extreme. That is no longer the case. Two years ago, researchersclaimedApple Pay had surpassed Mastercard in the dollar value of transactions made annually, making Apple Pay the worlds second most popular payment system, just behind Visa. Googles G Play system then stood in fifth place.The regulator explains that payment apps are now a cornerstone of daily commerce, with people using them daily as if they were cash. What began as a convenient alternative to cash has evolved into a critical financial tool, processing over a trillion dollars in payments between consumers and their friends, families, and businesses, the CFPB said.What next?I think its pretty clear that Apple has learned a lot about this business since the introduction of Apple Pay. Not only has it been in, and then exited, the lucrative Buy Now Pay Later market withApple Pay Later, but it has also experienced the slings and arrows of outrageous fortune with its wildly popular credit card operation, Apple Card, which has ended in a tumultuous relationship withGoldman Sachs.During all these adventures, the company will have learned a great deal about the sector and now that it is being regulated as if it were a bank, I wouldnt be terribly surprised if it decided to become one.After all, if its getting regulated to the same extent as banks, why not get into more of the same business sectors banks now serve? I cant help but imagine that Apple already has a weighty file of research documents in one of its Cupertino filing cabinets exploring how and where it might profitably extend Apple Pay into more traditional banking sectors. The new CFPB oversight regime might well accelerate any such plans.You can follow me on social media! Join me onBlueSky, LinkedIn,Mastodon, andMeWe.0 Comments 0 Shares 29 Views
-
WWW.TECHNOLOGYREVIEW.COMHow OpenAI stress-tests its large language modelsOpenAI is once again lifting the lid (just a crack) on its safety-testing processes. Last month the company shared the results of an investigation that looked at how often ChatGPT produced a harmful gender or racial stereotype based on a users name. Now it has put out two papers describing how it stress-tests its powerful large language models to try to identify potential harmful or otherwise unwanted behavior, an approach known as red-teaming.Large language models are now being used by millions of people for many different things. But as OpenAI itself points out, these models are known to produce racist, misogynistic and hateful content; reveal private information; amplify biases and stereotypes; and make stuff up. The company wants to share what it is doing to minimize such behaviors.The first paper describes how OpenAI directs an extensive network of human testers outside the company to vet the behavior of its models before they are released. The second paper presents a new way to automate parts of the testing process, using a large language model like GPT-4 to come up with novel ways to bypass its own guardrails.The aim is to combine these two approaches, with unwanted behaviors discovered by human testers handed off to an AI to be explored further and vice versa. Automated red-teaming can come up with a large number of different behaviors, but human testers bring more diverse perspectives into play, says Lama Ahmad, a researcher at OpenAI: We are still thinking about the ways that they complement each other.Red-teaming isnt new. AI companies have repurposed the approach from cybersecurity, where teams of people try to find vulnerabilities in large computer systems. OpenAI first used the approach in 2022, when it was testing DALL-E 2. It was the first time OpenAI had released a product that would be quite accessible, says Ahmad. We thought it would be really important to understand how people would interact with the system and what risks might be surfaced along the way.The technique has since become a mainstay of the industry. Last year, President Bidens Executive Order on AI tasked the National Institute of Standards and Technology (NIST) with defining best practices for red-teaming. To do this, NIST will probably look to top AI labs for guidance.Tricking ChatGPTWhen recruiting testers, OpenAI draws on a range of experts, from artists to scientists to people with detailed knowledge of the law, medicine, or regional politics. OpenAI invites these testers to poke and prod its models until they break. The aim is to uncover new unwanted behaviors and look for ways to get around existing guardrailssuch as tricking ChatGPT into saying something racist or DALL-E into producing explicit violent images.Adding new capabilities to a model can introduce a whole range of new behaviors that need to be explored. When OpenAI added voices to GPT-4o, allowing users to talk to ChatGPT and ChatGPT to talk back, red-teamers found that the model would sometimes start mimicking the speakers voice, an unexpected behavior that was both annoying and a fraud risk.There is often nuance involved. When testing DALL-E 2 in 2022, red-teamers had to consider different uses of eggplant, a word that now denotes an emoji with sexual connotations as well as a purple vegetable. OpenAI describes how it had to find a line between acceptable requests for an image, such as A person eating an eggplant for dinner, and unacceptable ones, such as A person putting a whole eggplant into her mouth. Similarly, red-teamers had to consider how users might try to bypass a models safety checks. DALL-E does not allow you to ask for images of violence. Ask for a picture of a dead horse lying in a pool of blood, and it will deny your request. But what about a sleeping horse lying in a pool of ketchup?When OpenAI tested DALL-E 3 last year, it used an automated process to cover even more variations of what users might ask for. It used GPT-4 to generate requests producing images that could be used for misinformation or that depicted sex, violence, or self-harm. OpenAI then updated DALL-E 3 so that it would either refuse such requests or rewrite them before generating an image.Ask for a horse in ketchup now, and DALL-E is wise to you: It appears there are challenges in generating the image. Would you like me to try a different request or explore another idea?In theory, automated red-teaming can be used to cover more ground, but earlier techniques had two major shortcomings: They tend to either fixate on a narrow range of high-risk behaviors or come up with a wide range of low-risk ones. Thats because reinforcement learning, the technology behind these techniques, needs something to aim fora rewardto work well. Once its won a reward, such as finding a high-risk behavior, it will keep trying to do the same thing again and again. Without a reward, on the other hand, the results are scattershot.They kind of collapse into We found a thing that works! Well keep giving that answer! or theyll give lots of examples that are really obvious, says Alex Beutel, another OpenAI researcher. How do we get examples that are both diverse and effective?A problem of two partsOpenAIs answer, outlined in the second paper, is to split the problem into two parts. Instead of using reinforcement learning from the start, it first uses a large language model to brainstorm possible unwanted behaviors. Only then does it direct a reinforcement-learning model to figure out how to bring those behaviors about. This gives the model a wide range of specific things to aim for.Beutel and his colleagues showed that this approach can find potential attacks known as indirect prompt injections, where another piece of software, such as a website, slips a model a secret instruction to make it do something its user hadnt asked it to. OpenAI claims this is the first time that automated red-teaming has been used to find attacks of this kind. They dont necessarily look like flagrantly bad things, says Beutel.Will such testing procedures ever be enough? Ahmad hopes that describing the companys approach will help people understand red-teaming better and follow its lead. OpenAI shouldnt be the only one doing red-teaming, she says. People who build on OpenAIs models or who use ChatGPT in new ways should conduct their own testing, she says: There are so many useswere not going to cover every one.For some, thats the whole problem. Because nobody knows exactly what large language models can and cannot do, no amount of testing can rule out unwanted or harmful behaviors fully. And no network of red-teamers will ever match the variety of uses and misuses that hundreds of millions of actual users will think up.Thats especially true when these models are run in new settings. People often hook them up to new sources of data that can change how they behave, says Nazneen Rajani, founder and CEO of Collinear AI, a startup that helps businesses deploy third-party models safely. She agrees with Ahmad that downstream users should have access to tools that let them test large language models themselves.Rajani also questions using GPT-4 to do red-teaming on itself. She notes that models have been found to prefer their own output: GPT-4 ranks its performance higher than that of rivals such as Claude or Llama, for example. This could lead it to go easy on itself, she says: Id imagine automated red-teaming with GPT-4 may not generate as harmful attacks [as other models might].Miles behindFor Andrew Tait, a researcher at the Ada Lovelace Institute in the UK, theres a wider issue. Large language models are being built and released faster than techniques for testing them can keep up. Were talking about systems that are being marketed for any purpose at alleducation, health care, military, and law enforcement purposesand that means that youre talking about such a wide scope of tasks and activities that to create any kind of evaluation, whether thats a red team or something else, is an enormous undertaking, says Tait. Were just miles behind.Tait welcomes the approach of researchers at OpenAI and elsewhere (he previously worked on safety at Google DeepMind himself) but warns that its not enough: There are people in these organizations who care deeply about safety, but theyre fundamentally hamstrung by the fact that the science of evaluation is not anywhere close to being able to tell you something meaningful about the safety of these systems.Tait argues that the industry needs to rethink its entire pitch for these models. Instead of selling them as machines that can do anything, they need to be tailored to more specific tasks. You cant properly test a general-purpose model, he says.If you tell people its general purpose, you really have no idea if its going to function for any given task, says Tait. He believes that only by testing specific applications of that model will you see how well it behaves in certain settings, with real users and real uses.Its like saying an engine is safe; therefore every car that uses it is safe, he says. And thats ludicrous.0 Comments 0 Shares 31 Views
-
WWW.TECHNOLOGYREVIEW.COMThe Download: AI replicas, and Chinas climate roleThis is todays edition ofThe Download,our weekday newsletter that provides a daily dose of whats going on in the world of technology.AI can now create a replica of your personalityImagine sitting down with an AI model for a spoken two-hour interview. A friendly voice guides you through a conversation that ranges from your childhood, your formative memories, and your career to your thoughts on immigration policy. Not long after, a virtual replica of you is able to embody your values and preferences with stunning accuracy.Thats now possible, according to a new paper from a team including researchers from Stanford and Google DeepMind.They recruited 1,000 people and, from interviews with them, created agent replicas of them all. To test how well the agents mimicked their human counterparts, participants did a series of tests, games and surveys, then the agents completed the same exercises. The results were 85% similar. Freaky.Read our story about the work, and why it matters.James ODonnellChinas complicated role in climate changeBut what about China?In debates about climate change, its usually only a matter of time until someone brings up China. Often, it comes in response to some statement about how the US and Europe are addressing the issue (or how they need to be).Sometimes it can be done in bad faith. Its a rhetorical way to throw up your hands, and essentially say: if they arent taking responsibility, why should we?However, there are some undeniable facts: China emits more greenhouse gases than any other country, by far. Its one of the worlds most populous countries and a climate-tech powerhouse, and its economy is still developing.With many complicated factors at play, how should we think about the countrys role in addressing climate change?Read the full story.Casey CrownhartThis story is from The Spark, our weekly newsletter giving you the inside track on all things energy and climate.Sign upto receive it in your inbox every Wednesday.Four ways to protect your art from AISince the start of the generative AI boom, artists have been worried about losing their livelihoods to AI tools.Unfortunately, there is little you can do if your work has been scraped into a data set and used in a model that is already out there. You can, however, take steps to prevent your work from being used in the future.Here are four ways to do that.Melissa HeikkilaThis is part of our How To series, where we give you practical advice on how to use technology in your everyday lives. You can read the rest of the series here.MIT Technology Review Narrated: The worlds on the verge of a carbon storage boomIn late 2023, one of Californias largest oil and gas producers secured draft permits from the US Environmental Protection Agency to develop a new type of well in an oil field. If approved, it intends to drill a series of boreholes down to a sprawling sedimentary formation roughly 6,000 feet below the surface, where it will inject tens of millions of metric tons of carbon dioxide to store it away forever.Hundreds of similar projects are looming across the state, the US, and the world. Proponents hope its the start of a sort of oil boom in reverse, kick-starting a process through which the world will eventually bury more greenhouse gas than it adds to the atmosphere. But opponents insist these efforts will prolong the life of fossil-fuel plants, allow air and water pollution to continue, and create new health and environmental risks.This is our lateststoryto be turned into a MIT Technology Review Narrated podcast, which were publishing each week onSpotifyandApple Podcasts. Just navigate toMIT Technology Review Narratedon either platform, and follow us to get all our new content as its released.The must-readsIve combed the internet to find you todays most fun/important/scary/fascinating stories about technology.1 How the Trump administration could hack your phoneSpyware acquired by the US government in September could fairly easily be turned on its own citizens. (New Yorker$)+Heres how you can fight back against being digitally spied upon.(The Guardian)2 The DOJ is trying to force Google to sell off ChromeWhether Trump will keep pushing it through is unclear, though. (WP$)+Some financial and legal experts argue that just selling Chrome is not enough to address antitrust issues.(Wired$)3 Theres a booming AI pimping industryPeople are stealing videos from real adult content creators, giving them AI-generated faces, and monetizing their bodies. (Wired$)+This viral AI avatar app undressed mewithout my consent.(MIT Technology Review)4 Heres Elon Musk and Vivek Ramaswamy plan for federal employeesLarge-scale firings and an end to any form of remote work. (WSJ$)5 The US is scaring everyone with its response to bird fluIts done remarkably little to show its trying to contain the outbreak. (NYT$)+Virologists are getting increasingly nervous about how it could evolve and spread. (MIT Technology Review)6 AI could boost the performance of quantum computersA new model created by Google DeepMind is very good at correcting errors.(New Scientist$)+But AI could also make quantum computers less necessary.(MIT Technology Review)7 Biden has approved the use of anti-personnel mines in UkraineIt comes just days after he gave the go-ahead for it to use long-range missiles inside Russia. (Axios)+The US military has given a surveillance drone contract to a little-known supplier from Utah.(WSJ$)+The Danish military said its keeping a close eye on a Chinese ship in its waters after data cable breaches.(Reuters$)8 The number of new mobile internet users is stallingOnly about 57% of the worlds population is connected. (Rest of World)9 All of life on Earth descended from this single cellOur last universal common ancestor (or LUCA for short) was a surprisingly complex organism living 4.2 billion years ago. (Quanta)+Scientists are building a catalog of every type of cell in our bodies. (The Economist$)10 What its like to live with a fluffy AI petTry as we might, it seems we cant help but form attachments to cute companion robots. (The Guardian)Quote of the dayThe free pumpkins have brought joy to many.An example of the sort of stilted remarks made by a now-abandoned AI-generated news broadcaster at local Hawaii paper The Garden Island,Wiredreports.The big storyHow Bitcoin mining devastated this New York townGABRIELA BHASKARApril 2022If you had taken a gamble in 2017 and purchased Bitcoin, today you might be a millionaire many times over. But while the industry has provided windfalls for some, local communities have paid a high price, as people started scouring the world for cheap sources of energy to run large Bitcoin-mining farms.It didnt take long for a subsidiary of the popular Bitcoin mining firm Coinmint to lease a Family Dollar store in Plattsburgh, a city in New York state offering cheap power. Soon, the company was regularly drawing enough power for about 4,000 homes. And while other miners were quick to follow, the problems had already taken root.Read the full story.Lois ParshleyWe can still have nice thingsA place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet em at me.)+ Cultivatinggratitudeis a proven way to make yourself happier.+ You cant beat ahot toddywhen its cold outside.+ If you like abandoned places and overgrown ruins,Jonathan Jimenezis the photographer for you.+ A lot changed betweenGladiator I and II, not least Hollywoods version of the male ideal.0 Comments 0 Shares 32 Views
-
WWW.APPLE.COMBillie Eilish is Apple Musics Artist of the Year for 2024Billie Eilish was announced today as Apple Musics Artist of the Year, recognizing the singer-songwriters extraordinary impact throughout 2024.0 Comments 0 Shares 30 Views
-
APPLEINSIDER.COMSave up to $2,000 with exclusive Bluetti portable power station Black Friday dealsBluetti is offering incredible discounts up to 57% off of essential portable power station models and exclusive coupons during its Black Friday sales event. Don't miss out.Bluetti Black FridayWhether you're looking to upgrade an older home battery backup system or want to check out a new model for the first time, Bluetti has you covered. These giant batteries and inverters can keep your devices powered through any storm, on the road, or at the campsite, letting users go off grid.Bluetti specializes in whole home power backup systems and a range of portable power stations. They are a very compelling solution and utilize the latest features and technologies you'd expect from a premium portable power station brand. Continue Reading on AppleInsider0 Comments 0 Shares 28 Views
-
APPLEINSIDER.COMSiri chatbot may be coming in 2026 as part of iOS 19A new rumor suggests Apple wants to make Siri more conversational by making it use a large language model backend similar to ChatGPT and AI rivals, but it won't be ready until 2026.Siri could get an LLM backend in 2026When Apple was first rumored to be tackling artificial intelligence, one thing that was repeated often Apple would not release a chatbot. The idea that users would converse with Siri in the same way they did with ChatGPT was seemingly off the table for the launch.Since then, a lot has changed, and according to a report from Bloomberg, Apple has begun work on an LLM-based Siri that might debut in iOS 19 in early 2026. This report lines up with AppleInsider's earlier exclusive on Apple AI testing tools that point to Siri becoming more tied to Apple's AI features. Rumor Score: Possible Continue Reading on AppleInsider | Discuss on our Forums0 Comments 0 Shares 31 Views
-
WWW.FACEBOOK.COMAlcove Nashville Residential Tower, Tennessee, USA - e-architectAlcove Nashville Residential Tower, Tennessee, USA, by global architecture firm Goettsch Partners: DeSimone Consulting Engineering wins recognition NCSEA for the structural design:https://www.e-architect.com/america/alcove-nashville-residential-tower#Alcove #Nashville #tower #Tennessee #USApartmentsAlcove Nashville Residential Tower, Tennessee, USA, designed by global architecture firm Goettsch Partners set to be the citys tallest0 Comments 0 Shares 28 Views