• 9TO5MAC.COM
    Apple says Siri data has never been used for marketing profiles or sold to anyone for any purpose
Last week, Apple agreed to pay $95 million to settle a lawsuit that alleged unlawful and intentional recording of Siri interactions. Apple's settlement has led to a wave of conspiracy theories claiming that Siri is listening to you for targeted advertising, but the company says those claims are completely unfounded.

Siri's privacy controversy

As a refresher, the lawsuit stemmed from a 2019 report in the Guardian that revealed Apple's use of contractors to grade Siri interactions. The whistleblower in the story alleged that those contractors would regularly hear private interactions from users as part of their work providing quality control for Siri. At the time, Apple quickly responded to the allegations by saying that less than 1% of daily Siri activations were used for grading and that those activations were typically only a few seconds long. The interactions were also bound by Apple's strict confidentiality agreement and were not associated with a user's Apple ID.

Apple also subsequently announced several changes to Siri's privacy protections in a post on Apple Newsroom. The big change was that, by default, Apple no longer retained recordings of Siri interactions. Instead, users could opt in to help Siri improve by learning from the audio samples of their requests. Apple also said that only Apple employees would be allowed to listen to audio samples of Siri interactions, not third-party contractors, and that any recording determined to be an inadvertent trigger of Siri would be swiftly deleted.

Is your iPhone listening to you to show you ads? Nope.

Fast forward to 2025, and Apple agreed last week to settle that 2019 lawsuit with a $95 million payout to users. In a statement to 9to5Mac today, Apple said it settled the case so it can "move forward from concerns about third-party grading that we already addressed in 2019." The company says that Siri data has never been used to build marketing profiles, and it has never been sold to anyone for any purpose.

Here is the full statement from an Apple spokesperson:

"Siri has been engineered to protect user privacy from the beginning. Siri data has never been used to build marketing profiles and it has never been sold to anyone for any purpose. Apple settled this case to avoid additional litigation so we can move forward from concerns about third-party grading that we already addressed in 2019. We use Siri data to improve Siri, and we are constantly developing technologies to make Siri even more private."

Last week's news spawned a number of unfounded conspiracy theories using Apple's settlement as evidence that your iPhone is always listening to you and spying on you for the purposes of targeted advertising. Apple tells me this is absolutely not the case, and that what you share with Siri is never shared with advertisers.

Apple says it repeatedly denied allegations throughout the lawsuit that Siri recordings were used to target advertisements, and no evidence was presented to suggest otherwise. In fact, Siri interactions are tied to a random identifier that lets Apple keep track of data during processing. Those interactions are not tied to your Apple Account, phone number, or any other identifying information. After six months, that request history is also unlinked from that random identifier. All of these details (and more) are emphasized on a webpage on Apple's website dedicated to Siri and Dictation privacy.

Additionally, you can manually review and delete Siri transcripts directly in Settings. Just go to the Settings app and look for the Siri & Dictation History option. Some Siri requests are also handled entirely on-device. For example, if you ask Siri to read unread messages, it does so by simply instructing your iPhone to read your messages aloud. The content of the message is not sent to Apple servers.

For Apple Intelligence features, Apple also emphasizes its use of Private Cloud Compute. Apple's Private Cloud Compute infrastructure is built on its own Apple Silicon chips and is open to third-party researchers to ensure privacy protections.

9to5Mac's Take

All this to say, the headlines making the rounds suggesting that this lawsuit is evidence your phone is always listening to you are nothing but unfounded conspiracy theories. In fact, with the privacy protections Apple has put in place, it is quite literally impossible for your interactions with Siri to be used for targeted advertising.

This, of course, doesn't excuse Apple's reactive rather than proactive approach to the situation that first arose in 2019. Apple should've had more privacy protections in place before then, and it shouldn't have taken a whistleblower for it to respond. The system should've been opt-in from the start. Nonetheless, Apple has continued to double down on Siri's privacy protections since then.
  • THEHACKERNEWS.COM
    India Proposes Digital Data Rules with Tough Penalties and Cybersecurity Requirements
The Indian government has published a draft version of the Digital Personal Data Protection (DPDP) Rules for public consultation.

"Data fiduciaries must provide clear and accessible information about how personal data is processed, enabling informed consent," India's Press Information Bureau (PIB) said in a statement released Sunday. "Citizens are empowered with rights to demand data erasure, appoint digital nominees, and access user-friendly mechanisms to manage their data."

The rules, which seek to operationalize the Digital Personal Data Protection Act, 2023, also give citizens greater control over their data, providing them with options for giving informed consent to the processing of their information, as well as the right to have their data erased by digital platforms and to have grievances addressed.

Companies operating in India are further required to implement security measures, such as encryption, access control, and data backups, to safeguard personal data and ensure its confidentiality, integrity, and availability.

Some of the other notable provisions that data fiduciaries are expected to comply with are listed below:

- Implement mechanisms for detecting and addressing breaches, and maintain logs
- In the event of a data breach, provide detailed information about the sequence of events that led to the incident, actions taken to mitigate the threat, and the identity of the individual(s), if known, within 72 hours (or more, if permitted) to the Data Protection Board (DPB)
- Delete personal data that is no longer needed after a three-year period, and notify individuals 48 hours before erasing such information
- Clearly display on their websites/apps the contact details of a designated Data Protection Officer (DPO) who is responsible for addressing any questions regarding the processing of users' personal data
- Obtain verifiable consent from parents or legal guardians prior to processing the personal data of children under 18 or persons with disabilities (exemptions include healthcare professionals, educational institutions, and childcare providers, but only for specific activities like health services, educational activities, safety monitoring, and transportation tracking)
- Conduct a Data Protection Impact Assessment (DPIA) and a comprehensive audit once every year, and report the results to the DPB (limited to data fiduciaries deemed "significant")
- Adhere to requirements the federal government sets for cross-border data transfers (the exact categories of personal data that must remain within India's borders will be determined by a specialized committee)

The draft rules have also proposed certain safeguards for citizens when their data is being processed by federal and state government agencies, requiring that such processing happen in a manner that's lawful, transparent, and "in line with legal and policy standards."

Organizations that misuse or fail to safeguard individuals' digital data, or that fail to notify the DPB of a security breach, can face monetary penalties of up to ₹250 crore (nearly $30 million).

The Ministry of Electronics and Information Technology (MeitY) is soliciting feedback from the public on the draft regulations until February 18, 2025. It also said the submissions will not be disclosed to any party.

The DPDP Act was formally passed in August 2023 after being reworked several times since 2018. The data protection regulation came forth in the wake of a 2017 ruling from India's top court, which reaffirmed the right to privacy as a fundamental right under the Constitution of India.

The development comes over a month after the Department of Telecommunications issued the Telecommunications (Telecom Cyber Security) Rules, 2024, under the Telecommunications Act, 2023, to secure communication networks and impose stringent data breach disclosure guidelines. According to the new rules, a telecom entity must report any security incident affecting its network or services to the federal government within six hours of becoming aware of it, with the affected company also sharing additional relevant information within 24 hours.

In addition, telecommunications companies are required to appoint a Chief Telecommunication Security Officer (CTSO), who must be an Indian citizen and a resident of India, and to share traffic data, excluding message content, with the federal government in a specified format for "protecting and ensuring telecom cybersecurity." However, the Internet Freedom Foundation (IFF) said the "overbroad phrasing" and the removal of the definition of "traffic data" from the draft could open the door for misuse.
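To make the compliance timelines in these two sets of rules easier to see side by side, here is a minimal, purely illustrative sketch that encodes the reported figures (the draft DPDP Rules' 72-hour breach report, three-year retention limit, and 48-hour erasure notice, and the telecom rules' six-hour incident report) as deadline calculations. The structure and names are assumptions for illustration only, not an official compliance tool or legal advice.

```python
# Illustrative only: deadline arithmetic for the timelines reported above.
from dataclasses import dataclass
from datetime import datetime, timedelta

DPDP_BREACH_REPORT_WINDOW = timedelta(hours=72)  # detailed report to the Data Protection Board
TELECOM_INCIDENT_WINDOW = timedelta(hours=6)     # initial report for telecom entities
RETENTION_PERIOD = timedelta(days=3 * 365)       # data no longer needed is deleted after ~3 years
ERASURE_NOTICE_WINDOW = timedelta(hours=48)      # individuals are notified before erasure

@dataclass
class PersonalDataRecord:
    subject_id: str
    last_needed_at: datetime  # when the data last served its stated purpose

def dpdp_breach_report_deadline(detected_at: datetime) -> datetime:
    """Latest time a detailed breach report is due to the DPB."""
    return detected_at + DPDP_BREACH_REPORT_WINDOW

def telecom_incident_report_deadline(detected_at: datetime) -> datetime:
    """Latest time a telecom entity's initial incident report is due."""
    return detected_at + TELECOM_INCIDENT_WINDOW

def erasure_schedule(record: PersonalDataRecord) -> tuple[datetime, datetime]:
    """Return (notify_subject_at, erase_at) for data past its retention period."""
    erase_at = record.last_needed_at + RETENTION_PERIOD
    notify_at = erase_at - ERASURE_NOTICE_WINDOW
    return notify_at, erase_at

if __name__ == "__main__":
    detected = datetime(2025, 1, 6, 9, 0)
    print("DPDP breach report due by:", dpdp_breach_report_deadline(detected))
    print("Telecom incident report due by:", telecom_incident_report_deadline(detected))
    print("Erasure schedule:", erasure_schedule(PersonalDataRecord("user-123", datetime(2022, 1, 10))))
```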
  • SCREENCRUSH.COM
    Sonic the Hedgehog Franchise Passes $1 Billion at Box Office
The Sonic the Hedgehog movie franchise has crossed $1 billion at the global box office.

The latest entry in the series, Sonic the Hedgehog 3, which is inspired by the Sega video game franchise of the same name, has so far grossed $336.5 million worldwide since its theatrical release on December 20, 2024, bringing the series' total earnings up to $1.06 billion after Sonic the Hedgehog 2 and its 2020 predecessor raked in $405 million and $320 million respectively.

This major milestone has cemented Sonic the Hedgehog as one of the most profitable video game movie adaptations in Hollywood history, alongside the likes of The Super Mario Bros. Movie, Pokémon, and the Resident Evil films. Due to the success of Sonic the Hedgehog 3, Paramount Pictures is working on a fourth film, with the studio aiming for a spring 2027 release.

Sonic the Hedgehog 3 follows the titular hero (voiced by Ben Schwartz) as he, Tails (Colleen O'Shaughnessey) and his human accomplice Tom Wachowski (James Marsden) attempt to stop Shadow the Hedgehog (Keanu Reeves) from destroying the world. The film, which also stars Krysten Ritter and Idris Elba, sees the return of Jim Carrey's baddie Dr. Ivo Robotnik.

Even though the 62-year-old actor had previously vowed to retire from Hollywood, director Jeff Fowler always knew Carrey would return for Sonic the Hedgehog 3 due to the movie's fun concept. The filmmaker explained to Variety: "In my heart of hearts, I felt like if we offered Jim a fun concept and if we dangled just the right carrot, he'd come back. He loves entertaining young audiences."

Carrey revealed when promoting Sonic the Hedgehog 2 in 2022 that he was fairly serious about his plans to retire from acting, but admitted that he returned for the recent flick as he needed some cash after blowing through his savings. The Mask star explained: "I came back to this universe because, first of all, I get to play a genius, which is a bit of a stretch. And, you know, it's just ... I bought a lot of stuff and I need the money, frankly."
  • SCREENCRUSH.COM
    20 Once-Beloved Mall Stores That Faded Away
It seems like every day there's a new article online about a ghost mall somewhere; a former shopping mecca decimated by changing consumer habits and demographics and left to decay into something that looks like the setting of a zombie film.

(Side note: How long before someone makes Ghost Mall, a winking low-budget horror movie about an abandoned mall that was built on an unmarked graveyard and is actually haunted? Or a film called Retail Apocalypse, about the survivors of a cataclysmic war holed up inside an abandoned Neiman Marcus? Wait, why am I giving these great ideas away for free?!? I'm an idiot.)

Kids these days (shakes fist) have no idea what they missed out on when these ghost malls were alive and well. I've seen things you young people wouldn't believe. Hot Topics on fire off the shoulder of the arcade. I watched Spencer's Gifts glitter in the dark near the Searsgate. All those moments will be lost in time, like tears in rain.

A younger generation might think it is time for these malls to die. They may be right. But those of us who grew up haunting spaces that are now considered ghost malls have intensely fond memories of them. In the 1980s and '90s, malls were where you went to hang out with friends, to socialize, and to buy stuff you didn't need but desperately wanted. (I take it back, I definitely needed those Z. Cavaricci pants.)

In this piece, we look back at 20 of the now-defunct (or almost-entirely defunct) mall chains we'd love to visit one more time. Grab yourself an Auntie Anne's pretzel and an Orange Julius and prepare yourself for a trip down memory lane ... or maybe the memory food court, in this case.

20 Vintage Mall Stores We Wish We Could Visit One More Time

If you spent a lot of time in malls in the '80s and '90s, you definitely visited some of these classic and now non-existent (or barely existent) stores.
  • WEWORKREMOTELY.COM
Publitas.com B.V.: Performance Marketer (SaaS / Remote / Europe)
Time zones: SBT (UTC +11), GMT (UTC +0), CET (UTC +1), EET (UTC +2), MSK (UTC +3)

Ready to have an impact?

Publitas empowers businesses to deliver paperless discovery-commerce experiences that engage, inspire, and have the potential to reach more customers than was ever possible. We combine a healthy dose of persistence with the will to embrace crazy ideas and push new boundaries. Guided by a desire to do things better, we want to improve the world around us.

Note from the hiring manager: We seek a data-driven and highly analytical Performance Marketer to join our dynamic Marketing team at Publitas. The ideal candidate will be passionate about digital marketing strategies to drive measurable results, increase customer acquisition, and maximise ROI across various marketing channels. This role involves combining strategic planning, creative marketing, and deep analysis to optimise campaign performance and achieve business goals.

Ready to have an impact with us? Start the application process by filling out the screening questionnaire to see if Publitas is a good fit for you.

Take ownership by:

- Developing and executing comprehensive performance marketing strategies to meet or exceed key performance indicators (KPIs) and business objectives.
- Managing and optimising paid search, social media, display, and retargeting campaigns across platforms such as Google Ads, LinkedIn, Facebook, and more.
- Conducting A/B testing and continuous campaign analysis to identify optimisation opportunities for improving campaign performance and scaling successful initiatives.
- Collaborating with the content marketer as well as the graphic designer to create high-impact advertisements, landing pages, and marketing collateral that resonate with our target B2B audience.
- Utilising analytics and marketing automation tools to track campaign performance, analyse customer behaviour, and provide actionable insights for optimisation.
- Staying abreast of industry trends, tools, and best practices in performance marketing to drive innovation and maintain a competitive edge in the market.
- Working closely with sales and product teams to align marketing strategies with business goals and ensure a cohesive customer journey from initial engagement to conversion.

Job requirements

- You have proven experience in performance marketing, specifically within a B2B SaaS environment, and prior experience targeting Publitas' specific ICP in the retail sector and successfully generating SQLs.
- You have strong analytical skills with the ability to interpret data, identify trends, and make data-driven decisions.
- You are proficient in digital marketing tools and platforms, including Google Analytics, CRM software, and marketing automation tools.
- You have in-depth knowledge of digital marketing channels, including PPC, paid social media, display advertising, email marketing, and affiliate marketing.
- You have excellent communication and collaboration skills to work effectively across teams and with stakeholders at all levels.
- You are a creative thinker with a test-and-learn mentality to drive continuous improvement in marketing efforts.

What we provide to help you achieve results:

- A competitive salary. Salaries are assessed based on your relevant experience, level of seniority, and location.
- Twenty-five vacation days per year and your national holidays off.
- Work from anywhere you desire.
- A monthly shared office space/co-working allowance.
- A one-time home office setup stipend.
- A top-of-the-line MacBook.
- A monthly wellness allowance to stay healthy while working remotely.
- Annual retreats in some of the greatest cities in the world.
- Free books from the Kindle and Audible stores.
- We'll challenge and support you to get the most out of your potential through personal 1-1 sessions.

Publitas is proud to be an Equal Opportunity Employer. We strive to create an inclusive environment that empowers our employees all over the world. We want you to feel welcome, respected, and valued for who you are. It's our differences that make us stronger! We celebrate diversity and are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. Publitas welcomes all; we invite you to apply and join us!
  • WWW.TECHNOLOGYREVIEW.COM
    The Download: our 10 Breakthrough Technologies for 2025
This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

Introducing: MIT Technology Review's 10 Breakthrough Technologies for 2025

Each year, we spend months researching and discussing which technologies will make the cut for our 10 Breakthrough Technologies list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more.

We've been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. It's hard to think of another industry that has as much of a hype machine behind it as tech does, so the real secret of the TR10 is really what we choose to leave off the list.

Check out the full list of our 10 Breakthrough Technologies for 2025, which is front and center in our latest print issue. It's all about the exciting innovations happening in the world right now, and includes some fascinating stories, such as:

+ How digital twins of human organs are set to transform medical treatment and shake up how we trial new drugs.
+ What will it take for us to fully trust robots? The answer is a complicated one.
+ Wind is an underutilized resource that has the potential to steer the notoriously dirty shipping industry toward a greener future. Read the full story.
+ After decades of frustration, machine-learning tools are helping ecologists to unlock a treasure trove of acoustic bird data, and to shed much-needed light on their migration habits. Read the full story.
+ How poop could help feed the planet (yes, really). Read the full story.

Roundtables: Unveiling the 10 Breakthrough Technologies of 2025

Last week, Amy Nordrum, our executive editor, joined our news editor Charlotte Jee to unveil our 10 Breakthrough Technologies of 2025 in an exclusive Roundtable discussion. Subscribers can watch their conversation back here. And, if you're interested in previous discussions about topics ranging from mixed reality tech to gene editing to AI's climate impact, check out some of the highlights from the past year's events.

This international surveillance project aims to protect wheat from deadly diseases

For as long as there's been domesticated wheat (about 8,000 years), there has been harvest-devastating rust. Breeding efforts in the mid-20th century led to rust-resistant wheat strains that boosted crop yields, and rust epidemics receded in much of the world. But now, after decades, rusts are considered a reemerging disease in Europe, at least partly due to climate change.

An international initiative hopes to turn the tide by scaling up a system to track wheat diseases and forecast potential outbreaks to governments and farmers in close to real time. And by doing so, they hope to protect a crop that supplies about one-fifth of the world's calories. Read the full story.

Shaoni Bhattacharya

The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 Meta has taken down its creepy AI profiles
Following a big backlash from unhappy users. (NBC News)
+ Many of the profiles were likely to have been live from as far back as 2023. (404 Media)
+ It also appears they were never very popular in the first place. (The Verge)

2 Uber and Lyft are racing to catch up with their robotaxi rivals
After abandoning their own self-driving projects years ago. (WSJ $)
+ China's Pony.ai is gearing up to expand to Hong Kong. (Reuters)

3 Elon Musk is going after NASA
He's largely veered away from criticising the space agency publicly, until now. (Wired $)
+ SpaceX's Starship rocket has a legion of scientist fans. (The Guardian)
+ What's next for NASA's giant moon rocket? (MIT Technology Review)

4 How Sam Altman actually runs OpenAI
Featuring three-hour meetings and a whole lot of Slack messages. (Bloomberg $)
+ ChatGPT Pro is a pricey loss-maker, apparently. (TechCrunch)

5 The dangerous allure of TikTok
Migrants' online portrayals of their experiences in America aren't always reflective of their realities. (New Yorker $)

6 Demand for electricity is skyrocketing
And AI is only a part of it. (Economist $)
+ AI's search for more energy is growing more urgent. (MIT Technology Review)

7 The messy ethics of writing religious sermons using AI
Skeptics aren't convinced the technology should be used to channel spirituality. (NYT $)

8 How a wildlife app became an invaluable wildfire tracker
Watch Duty has become a safeguarding sensation across the US west. (The Guardian)
+ How AI can help spot wildfires. (MIT Technology Review)

9 Computer scientists just love oracles
Hypothetical devices are a surprisingly important part of computing. (Quanta Magazine)

10 Pet tech is booming
But not all gadgets are made equal. (FT $)
+ These scientists are working to extend the lifespan of pet dogs, and their owners. (MIT Technology Review)

Quote of the day

"The next kind of wave of this is like, well, what is AI doing for me right now other than telling me that I have AI?"

Anshel Sag, principal analyst at Moor Insights and Strategy, tells Wired a lot of companies' AI claims are overblown.

The big story

Broadband funding for Native communities could finally connect some of America's most isolated places

September 2022

Rural and Native communities in the US have long had lower rates of cellular and broadband connectivity than urban areas, where four out of every five Americans live. Outside the cities and suburbs, which occupy barely 3% of US land, reliable internet service can still be hard to come by.

The covid-19 pandemic underscored the problem as Native communities locked down and moved school and other essential daily activities online. But it also kicked off an unprecedented surge of relief funding to solve it. Read the full story.

Robert Chaney

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)

+ Rollerskating Spice Girls is exactly what your Monday morning needs.
+ It's not just you, some people really do look like their dogs!
+ I'm not sure if this is actually the world's healthiest meal, but it sure looks tasty.
+ Ah, the old "bitten by a rabid fox" chestnut.
  • WWW.TECHNOLOGYREVIEW.COM
AI means the end of internet search as we've known it
We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it's just fetching information that's already out there on the internet and showing it to you, in some sort of structured way.

But all that is up for grabs. We are at a new inflection point.

The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we're entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you'll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.

Of course, Google, the company that has defined search for the past 25 years, is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as "one of the most positive changes we've done to search in a long, long time."

AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like "I'm going to Japan for one week next month. I'll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?" And you'll get an answer: not just a link to Reddit, but a built-out answer with current results.

More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don't have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It's amazing, and once you start searching that way, it's addictive.

And it's not just Google. OpenAI's ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a "move fast, break things" ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval: the next Google.

Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a "zero-click" future, where search referral traffic, a mainstay of the web since before Google existed, vanishes from the scene.

I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources. On that day, it pushed me a story about a new drone company from Eric Schmidt.

I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity's story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn't alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages.

It was precisely the nightmare scenario publishers have been so afraid of: the AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn't really leave any reason to click through to the original. In fact, on Perplexity's About page, the first reason it lists to choose the search engine is "Skip the links."

But this isn't just about publishers (or my own self-interest). People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up; they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer.

But make no mistake: this is the future of search. Try it for a bit yourself, and you'll see.

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it's not exactly broken (data shows more people are searching with Google more often than ever before), it's at the very least increasingly cluttered and daunting to navigate.

Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?

In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn't tell you what was in those files, just their names. It didn't preview images; it didn't have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good.

Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy's home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed.

And so in 1994 Jerry Yang created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was... well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was.

But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late '90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good, a huge improvement. At least at first.

But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad.

And then came Google. It's hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: the more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing.

For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)

But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google's chief scientist for search. "It's not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets," he says, rattling off a litany of Google's steps over the years to answer questions more directly.

It's true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer (the live score to a game, the hours a café is open, or a snippet from the FDA's website) rather than being pointed to a website where the answer may be.

But once you've used AI Overviews a bit, you realize they are different.

Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google's Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It's in a database. You can look it up.

Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model's predictive text combined with an index of the web.

"I think it's an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We've been using LLMs and generative AI to improve our understanding of all that," Pichai told MIT Technology Review. "But now we are able to generate and compose with that."
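The mechanism described here, retrieving passages from a web index and having a language model compose a fresh answer from them, is commonly known as retrieval-augmented generation. The sketch below is a generic illustration of that loop under stated assumptions; it is not Google's actual pipeline, and the retriever, model call, and prompt wording are placeholders.

```python
# Generic retrieval-augmented answering loop (illustrative sketch, not Google's system).
from typing import Callable

def answer_query(
    query: str,
    search_index: Callable[[str, int], list[dict]],  # returns [{"url": ..., "snippet": ...}, ...]
    generate: Callable[[str], str],                   # any text-generation model
    k: int = 5,
) -> dict:
    # 1. Retrieve the top-k passages the index considers relevant to the query.
    passages = search_index(query, k)

    # 2. Build a prompt that asks the model to answer from those passages, citing them.
    sources = "\n".join(f"[{i + 1}] {p['url']}: {p['snippet']}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using only the numbered sources below, citing them.\n"
        f"Sources:\n{sources}\n\nQuestion: {query}\nAnswer:"
    )

    # 3. The answer text is generated anew each time; the source URLs travel with it.
    return {"answer": generate(prompt), "sources": [p["url"] for p in passages]}
```

Unlike a featured snippet, the text returned in step 3 is not quoted from a single stored source, which is why the same question can come back with different wording, and occasionally different claims, each time.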
The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.)

"[The company's] mission is organizing the world's information," Liz Reid, Google's head of search, tells me from its headquarters in Mountain View, California. "But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world's information or making it truly useful and accessible to you."

That second concept, accessibility, is what Google is really keying in on with AI Overviews. It's a sentiment I hear echoed repeatedly while talking to Google execs: they can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language.

That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even showed off the ability to query live video.

"We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we've been in the past decade," says Pichai.

There are some real hazards here. First and foremost: large language models will lie to you. They hallucinate. They get shit wrong. When it doesn't have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous.

In May 2024, AI Overviews were rolled out to everyone in the US. Things didn't go well. Google, long the world's reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls "adversarial queries," those designed to trip it up. But still. It didn't look good. The company quickly went to work fixing the problems, for example by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from.

Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that MIT Technology Review launched its online presence in late 2022. This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out? I came across several examples like this, both in Google and in OpenAI's ChatGPT search. Stuff that's just far enough off the mark not to be immediately seen as wrong.

Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. "When we produce AI Overviews," says Nayak, "we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don't want to look further, we hope that you will still get a reliable, trustworthy answer."

In the case above, the 2022 answer seemingly came from a reliable source: a story about MIT Technology Review's email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings, raters, to evaluate the results it delivers for accuracy. Ratings don't correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too.

"Raters who look at your experiments may not notice the hallucination because it feels sort of natural," says Nayak. "And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone's able to point out and say, 'That's a problem.'"

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.

Google: The search giant has added AI Overviews to search results. These overviews take information from around the web and Google's Knowledge Graph and use the company's Gemini language model to create answers to search queries. What it's good at: Google's AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most internet-y. But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity: Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries. What it's good at: Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It's also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. What it's good at: Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions, like planning a vacation through multiple search sessions. OpenAI says users sometimes go 20 turns deep in researching queries. Of these three, it makes links out to publishers least prominent.

When I talked to Pichai about this, he expressed optimism about the company's ability to maintain accuracy even with the LLM generating responses. That's because AI Overviews is based on Google's flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web.

"You're always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I'd say 99-point-few-nines. I think that's the bar we operate at, and it is true with AI Overviews too," he says. "And so the question is, are we able to do this again at scale? And I think we are."

There's another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone's darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn't just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful.

"If you go and say 'How do I build a bomb?' it's fine that there are web results. It's the open web. You can access anything," Reid says. "But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don't think that's worth it."

But perhaps the greatest hazard, or biggest unknown, is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. What reason will people have to click through to the original source, if all the information they seek is right there in the search result?

Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend. "If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble," he says.

Don't panic, is Pichai's message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. "The underlying principle is people are coming looking for information. They're not looking for Google always to just answer," he says. "Sometimes yes, but the vast majority of the times, you're looking at it as a jumping-off point."

Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: "You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn't have to rank for the generic query."

"I'm going to start with something risky," Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he's showing off OpenAI's new web search tool a few weeks before it launches. "I should normally try this beforehand, but I'm just gonna search for you," he says. "This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet."

He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it's the right answer. Phew?

A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won't say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own.
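The behavior described above, where the model itself decides whether a query needs fresh information and then folds the search results into its reply, is essentially a tool-use loop. The sketch below is a hypothetical illustration of that pattern, not OpenAI's implementation; the yes/no routing prompt, search function, and result format are all invented for the example.

```python
# Generic "search as a tool" loop (illustrative sketch, not ChatGPT's actual logic).
from typing import Callable

def chat_with_optional_search(
    user_message: str,
    model: Callable[[str], str],              # any text-generation model
    web_search: Callable[[str], list[dict]],  # returns [{"title": ..., "url": ..., "snippet": ...}, ...]
    force_search: bool = False,               # mirrors the manual "search the web" option
) -> str:
    # Ask the model whether the question needs up-to-date information from the web.
    needs_search = force_search or model(
        "Does answering this require current information from the web? Reply YES or NO.\n"
        f"Question: {user_message}"
    ).strip().upper().startswith("YES")

    if not needs_search:
        # Answer from the model's own training data, which has a fixed cutoff date.
        return model(user_message)

    # Otherwise run the search and hand the results back to the model, keeping the links.
    results = web_search(user_message)
    context = "\n".join(f"- {r['title']} ({r['url']}): {r['snippet']}" for r in results)
    return model(
        "Using the web results below, answer the question and cite the URLs you used.\n"
        f"{context}\n\nQuestion: {user_message}"
    )
```

Carrying context across follow-up questions, which the article notes ChatGPT handles well, would simply mean passing the earlier turns of the conversation into each of these model calls.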
OpenAI won't reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it.

According to Fishkin, these newer forms of AI-assisted search aren't yet challenging Google's search dominance. "It does not appear to be cannibalizing classic forms of web search," he says.

OpenAI insists it's not really trying to compete on search, although frankly this seems to me like a bit of expectation setting. Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more.

"I come at it from the perspective of 'How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?' And that's where search comes in for us," Kevin Weil, the chief product officer with OpenAI, tells me. "There's an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you."

Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT's interface has long been, well, boring, search results bring in all sorts of multimedia: images, graphs, even video. It's a very different experience.

Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google, even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It's mostly setting large amounts of money on fire right now; it's projected to lose $14 billion in 2026, by some reports. But one thing it doesn't have to worry about is putting ads in its search results as Google does.

Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.)

But the thing is, for web search to accomplish what OpenAI wants, to be more current than the language model, it also has to bring in information from all sorts of publishers and sources that it doesn't have deals with. OpenAI's head of media partnerships, Varun Shetty, told MIT Technology Review that it won't give preferential treatment to its publishing partners.

Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me, when Turley ran that name search, it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I've ever written. But ChatGPT didn't link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, risky.

When I asked him about it, he couldn't really explain why the model chose the sources that it did, because the model itself makes that evaluation. The company helps steer it by identifying, sometimes with the help of users, what it considers better answers, but the model actually selects them. "And in many cases, it gets it wrong, which is why we have work to do," said Turley. "Having a model in the loop is a very, very different mechanism than how a search engine worked in the past."

Indeed! The model, whether it's OpenAI's GPT-4o or Google's Gemini or Anthropic's Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when that comes to its own answers.

It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from "mobile first" to "AI first": "But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available, be it at home, at work, in the car, or on the go, and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent."

We're there now, sort of. And it's a weird place to be. It's going to get weirder. That's especially true as these things we now think of as distinct (querying a search engine, prompting a model, looking for a photo we've taken, deciding what we want to read or watch or hear, asking for a photo we wish we'd taken, and didn't, but would still like to see) begin to merge.

The search results we see from generative AI are best understood as a waypoint rather than a destination. What's most important may not be search in itself; rather, it's that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities.

"A ChatGPT that can understand and access the web won't just be about summarizing results. It might be about doing things for you. And I think there's a fairly exciting future there," says OpenAI's Weil. "You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It's just once the model understands how to use the internet, the sky's the limit."

This is the agentic future we've been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets. Let's say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel, all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won't have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.

"It's not always going to be just doing search and giving answers," says Pichai. "Sometimes it's going to be actions. Sometimes you'll be interacting within the real world. So there is a notion of universal assistance through it all."

And the ways these things will be able to deliver answers is evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices. "Show me what a Townsend's warbler looks like in the tree in front of me." Or "Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks."

"We have primarily done it on the input side," he says, referring to the ways Google can now search for an image or within a video. "But you can imagine it on the output side too."

This is the kind of future Pichai says he is excited to bring online. Google has already showed off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality, the ability to take one type of input and convert it into a variety of outputs, transforming the way we interact with information.

In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you, online and off, audible and visual, and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses.

But you can imagine things going a bit further (and they will). Let's say I want to see a video of how to fix something on my bike. The video doesn't exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online (in a user manual buried in a company's website, for example) and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.

These are the kinds of things that start to happen when you take the entire compendium of human knowledge (knowledge that's previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere) and introduce a model into all that. A model that maybe can't understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different, hopefully helpful, ways. Ways that a mere index could not.

That's what we're on the cusp of, and what we're starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It's all changing so quickly. Hang on, just hang on.
  • WWW.ARCHITECTSJOURNAL.CO.UK
    Feltham Park masterplan
    The winning team, selected for the estimated £75,000 contract, will engage with local stakeholders and the community to co-design a masterplan for Feltham Park, which has been earmarked to become a new destination space for the borough. The project, planned to complete in 2027, will redesign and enlarge existing children's play areas while also introducing new adolescent-to-young-adult spaces providing risk-based play and sports such as BMX. New active cooling and climate resilience measures will also be introduced, along with initiatives to increase biodiversity and to highlight local heritage. According to the brief: "The appointed team will have demonstrable experience of co-design and delivering urban parks and green spaces through the project cycle from concept design to completion of construction. The team will understand that the successful delivery of high-quality public spaces must be co-created with a range of stakeholders and the local community. The team will have the expertise to use a variety of engagement methods to ensure all voices are heard. They will also hold the necessary skills to take these influences and shape the spaces into high-quality, aesthetically pleasing designs that will withstand the test of time." Located 17.5km west of Charing Cross, Hounslow is a large suburban borough that is home to more than 288,000 people. Local town centre landmarks include the Hounslow Civic Centre by Sheppard Robson. The latest contract notice comes seven months after Hounslow Council announced it was seeking an "exciting and ambitious" design team to rethink the retail offer at Grade I-listed Boston Manor House in west London. The local authority also launched a search for a design team for a public realm upgrade of Hounslow High Street in May last year. Henley Halebrown, with ZCD Architects and nimtim architects, won planning permission for a 209-home estate redevelopment in Brentford three years ago. Bids for the latest commission will be evaluated 60 per cent on quality and 40 per cent on price. Applicants must hold employers' liability insurance of £10 million, public liability insurance of £10 million and professional indemnity insurance of £5 million.
    Competition details
    Project title: FELTHAM ARENAS PHASE 2 MASTERPLAN
    Client:
    Contract value: £75,000
    First round deadline: 1pm, 4 February 2025
    Restrictions: Tbc
    More information: https://www.contractsfinder.service.gov.uk/notice/ac91f06a-4c18-44ae-b2eb-1b088f01b65c
  • WWW.ARCHITECTSJOURNAL.CO.UK
    Harvard GSD Wheelwright Prize 2025
    The Wheelwright Prize is open to all graduates around the world awarded a degree from a professionally accredited architecture program within the past 15 years. No links to Harvard or GSD are required. Submissions should include a portfolio of previous relevant work and a two-year research proposal that will involve travel outside of the applicant's home country. Applicants are encouraged to consider the various formats through which architectural research and practice can be expressed, including but not limited to built work, curatorial practice and written output. According to the brief: "The annual Wheelwright Prize is dedicated to fostering expansive, intensive design research that shows potential to make a significant impact on architectural discourse. The prize is open to emerging architects practicing anywhere in the world. The winning architect is expected to dedicate roughly two years of concentrated research related to their proposal, and to present a lecture on their findings at the conclusion of that research. Throughout the research process, Wheelwright Prize jury members and other GSD faculty are committed to providing regular guidance and peer feedback, in support of the project's overall growth and development." Founded in 1874, GSD is a specialist graduate school teaching architecture, landscape architecture, urban planning, urban design, real estate, design engineering and design studies. Its 13,000 alumni include Charles Jencks, Jeanne Gang and Ayla Karacebey. GSD completed a makeover of Richard Rogers' Grade II*-listed Wimbledon house in south London six years ago. Known as 22 Parkside, the building now serves as the residence and research base for international students under the Richard Rogers Fellowship, as well as a venue for GSD. The Wheelwright Prize, set up as a travelling fellowship in 1935 in honour of Arthur W Wheelwright, was relaunched in its current form 11 years ago. The prize is now open to architecture graduates around the world but was originally only open to GSD alumni, with previous recipients including IM Pei and Paul Rudolph. Last year's winner was RCA senior lecturer Thandi Loewenson for her proposal Black Papers: Beyond the Politics of Land, Towards African Policies of Earth and Air, which uses aerial surveying techniques to explore dynamic social and spatial relations in contemporary Africa. The 2023 prize was awarded to AA graduate, architect and filmmaker Jingju (Cyan) Cheng, whose proposal Tracing Sand: Phantom Territories, Bodies Adrift focused on the economic, cultural, and ecological impacts of sand mining and land reclamation. Judges for the 2025 prize will be announced this month. Submissions will be judged on the originality of the proposal, quality of design work, previous scholarly achievements, ability to fulfill the proposal and potential for the proposed project to make important and direct contributions to architectural discourse.
    Competition details
    Project title: Wheelwright Prize 2025
    Client: Harvard University Graduate School of Design
    Contract value: $100,000
    First round deadline: 9 February 2025
    Restrictions: The primary eligibility requirement is that applicants must have received a degree from a professionally accredited architecture program in the past 15 years. An affiliation with the GSD is not required
    More information: https://wheelwrightprize.org/
  • WWW.CNET.COM
    Snag These Cute and Convenient Anker Nano Chargers for Just $15 Apiece
    Even an all-day phone battery won't last forever. And if you don't want to be fighting for the last outlet at a coffee shop, it's a good idea to keep a portable charger on you. These handy little Anker Nano power banks are light enough that you'll hardly notice them in your bag or pocket, and right now you can snag them at an all-time low price. There are both USB-C and Lightning models on sale at Woot, and you can grab one for just $15 (50% off), or save a little more and grab a two-pack for just $27 (55% off). However, this offer is only valid through Jan. 7 -- or until it's sold out -- so we'd recommend getting your order in sooner rather than later. Anker makes a wide array of portable chargers, and it's a brand that frequently makes our list of the best portable power banks. With this charger, you don't need a cord: the Nano plugs directly into a phone via its built-in Lightning or USB-C connector. It's a portable bank with a 5,000-mAh capacity, and with a 12W output it can charge an iPhone 15 from 0 to 43% in just 30 minutes. It's also extremely compact at just over 3 inches long and 1.45 inches wide, and it weighs in at just 3.5 ounces, a big perk because a lot of portable banks can be quite heavy. We also have a list of the best portable power banks for iPhones and Android devices if you'd like to shop around a bit before deciding what to buy.
    Top deals available today, according to CNET's shopping experts (curated discounts worth shopping while they last):
    Apple AirTag, 4-pack: $70 (save $29)
    Costco 1-year Gold Star membership + $20 gift card: $65 (save $20)
    Anker 20-watt USB-C charger, 2-pack: $12 (save $7)
    Levoit LVAC-200 cordless vacuum: $160 (save $40)
    Peloton Bike: $1,145 (save $300)
    Why this deal matters: Anker makes quite a few of our favorite portable chargers on the market right now, so it's a brand that we trust and recommend for chargers and other mobile accessories. So even though the Nano may not have earned a spot on our list, it's still an excellent option and a great value at 50% or more off, which drops it to a new record-low price.