-
WWW.WIRED.COM
The Trial at the Tip of the Terrorgram Iceberg
Atomwaffen Division cofounder and alleged Terrorgram Collective member Brandon Russell is facing a potential life sentence for an alleged plot on a Baltimore electrical station. His case is only the beginning.
-
WWW.WIRED.COM
9 Best Juicers (2025): Centrifugal, Slow, Masticating
Stay hydrated with tried-and-tested juicers for refreshing daytime beverages and eye-brightening evening drinks.
-
WWW.NYTIMES.COM
DeepSeek Shows Meta's A.I. Strategy Is Working
The Silicon Valley giant was criticized for giving away its core A.I. technology two years ago for anyone to use. Now that bet is having an impact.
-
WWW.NYTIMES.COM
Why Trump Picked a Science Advisor, Michael Kratsios, Who Isn't a Scientist
Michael Kratsios, who served in the White House and Defense Department in the first Trump administration, is a policy specialist on artificial intelligence.
-
WWW.MACWORLD.COM
Mac Backup Guru Review

At a glance
Expert's Rating

Pros:
- Friendly user interface, with easily customizable inclusion and exclusion options for copying
- Handy scheduler app functions well, and the program offers more control over snapshot archives than Time Machine
- Excellent, fully featured 30-day demo lets you truly put the program through its paces

Cons:
- Lacks some polish, and full disk access could be offered in one fell swoop as opposed to offering access to specific drive folders
- Program doesn't offer a full count of exactly how much drive space will be needed on the destination drive
- Error messages sometimes wind up in the background behind the main window

Our Verdict
This goes to show what a one-person shop is capable of, and there are some good utilities coming out of MacDaddy. A little polish and easier access to full disk access could go a long way, but otherwise this is a backup utility worth looking into if you want more control over snapshot archives than Apple tends to offer with Time Machine. It's here, it's available at a good price, and it's worth your consideration.

They may not always have the marketing budget to continuously flash their products on your Facebook or YouTube feeds, but there's a lot to be said for the one-person indie developers cranking out Mac utilities. Case in point: Benedict Slaney, whose one-person MacDaddy outfit is currently responsible for nine separate Mac utilities on the market, including Mac Backup Guru, which over the years has drawn a considerable following as a nifty alternative to Apple's free Time Machine utility as well as the popular Carbon Copy Cloner and SuperDuper!
Mac Backup Guru requires Mac OS X 10.10 (Yosemite) or later to install and run, and currently retails for $9/£7.51 as of this writing (though it's listed at $29/£24.15); it is available as a fully functional 30-day demo. See how Mac Backup Guru compares to other Mac backup apps in our roundup of the best Mac backup software we've reviewed. For cloud-based backups, read: Best cloud storage for Mac.

Mac Backup Guru offers what's expected from a backup utility in terms of core backup and archival functionality and works to take it a step further, even if its execution could use work at times. The core functionality, assisted by a nice animated user interface, offers both a synchronized clone backup, which can create standard copies of folders and drives, and incremental backups, which track changes and back up only those changes, similar to what Apple does with its Time Machine archives. A handy scheduler feature works like a charm, making it easy to set up tasks per the application's calendar, and you can designate how many recent snapshots to keep before older snapshots are deleted. It's easy to set the source and destination volumes, and you can pick and choose specific volumes and folders to include and exclude; the application's easily customizable settings offer tight control over this.

Scheduling your backups is flexible. (Foundry)

Where Mac Backup Guru truly works to differentiate itself is in its creation and handling of snapshots. While Apple's Time Machine has been lauded for creating snapshots of your hard drive at intermittent times throughout the day, there's no hard and fast way to control when these are created or when they're deleted.
Mac Backup Guru offers this control, and along with a good backup/cloning feature, it not only moves all the local files over to where they need to be but also taps into iCloud-related tools, should you select this option, making a copy of all your iCloud-based files to help complete the cloned volume.

Still, there's room for improvement. Instead of asking for full disk access upon installation or first launch, Mac Backup Guru continuously asks for permission to access different folders, and while you can set this up in System Settings > General > Privacy & Security > Full Disk Access, the app itself never offers this option to the user in the first place. Perhaps the biggest error that needs a fix is that the app never truly does a head count of exactly how much space it'll need for a copy, and after it begins copying necessary files from iCloud, you sometimes find yourself rolling the dice as to whether you have enough space on the destination drive for the full copy. Couple this with the application sometimes placing its out-of-space error message in the background rather than the foreground, and things become awkward. Finally, as much as I respect the developer's efforts with Mac Backup Guru and his other apps, I never heard back after reaching out via the website's contact form, which was a little disappointing.

There are a lot of good features to be had with Mac Backup Guru, and the fact that it offers a tighter level of control over archives and snapshots than Apple's Time Machine is impressive, but the application still needs some polish to live up to what it truly could be. Still, you can see where it's drawn its fan base from, and Benedict Slaney puts forth an astounding effort as a one-person shop.
Combine this with an excellent 30-day demo that has no restrictions, doesn't bombard you with advertising in any way, and just lets you feel out what the application can do, and Mac Backup Guru is worth looking into.

Should you buy Mac Backup Guru?
As ambitious as Mac Backup Guru is, there are still some bugs to sort out and polish to be applied to make a good application truly great, and I can't advise that you run out and buy it yesterday. Still, the fully featured 30-day demo is outstanding to work with, and I would recommend you download, install, and try it out posthaste to see what you make of the program.
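The incremental approach described above, copying only files that changed since the last snapshot, can be sketched in a few lines. This is a generic illustration of the technique, not MacDaddy's actual implementation; the function names and the hash-based change detection are assumptions for the example.

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path: Path) -> str:
    """Hash a file's contents so unchanged files can be skipped."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def incremental_backup(source: Path, dest: Path, manifest: dict) -> list:
    """Copy only files whose contents changed since the last run.

    `manifest` maps relative paths to the digest recorded on the
    previous run; it is updated in place, and the list of copied
    relative paths is returned.
    """
    copied = []
    for path in source.rglob("*"):
        if not path.is_file():
            continue
        rel = str(path.relative_to(source))
        digest = file_digest(path)
        if manifest.get(rel) != digest:  # new or modified file
            target = dest / rel
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)  # preserves timestamps/permissions
            manifest[rel] = digest
            copied.append(rel)
    return copied
```

Run the function twice against an unchanged source and the second pass copies nothing, which is the whole point: a scheduled incremental job stays cheap between real changes.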
-
WWW.MACWORLD.COM
Some iPhone users will soon get a secret addition after upgrading to iOS 18.3

We thought we had learned all there is to know about iOS 18.3 when it arrived earlier this week, but apparently there's one more feature Apple didn't tell us about. As reported by Mark Gurman of Bloomberg, Apple has been working with SpaceX and T-Mobile US to bring support for the Starlink network to iOS 18.3, and customers can test it right now.

T-Mobile opened beta registration for Starlink in December, but it was limited to Samsung phones, including the Galaxy Z Flip 6, Galaxy Z Fold 6, and S24 phones, with no mention of the iPhone. However, it did promise that Starlink would support "the vast majority of modern smartphones."

There are some caveats. At the start, the service only supports texting via satellite and is limited to the United States. T-Mobile also says users will not need to hold their phone up to search for a signal, a clear reference to Apple's existing Emergency SOS via satellite feature, which requires a clear view of the sky and horizon. However, Apple's service also allows emergency calls as well as texting friends and family when outside normal coverage. That feature was introduced with the iPhone 14 and was supposed to be free for two years after activation of the iPhone; however, Apple hasn't announced any plans to charge for the service or to disable it on phones after two years have elapsed.

Gurman reports that T-Mobile plans to expand the beta in February, and Starlink parent SpaceX requested authority to begin beta-testing the service starting Monday. The Starlink beta is free for all users, but T-Mobile has indicated that the service will eventually cost a monthly fee. Starlink currently charges $120 per month for 30-150Mbps internet speeds but doesn't have an emergency satellite plan.
-
WWW.COMPUTERWORLD.COM
US officials probe China's DeepSeek AI amid security and privacy scrutiny

The Chinese AI app DeepSeek has come under intense scrutiny from both US and European regulators, raising alarms over national security risks, data privacy concerns, and potential intellectual property theft.

The White House confirmed on Tuesday that the National Security Council (NSC) is reviewing the AI model's implications as fears mount that Chinese advancements in AI could threaten the dominance of US-based AI firms, including OpenAI and Google.

"This is a wake-up call to the American AI industry," White House Press Secretary Karoline Leavitt said in response to reporters during her first press briefing, reinforcing the administration's commitment to ensuring US leadership in AI. Leavitt also confirmed during the briefing that she had personally discussed the matter with the NSC.

Meanwhile, Italy's data protection authority, the Garante, launched its own investigation into DeepSeek, demanding clarity on its data collection practices. The Italian regulator has given DeepSeek and its affiliated companies 20 days to respond, making it one of the first regulatory bodies to take direct action against the Chinese AI startup.

"The Authority, considering the potential high risk for the data of millions of people in Italy, has asked the two companies and their affiliates to confirm which personal data are collected, from which sources, for which purposes, what is the legal basis of the processing, and whether they are stored on servers located in China," the regulator said in a statement.

The Garante is seeking details about the personal data collected, its sources, its legal basis for processing, and whether any data is stored in China, raising broader concerns over data sovereignty and compliance with Europe's stringent privacy laws. Italy's move comes amid broader concerns about foreign AI models' compliance with regional regulations. The country had previously banned OpenAI's ChatGPT in 2023 over potential violations of EU privacy rules, demonstrating its proactive stance in regulating AI models that handle personal data.

"According to its privacy policy, DeepSeek explicitly says it can collect 'your text or audio input, prompt, uploaded files, feedback, chat history, or other content' and use it for training purposes," research firm Forrester said in a statement. "It also states it can share this information with law enforcement agencies [and] public authorities at its discretion."

Forrester suggested that enterprises "educate and inform your employees on the ramifications of using this technology and inputting personal and company information into it" and "align with product leaders on whether developers should be experimenting with it and whether the product should support its implementation without stricter privacy requirements."

IP theft concerns deepen in the US

Beyond privacy issues, US officials are also raising alarms over the possibility of intellectual property (IP) theft tied to DeepSeek. Trump's AI and crypto policy lead, David Sacks, suggested that China may have leveraged a technique called distillation, in which one AI model learns from another, more advanced system, raising the possibility that US-developed AI technology may have been replicated without authorization.

"I think one of the things you're going to see over the next few months is our leading AI companies taking steps to try and prevent distillation," Sacks said in a Fox News interview. "That would definitely slow down some of these copycat models."

DeepSeek's open source nature opens it up for exploration by both adversaries and enthusiasts, said Chester Wisniewski, director and global field CTO at Sophos. Like Llama, it can be played with and largely have the guardrails removed.
This could lead to abuse by cybercriminals, although it's important to note that running DeepSeek still requires far more resources than the average cybercriminal has.

During his administration, former President Joe Biden imposed sweeping export controls on AI-related technologies to slow China's AI progress. Now, Trump's administration is signaling it may take even stronger measures.

Market reactions and competitive pressure

The emergence of DeepSeek has already rattled the tech industry. On Monday, global investors dumped shares of major US AI companies, fearing the rise of a low-cost Chinese competitor. DeepSeek, which presents itself as a budget-friendly alternative to AI models like OpenAI's ChatGPT, has quickly gained traction, briefly overtaking ChatGPT as the top AI assistant on Apple's App Store in the US.

"More pressing for companies, however, is that, due to its cost-effectiveness, we are likely to see various products and companies adopt DeepSeek, which potentially carries significant privacy risks," Wisniewski added. "As with any other AI model, it will be critical for companies to make a thorough risk assessment, which extends to any products and suppliers that may incorporate DeepSeek or any future LLM. They also need to be certain they have the right expertise to make an informed decision."

Despite the concerns, Trump framed the development as an opportunity for American firms to step up their innovation efforts. "The release of DeepSeek AI from a Chinese company should be a wake-up call for our industries," Trump told House Republicans at their annual policy retreat. "We need to be laser-focused on competing to win."

Trump also suggested that competition from China could drive American firms to develop AI models at lower costs. "We always have the ideas. We're always first," he said, signaling confidence in US ingenuity.

Industry watchers, too, believe the rise of DeepSeek will accelerate competition. "DeepSeek's approach is expected to accelerate the shift toward open-source AI, compelling tech giants to either adapt or risk being left behind," said Muskaan Jain, senior analyst at Everest Group. "This shift will likely trigger price wars, faster AI development cycles, and heightened geopolitical tensions over AI dominance."

What's next?

With regulatory scrutiny intensifying on both sides of the Atlantic, DeepSeek's fate in key global markets remains uncertain. If the NSC's review results in policy recommendations, American AI firms may soon face even stricter export controls and regulations designed to prevent unauthorized AI knowledge transfers. Meanwhile, DeepSeek's compliance with European privacy regulations could dictate whether it maintains a foothold in markets like Italy and beyond. If found in violation, the Chinese AI firm could face penalties or access restrictions, echoing past actions against OpenAI.

For US enterprises, these developments signal both risks and opportunities. While competition from China grows, American firms will be under increasing pressure to maintain their technological edge while navigating evolving regulatory frameworks. In an industry moving at an unprecedented pace, policymakers and business leaders alike will need to balance innovation with security and compliance. "If DeepSeek secures a foothold in the AI market, Western firms must respond with responsible innovation and strategic investments to maintain technological leadership while ensuring AI remains ethical and secure in an increasingly fragmented global market," Jain added.
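The distillation technique Sacks describes is usually implemented by training a small "student" model to match the softened output distribution of a large "teacher." A minimal sketch of the core loss computation follows; the logits and temperature are illustrative assumptions, not any lab's actual training code.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; higher temperature flattens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's soft targets.

    Minimizing this pushes the student's output distribution toward the
    teacher's, which is how a smaller model can inherit behavior from a
    larger one without access to the original training data.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

The loss is smallest when the student reproduces the teacher's distribution exactly, which is why API providers worry that large volumes of query/response pairs are enough to train a "copycat" model.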
-
WWW.COMPUTERWORLD.COM
The AI bust is here

Hands up! Who remembers the dot-com bust? I do. It was March 10, 2000. The NASDAQ Composite index peaked at 5,048.62 points. That doesn't sound like much today, but it was up 800% from 1995. Then people started to get nervous about the true value of the dot-com businesses driving the market, interest rates started to rise, and by October 2002, the NASDAQ had fallen 78% from its peak.

We're not even a month into 2025, and the new Chinese AI program DeepSeek this week sparked Nvidia's $465 billion rout, the biggest single-day drop in US stock market history.

This is just the beginning. It's not that DeepSeek is so much better than OpenAI's ChatGPT or any of the other popular generative AI (genAI) tools. Maybe it is, maybe it isn't. (Just don't ask it about what happened at Tiananmen Square in 1989.)

No, what matters is that DeepSeek requires an order of magnitude less computing power to achieve similar results. By programmer Simon Willison's count, DeepSeek v3 trained on 2,788,000 H800 GPU hours at an estimated cost of $5,576,000. For comparison, Meta AI's Llama 3.1 405B (fewer parameters than DeepSeek v3's 685B) trained on 11x that: it required more than 30.8 million GPU hours and was trained on 15 trillion tokens of data.

What does all that mean? As Larry Dignan, editor-in-chief of Constellation Insights, explained, you can get API access to DeepSeek's R1 model for 14 cents per million tokens, compared to OpenAI's $7.50. In other words, large language model (LLM) pricing is going to collapse.

"The scary part is the LLM giants didn't have profit margins to begin with," he continued. "LLMs are going to commoditize in a hurry." That's bad news for companies such as OpenAI. Sure, the company's market value stands at $157 billion, but it's still losing billions of dollars. If OpenAI's customers decide they don't want to pay its rates when they can get much the same service for a fraction of the cost from DeepSeek, where does that leave OpenAI? What happens then?
Look at it this way: today's overheated stock market is driven by the Magnificent Seven companies, which, with one exception (Tesla), have one thing in common: they're all heavily invested in AI hype. Even Tesla is heavily invested in what Elon Musk calls real-world AI. And it's not just the Magnificent Seven; pick-and-shovel AI companies such as Oracle, Super Micro, and Nebius (formerly Yandex) are also in a world of trouble.

If anyone can build LLMs without a ton of money, why exactly is Nvidia worth trillions? Couldn't someone disrupt Microsoft, Meta, or Google? Indeed, Perplexity's AI chatbot is already better than Google at search. It's not just people like me, for whom search is part of what they do for a living; businesses are also turning to Perplexity from Google.

Bratin Saha, DigitalOcean's chief product and technology officer, sees the potential for a relatively small company to become a big player. "DeepSeek AI is the Android moment for AI (given that ChatGPT was likened to the iPhone moment for AI), as it shows groundbreaking AI products from the open-source community," Saha told me. "DeepSeek AI democratizes AI and cloud computing because it shows you do not need multi-billion dollar investments for compelling innovation. It lowers the barrier for small and medium enterprises and individual developers to work with AI." Exactly.

The way Stephen O'Grady, co-founder and industry analyst with RedMonk, sees it, enterprises have two major AI concerns: how trustworthy the technology is, and its cost. As O'Grady points out, "Enterprises have been shocked, in many cases, at the unexpected costs and unclear returns from some scale investments in AI." He's got that right. O'Grady continued, noting that DeepSeek challenges some core assumptions by enterprises.
These are:
- What if enterprises don't have to rely on closed, private models for leading-edge capabilities?
- What if training costs could be reduced by an order of magnitude or more?
- What if they don't require expensive, state-of-the-art hardware to run their models?

This changes everything for businesses that want to use AI and for the multi-billion dollar companies that want to sell AI to you. It's a sea change.

"What the markets reacted to was DeepSeek's ability to build a model that rivaled OpenAI's sophisticated o1 reasoning model and Anthropic's Sonnet model for pennies on the dollar on a fraction of the compute," Jim Zemlin, executive director of the Linux Foundation, wrote. "It also appears that DeepSeek did this using well-known techniques. There were no massive algorithmic breakthroughs, just very clever engineering."

DeepSeek, he noted, uses open-source software. Yes, technically, part of it isn't open source, but one of DeepSeek's models is under the MIT License. As Stefano Maffulli, the Open Source Initiative (OSI) executive director, pointed out, DeepSeek's models also fail to be open.

Still, "a small team in China took a fresh look at a problem and came up with a novel approach that reduced the cost of chain-of-thought reasoning by 50x (if DeepSeek's postings are accurate) and then published a paper fully describing their process, allowing the community to benefit from their learnings," Zemlin said. "We need MORE of this progress, not less. This is a struggle over open markets between the forces of open and the forces of closed."

That may be good for open source and for AI, but for our stock market, which is largely driven by AI, it's another matter. The Magnificent Seven accounted for over 50% of the S&P 500's gains in 2024. If they collapse, a good chunk of the market will follow. I'm not normally a bear, but I've seen this rodeo before. Oh, AI will survive, and eventually, it will prove useful and profitable.
But, in the meantime, I can see an AI crash coming, and it won't be pretty.
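The figures quoted in this piece are easy to sanity-check with back-of-the-envelope arithmetic, using only the numbers the article itself cites:

```python
# Training figures quoted above (per Simon Willison's count)
deepseek_gpu_hours = 2_788_000
deepseek_cost_usd = 5_576_000
llama_gpu_hours = 30_800_000

# Implied rental rate for an H800 GPU
cost_per_gpu_hour = deepseek_cost_usd / deepseek_gpu_hours  # $2.00/hour

# Llama 3.1 405B used roughly 11x the GPU hours of DeepSeek v3
compute_ratio = llama_gpu_hours / deepseek_gpu_hours  # ~11.05

# API pricing gap per million tokens quoted by Larry Dignan:
# $0.14 (DeepSeek R1) vs. $7.50 (OpenAI)
price_ratio = 7.50 / 0.14  # ~53.6x cheaper
```

The $2/hour implied GPU rate and the ~11x compute gap fall straight out of the quoted numbers, and the ~54x API price gap is what underlies the "pricing is going to collapse" claim.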
-
WWW.TECHNOLOGYREVIEW.COM
Three reasons Meta will struggle with community fact-checking

Earlier this month, Mark Zuckerberg announced that Meta will cut back on its content moderation efforts and eliminate fact-checking in the US in favor of the more "democratic" approach that X (formerly Twitter) calls Community Notes, rolling back protections that he claimed had been developed only in response to media and government pressure.

The move is raising alarm bells, and rightly so. Meta has left a trail of moderation controversies in its wake, from overmoderating images of breastfeeding women to undermoderating hate speech in Myanmar, contributing to the genocide of Rohingya Muslims. Meanwhile, ending professional fact-checking creates the potential for misinformation and hate to spread unchecked.

Enlisting volunteers is how moderation started on the Internet, long before social media giants realized that centralized efforts were necessary. And volunteer moderation can be successful, allowing for the development of bespoke regulations aligned with the needs of particular communities. But without significant commitment and oversight from Meta, such a system cannot contend with how much content is shared across the company's platforms, and how fast. In fact, the jury is still out on how well it works at X, which is used by 21% of Americans (Meta's platforms are significantly more popular; Facebook alone is used by 70% of Americans, according to Pew).

Community Notes, which started in 2021 as Birdwatch, is a community-driven moderation system on X that allows users who sign up for the program to add context to posts. Having regular users provide public fact-checking is relatively new, and so far results are mixed.
For example, researchers have found that participants are more likely to challenge content they disagree with politically, and that flagging content as false does not reduce engagement; but they have also found that the notes are typically accurate and can help reduce the spread of misleading posts.

I'm a community moderator who researches community moderation. Here's what I've learned about the limitations of relying on volunteers for moderation, and what Meta needs to do to succeed.

1. The system will miss falsehoods and could amplify hateful content

There is a real risk under this style of moderation that only posts about things that a lot of people know about will get flagged in a timely manner, or at all. Consider how a post with a picture of a death cap mushroom and the caption "Tasty" might be handled under Community Notes-style moderation. If an expert in mycology doesn't see the post, or sees it only after it's been widely shared, it may not get flagged as "Poisonous, do not eat," at least not until it's too late. Topic areas that are more esoteric will be undermoderated. This could have serious impacts on both individuals (who may eat a poisonous mushroom) and society (if a falsehood spreads widely).

Crucially, X's Community Notes aren't visible to readers when they are first added. A note becomes visible to the wider user base only when enough contributors agree that it is accurate by voting for it. And not all votes count. If a note is rated only by people who tend to agree with each other, it won't show up. X does not make a note visible until there's agreement from people who have disagreed on previous ratings. This is an attempt to reduce bias, but it's not foolproof. It still relies on people's opinions about a note and not on actual facts. Often what's needed is expertise.

I moderate a community on Reddit called r/AskHistorians. It's a public history site with over 2 million members and is very strictly moderated. We see people get facts wrong all the time.
Sometimes these are straightforward errors. But sometimes there is hateful content that takes experts to recognize. One time a question containing a Holocaust-denial dog whistle escaped review for hours and ended up amassing hundreds of upvotes before it was caught by an expert on our team. Hundreds of people, probably with very different voting patterns and very different opinions on a lot of topics, not only missed the problematic nature of the content but chose to promote it through upvotes. This happens with answers to questions, too. People who aren't experts in history will upvote outdated, truthy-sounding answers that aren't actually correct. Conversely, they will downvote good answers if they reflect viewpoints that are tough to swallow.

r/AskHistorians works because most of its moderators are expert historians. If Meta wants its Community Notes-style program to work, it should make sure that the people with the knowledge to make assessments see the posts, and that expertise is accounted for in voting, especially when there's a misalignment between common understanding and expert knowledge.

2. It won't work without well-supported volunteers

Meta's paid content moderators review the worst of the worst, including gore, sexual abuse and exploitation, and violence. As a result, many have suffered severe trauma, leading to lawsuits and unionization efforts. When Meta cuts resources from its centralized moderation efforts, it will be increasingly up to unpaid volunteers to keep the platform safe.

Community moderators don't have an easy job. On top of exposure to horrific content, as identifiable members of their communities, they are also often subject to harassment and abuse, something we experience daily on r/AskHistorians. However, community moderators moderate only what they can handle. For example, while I routinely manage hate speech and violent language, as a moderator of a text-based community I am rarely exposed to violent imagery. Community moderators also work as a team.
If I do get exposed to something I find upsetting or if someone is being abusive, my colleagues take over and provide emotional support. I also care deeply about the community I moderate. Care for community, supportive colleagues, and self-selection all help keep volunteer moderators' morale high(ish).

It's unclear how Meta's new moderation system will be structured. If volunteers choose what content they flag, will that replicate X's problem, where partisanship affects which posts are flagged and how? It's also unclear what kind of support the platform will provide. If volunteers are exposed to content they find upsetting, will Meta, the company that is currently being sued for damaging the mental health of its paid content moderators, provide social and psychological aid? To be successful, the company will need to ensure that volunteers have access to such resources and are able to choose the type of content they moderate (while also ensuring that this self-selection doesn't unduly influence the notes).

3. It can't work without protections and guardrails

Online communities can thrive when they are run by people who deeply care about them. However, volunteers can't do it all on their own. Moderation isn't just about making decisions on what's true or false. It's also about identifying and responding to other kinds of harmful content. Zuckerberg's decision is coupled with other changes to Meta's community standards that weaken rules around hateful content in particular. Community moderation is part of a broader ecosystem, and it becomes significantly harder to do when that ecosystem gets poisoned by toxic content.

I started moderating r/AskHistorians in 2020 as part of a research project to learn more about the behind-the-scenes experiences of volunteer moderators.
While Reddit had started addressing some of the most extreme hate on its platform by occasionally banning entire communities, many communities promoting misogyny, racism, and all other forms of bigotry were permitted to thrive and grow. As a result, my early field notes are filled with examples of extreme hate speech, as well as harassment and abuse directed at moderators. It was hard to keep up with.

But halfway through 2020, something happened. After a milquetoast statement about racism from CEO Steve Huffman, moderators on the site shut down their communities in protest. And to its credit, the platform listened. Reddit updated its community standards to explicitly prohibit hate speech and began to enforce the policy more actively. While hate is still an issue on Reddit, I see far less now than I did in 2020 and 2021. Community moderation needs robust support because volunteers can't do it all on their own. It's only one tool in the box.

If Meta wants to ensure that its users are safe from scams, exploitation, and manipulation in addition to hate, it cannot rely solely on community fact-checking. But keeping the user base safe isn't what this decision aims to do. It's a political move to curry favor with the new administration. Meta could create the perfect community fact-checking program, but because this decision is coupled with weakening its wider moderation practices, things are going to get worse for its users rather than better.

Sarah Gilbert is research director for the Citizens and Technology Lab at Cornell University.
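The visibility rule described in point 1, where a note is shown only once people who usually disagree both rate it helpful, can be illustrated with a toy function. X's real system scores notes with matrix factorization over the full rating history; the rater names, the pairwise-agreement input, and the threshold below are hypothetical simplifications used only to convey the intuition.

```python
def note_becomes_visible(ratings, past_agreement, threshold=0.5):
    """Toy version of the 'bridging' rule described above.

    `ratings` maps rater -> True/False ("helpful"/"not helpful") for one
    note; `past_agreement` maps (rater_a, rater_b) pairs to how often
    those two agreed on previous notes (0.0-1.0). The note is shown only
    if at least one pair of raters who usually *disagree* (agreement
    below `threshold`) both marked it helpful.
    """
    helpful = [rater for rater, vote in ratings.items() if vote]
    for i, a in enumerate(helpful):
        for b in helpful[i + 1:]:
            # pairs may be stored in either order
            key = (a, b) if (a, b) in past_agreement else (b, a)
            if past_agreement.get(key, 1.0) < threshold:
                return True  # consensus across a usual divide
    return False
```

Under this rule, approval from a like-minded cluster is never enough on its own, which is exactly the bias-reduction property the article describes, and also why esoteric topics with few cross-cluster raters can stall without a visible note.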