-
WWW.ZDNET.COM
One of the best Ring cameras I've tested is 50% off for a limited time
You can save $90 at Amazon on what is arguably one of the best Ring cameras available -- the Ring Stick Up Cam Pro.
-
WWW.ZDNET.COM
OpenAI tailored ChatGPT Gov for government use - here's what that means
ChatGPT will be making its way to federal, state, and local agencies. The new version comes with benefits - and concerns.
-
WWW.FORBES.COM
Judge Throws Out Facial Recognition Evidence In Murder Case

In a recent ruling that underscores the growing debate over artificial intelligence in criminal investigations, an Ohio judge has excluded facial recognition evidence in a murder case, effectively preventing prosecutors from securing a conviction. The decision raises broader concerns about the reliability and transparency of facial recognition technology in law enforcement and the legal challenges it presents when used in court, The Record reports.

The case involves the fatal shooting of Blake Story in Cleveland in February 2024. With no immediate leads, investigators turned to surveillance footage taken six days after the crime. They used Clearview AI, a controversial facial recognition software, to identify a suspect, Qeyeon Tolbert.

Acting on this identification, police obtained a search warrant for Tolbert's residence, where they recovered a firearm and other evidence. But as the trial approached, a flaw in the investigation came to light: police had not independently corroborated Tolbert's identity before executing the search warrant, nor had they disclosed the use of facial recognition in their affidavit.

On Jan. 9, the judge ruled in favor of a defense motion to suppress the evidence, stating that the warrant was granted without proper probable cause. With the firearm and other key evidence excluded, prosecutors were left with little to move forward on, forcing them to file an appeal. Without the suppressed evidence, the state has acknowledged that securing a conviction will be extremely difficult.

This case is one of the latest in a growing list of legal challenges surrounding facial recognition technology. While law enforcement agencies argue that AI-driven identification speeds up investigations, defense attorneys and privacy advocates warn that overreliance on these tools can lead to wrongful arrests, constitutional violations and breaches of due process.

How Facial Recognition Plays a Role in Law Enforcement

Facial recognition software has become an increasingly common tool in criminal investigations. Programs like Clearview AI allow law enforcement agencies to compare suspect images against vast databases of photos scraped from social media, public websites, and other online sources. With an estimated 30 billion images in its system, Clearview AI is one of the largest facial recognition databases in the world.

Proponents of the technology argue that it provides investigators with crucial leads when traditional methods fail. In cases where security footage captures an unknown suspect, facial recognition can rapidly generate potential matches, allowing law enforcement to act more quickly.

However, Clearview AI itself acknowledges that its system is not designed to be the sole basis for arrests. The company warns that its results should be treated as leads rather than definitive proof of identity. Yet a review of 23 police departments by The Washington Post found that at least 15 departments had made arrests based solely on facial recognition matches, raising concerns about the accuracy of these systems and the due diligence of law enforcement.

The Challenges of Using Facial Recognition in Court

Despite its growing use, facial recognition remains controversial, particularly when it serves as the foundation for search warrants and arrests.
Legal experts point to several key concerns:

Accuracy and Bias Issues: Facial recognition technology has been shown to be less accurate for people of color, women, and older adults, increasing the risk of wrongful identifications. A 2020 study by the National Institute of Standards and Technology found that many facial recognition systems exhibit racial and gender biases, leading to a higher rate of false positives for Black and Asian individuals.

Lack of Transparency: Some police departments do not disclose their use of facial recognition to suspects, defense attorneys, or even judges. This lack of transparency can violate due process rights, preventing defendants from fully challenging the evidence against them.

Legal Admissibility Issues: Courts are increasingly skeptical of facial recognition evidence, particularly when it is the sole or primary basis for a search warrant. In this case, the Ohio judge ruled that because police had not independently verified Tolbert's identity before obtaining a warrant, the search and seizure of evidence violated his Fourth Amendment rights.

Privacy and Surveillance Concerns: The widespread use of facial recognition raises broader questions about mass surveillance. Critics warn that if unchecked, these technologies could enable warrantless tracking of individuals in public spaces, blurring the lines between necessary policing and civil liberties violations.

The Future of Facial Recognition in Criminal Cases

The Ohio case is a warning sign for law enforcement agencies relying on facial recognition as a core investigative tool. While AI-driven identification can assist in narrowing down suspects, courts are signaling that its use must be accompanied by traditional investigative work to establish probable cause.

As more cases challenge the validity of AI-generated identifications, legal frameworks around facial recognition will likely evolve. Some states have already enacted restrictions on the technology. Maine, Massachusetts, and Illinois have passed laws limiting or banning law enforcement use of facial recognition without a warrant, citing privacy concerns.

For now, the Ohio ruling is a reminder that while AI can assist human decision-making, it cannot replace the fundamental principles of due process. As courts continue to scrutinize the use of facial recognition in criminal cases, law enforcement agencies will need to ensure that they use these tools responsibly, with proper oversight and adherence to constitutional protections.
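To make concrete why a facial recognition match is a lead rather than proof of identity, here is a minimal, hypothetical sketch of the kind of embedding-and-threshold comparison such systems perform. The random embeddings, the 512-dimensional size, and the 0.6 threshold are illustrative assumptions, not a description of Clearview AI's actual pipeline.

```python
# Hypothetical sketch: embedding-based face matching returns ranked *leads*,
# not identifications. The embeddings here are random stand-ins for the vectors
# a real face-recognition model would produce; the 0.6 threshold is illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(probe: np.ndarray, gallery: dict, threshold: float = 0.6) -> list:
    """Return gallery entries whose similarity to the probe exceeds the threshold,
    sorted best-first. Anything returned still requires independent corroboration."""
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    return sorted([s for s in scores if s[1] >= threshold],
                  key=lambda s: s[1], reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = {f"person_{i}": rng.normal(size=512) for i in range(1000)}
    probe = gallery["person_42"] + rng.normal(scale=0.3, size=512)  # noisy capture
    for name, score in rank_candidates(probe, gallery)[:5]:
        print(f"lead: {name}  similarity={score:.2f}")
```

Anything the threshold lets through is only a candidate; independent corroboration is the step the Ohio court found missing.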
-
WWW.FORBES.COM
Alibaba Unveils Qwen 2.5: A DeepSeek Rival?
Chinese internet company Alibaba launches Qwen 2.5 generative AI model, taking aim at its fellow China-based competitor DeepSeek-V3.

There's been an escalation in the generative AI large language model wars as Alibaba's Qwen 2.5 launched Wednesday. This latest AI salvo from China-based Alibaba is directly aimed at its in-country rival DeepSeek, which launched its own AI--DeepSeek-V3--in December 2024 and its R1 version in mid-January.

What sets DeepSeek-V3 apart from other foundation AI models such as Claude, ChatGPT, Gemini, Llama and Perplexity is that its unique design came online much faster than the dominant players and required much less computing power to train compared to the other systems.

Why Alibaba Qwen 2.5 Launched

Because of its upgraded algorithm architecture, the V3 model reportedly produces results comparable to existing LLMs; however, the company states that it was able to train DeepSeek-V3 for less than $6 million using older Nvidia H800 GPU chips that debuted almost two years ago--that's almost a lifetime within tech circles. Shortly after its release on January 20th, the DeepSeek-R1 AI assistant--powered by V3--became the top download within Apple's Top Free App category.

On Tuesday, the reality of this achievement sank in on Wall Street as investors sold off nearly $600 billion worth of Nvidia stock--questioning whether pricey next-gen GPUs such as its H200 and Blackwell processors will even be necessary. Meanwhile, Meta has reportedly scrambled to establish Llama war rooms to try to reverse engineer how the latest DeepSeek rollouts debuted so fast and cheap.

Alibaba Qwen 2.5 Versus DeepSeek-V3

But the premiere of DeepSeek's latest innovations didn't just catch U.S.-based AI developers and chip makers off guard. Media outlets suggest that it spurred an AI upgrade by TikTok owner ByteDance and this latest AI launch of Alibaba's Qwen 2.5. It's reported that Alibaba specifically called out DeepSeek in a WeChat post stating that Qwen 2.5 outperforms V3.

While it's too early to tell which AI model from China will come out on top, there are concerns surfacing about potential risks for both platforms. Issues that plagued China-owned TikTok are being raised regarding Qwen and DeepSeek-V3: data security, privacy, potential misreporting of performance stats, and separate allegations from OpenAI and Microsoft of possible intellectual property theft--which could call into question whether V3 was trained from scratch or leveraged other AI models.
-
TIME.COM
Why DeepSeek Is Sparking Debates Over National Security, Just Like TikTok
By Andrew R. Chow
Updated: January 29, 2025 12:00 PM EST | Originally published: January 29, 2025 11:28 AM EST

The fast-rising Chinese AI lab DeepSeek is sparking national security concerns in the U.S., over fears that its AI models could be used by the Chinese government to spy on American civilians, learn proprietary secrets, and wage influence campaigns. In her first press briefing, White House Press Secretary Karoline Leavitt said that the National Security Council was "looking into" the potential security implications of DeepSeek. This comes amid news that the U.S. Navy has banned use of DeepSeek among its ranks due to potential security and ethical concerns.

DeepSeek, which currently tops the Apple App Store in the U.S., marks a major inflection point in the AI arms race between the U.S. and China. For the last couple of years, many leading technologists and political leaders have argued that whichever country develops AI the fastest will have a huge economic and military advantage over its rivals. DeepSeek shows that China's AI has developed much faster than many had believed, despite efforts from American policymakers to slow its progress.

However, other privacy experts argue that DeepSeek's data collection policies are no worse than those of its American competitors--and worry that the company's rise will be used as an excuse by those firms to call for deregulation. In this way, the rhetorical battle over the dangers of DeepSeek is playing out on similar lines as the in-limbo TikTok ban, which has deeply divided the American public.

"There are completely valid privacy and data security concerns with DeepSeek," says Calli Schroeder, the AI and Human Rights lead at the Electronic Privacy Information Center (EPIC). "But all of those are present in U.S. AI products, too."

Concerns over data

DeepSeek's AI models operate similarly to ChatGPT, answering user questions thanks to a vast amount of data and cutting-edge processing capabilities. But its models are much cheaper to run: the company says that it trained its R1 model on just $6 million, which is a good deal less than the cost of comparable U.S. models, Anthropic CEO Dario Amodei wrote in an essay.

DeepSeek has built many open-source resources, including the LLM v3, which rivals the abilities of OpenAI's closed-source GPT-4o. Some people worry that by making such a powerful technology open and replicable, it presents an opportunity for people to use it more freely in malicious ways: to create bioweapons, launch large-scale phishing campaigns, or fill the internet with AI slop. However, there is another contingent of builders, including Meta's VP and chief AI scientist Yann LeCun, who believe open-source development is a more beneficial path forward for AI.

Another major concern centers upon data. Some privacy experts, like Schroeder, argue that most LLMs, including DeepSeek, are built upon sensitive or faulty databases: information from data leaks of stolen biometrics, for example. David Sacks, President Donald Trump's AI and crypto czar, accused DeepSeek of leaning on the output of OpenAI's models to help develop its own technology.

There are even more concerns about how users' data could be used by DeepSeek. The company's privacy policy states that it automatically collects a slew of input data from its users, including IP and keystroke patterns, and may use that to train its models. Users' personal information "is stored in secure servers located in the People's Republic of China," the policy reads.

For some Americans, this is especially worrying because generative AI tools are often used in personal or high-stakes tasks: to help with their company strategies, manage finances, or seek health advice. That kind of data may now be stored in a country with few data rights laws and little transparency with regard to how that data might be viewed or used. "It could be that when the servers are physically located within the country, it is much easier for the government to access them," Schroeder says.

One of the main reasons TikTok was initially banned in the U.S. was concern over how much data the app's Chinese parent company, ByteDance, was collecting from Americans. If Americans start using DeepSeek to manage their lives, the privacy risks will be akin to "TikTok on steroids," says Douglas Schmidt, the dean of the School of Computing, Data Sciences and Physics at William & Mary. "I think TikTok was collecting information, but it was largely benign or generic data. But large language model owners get a much deeper insight into the personalities and interests and hopes and dreams of the users."

Geopolitical concerns

DeepSeek is also alarming those who view AI development as an existential arms race between the U.S. and China. Some leaders argued that DeepSeek shows China is now much closer to developing AGI--an AI that can reason at a human level or higher--than previously believed. American AI labs like Anthropic have safety researchers working to mitigate the harms of these increasingly formidable systems. But it's unclear what kind of safety research team DeepSeek employs.

The cybersecurity of DeepSeek's models has also been called into question. On Monday, the company limited new sign-ups after saying the app had been targeted with a large-scale malicious attack.

Well before AGI is achieved, a powerful, widely used AI model could influence the thought and ideology of its users around the world. Most AI models apply censorship in certain key ways, or display biases based on the data they are trained upon. Users have found that DeepSeek's R1 refuses to answer questions about the 1989 massacre at Tiananmen Square, and asserts that Taiwan is a part of China. This has sparked concern from some American leaders about DeepSeek being used to promote Chinese values and political aims--or wielded as a tool for espionage or cyberattacks.

"This technology, if unchecked, has the potential to feed disinformation campaigns, erode public trust, and entrench authoritarian narratives within our democracies," Ross Burley, co-founder of the nonprofit Centre for Information Resilience, wrote in a statement emailed to TIME.

AI industry leaders, and some Republican politicians, have responded by calling for massive investment into the American AI sector. President Trump said on Monday that DeepSeek should be "a wake-up call for our industries that we need to be laser-focused on competing to win." Sacks posted on X that DeepSeek R1 shows the AI race will be very competitive and that President Trump was right to rescind the Biden EO, referring to Biden's AI Executive Order which, among other things, drew attention to the potential short-term harms of developing AI too fast.

These fears could lead to the U.S. imposing stronger sanctions against Chinese tech companies, or perhaps even trying to ban DeepSeek itself. On Monday, the House Select Committee on the Chinese Communist Party called for stronger export controls on technologies underpinning DeepSeek's AI infrastructure.

But AI ethicists are pushing back, arguing that the rise of DeepSeek actually reveals the acute need for industry safeguards. "This has the echoes of the TikTok ban: there are legitimate privacy and security risks with the way these companies are operating. But the U.S. firms who have been leading a lot of the development of these technologies are similarly abusing people's data. Just because they're doing it in America doesn't make it better," says Ben Winters, the director of AI and data privacy at the Consumer Federation of America. "And DeepSeek gives those companies another weapon in their chamber to say, 'We really cannot be regulated right now.'"

As ideological battle lines emerge, Schroeder, at EPIC, cautions users to be careful when using DeepSeek or other LLMs. "If you have concerns about the origin of a company," she says, "be very, very careful about what you reveal about yourself and others in these systems."
-
WWW.TECHSPOT.COM
AI creates glowing protein that would've taken nature 500 million years to evolve
What just happened? Scientists have used AI to design the blueprints for an entirely new protein that has never existed in nature. This AI-generated protein, dubbed esmGFP, would have taken half a billion years to evolve naturally. And the best part? It glows.

In a study published in Science, researchers detailed how they used advanced language models to fast-forward evolution, simulating hundreds of millions of years of genetic changes in just hours. The result? A synthetic version of green fluorescent protein (GFP) with an amino acid sequence only 58 percent similar to its closest natural counterpart.

For the uninitiated, GFPs are biomolecules that give certain marine creatures like jellyfish their vivid glow. Scientists frequently use them as biomarkers, attaching their genes to other proteins of interest to make them fluoresce under a microscope.

In nature, these glowing proteins evolved over eons through random genetic mutations. But the AI model behind this breakthrough, called ESM3, took a radically different approach. Instead of evolving proteins step by step like life on Earth, it was trained on a dataset of 2.78 billion known proteins using one trillion teraflops of computing power to generate entirely new hypothetical sequences.

For esmGFP specifically, the AI coded 96 mutations that would take over 500 million years to naturally arise in organisms like jellyfish or corals.

Alex Rives, co-founder of EvolutionaryScale, told Live Science that by inferring the fundamental biological rules, their model can create functional proteins that defy the constraints of natural evolution. Rives and his colleagues previously worked on precursor models to ESM3 at Meta before founding EvolutionaryScale last year. Just months later, the startup raised $142 million to advance its research.

However, not everyone is entirely convinced. Tiffany Taylor, an evolutionary biologist at the University of Bath, acknowledged to Live Science that the model holds promise for drug development and bioengineering. Still, she cautioned that AI protein models don't account for the complex selective forces shaping entire organisms.

Despite these concerns, the study highlights how AI could dramatically expand the range of synthetic proteins available, with potential applications in medicine and environmental science.

"The model has the potential to accelerate discovery across a broad range of applications, ranging from the development of new cancer treatments to creating proteins that could help capture carbon," a press release from last year noted.
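As a rough illustration of what an amino acid sequence being "only 58 percent similar" means, here is a minimal sketch that computes percent identity between two already-aligned sequences, position by position. The short sequences below are made up for demonstration, and real comparisons such as esmGFP versus its closest natural GFP rely on proper sequence alignment rather than this naive pairing.

```python
# Illustrative sketch: percent identity between two pre-aligned amino acid
# sequences, compared position by position. The sequences below are invented
# stand-ins; real studies align full-length proteins before comparing them.
def percent_identity(seq_a: str, seq_b: str) -> float:
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    # Count only positions where neither sequence has an alignment gap ("-").
    compared = sum(1 for a, b in zip(seq_a, seq_b) if a != "-" and b != "-")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b and a != "-")
    return 100.0 * matches / compared

if __name__ == "__main__":
    natural  = "MSKGEELFTGVVPILVELDGDVNGHKFSVSG"   # made-up fragment
    designed = "MSKGAALFTGVVPYLVELNGEVNGHRFSVSG"   # made-up fragment
    print(f"identity: {percent_identity(natural, designed):.1f}%")
```

This kind of comparison against the closest known natural GFP is what the 58 percent figure quoted above refers to.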
-
WWW.TECHSPOT.COM
Comcast introduces ultra-low latency tech for Xfinity Internet subscribers
In a nutshell: Comcast is rolling out an ultra-low lag connectivity experience designed to improve responsiveness when video chatting, playing games, and using virtual reality. Initially, customers will see the benefits when using select apps and services from partners like Apple, Nvidia, Meta, and Valve, although eventually any interested partner will be able to take advantage of it.

Jason Livingood, vice president of technology policy, product and standards at Comcast, told VentureBeat that they believe they can cut lag down from hundreds of milliseconds to around 22-25 milliseconds. The result would be a smoother, more responsive end-to-end experience.

Initially, customers will be able to experience the benefits of the tech when using FaceTime video chat on Apple devices, with Meta's mixed reality headsets, and when playing many games on Steam or Nvidia's GeForce Now platform.

As The Verge highlights, the tech is based on an open standard known as L4S--short for low latency, low loss, scalable throughput. The full technical details are a bit complex, but in short, L4S gives packets a much more efficient way to inform devices about congestion so they can immediately start taking steps to fix it.

A delay of 25 milliseconds here or there may not sound like much, but it quickly becomes apparent when packets get hung up time after time as devices communicate back and forth with each other. It is why your 1 Gbps connection can feel slow at times. That advertised 1 Gbps relates to bandwidth (capacity), or how much data can be transferred at once--not necessarily how fast it can reach its destination.

Comcast initiated low-latency field trials in mid-2023, and according to the company, those tests met or exceeded expectations. Comcast said the initial rollout has begun and will expand to cities including Chicago, Atlanta, Colorado Springs, San Francisco, Philadelphia, and Rockville, Maryland, over the coming months.

Once the tech is fully deployed, it will be available to all Xfinity Internet customers, we are told.
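The bandwidth-versus-latency distinction above is easy to observe for yourself. The sketch below times a few TCP handshakes to a host and reports the median round-trip delay, a number that barely changes whether your link is 100 Mbps or 1 Gbps. The host name, port, and sample count are placeholders for illustration; this is not Comcast's or L4S's measurement tooling.

```python
# Illustrative sketch: round-trip latency is about how long a small exchange
# takes, not how much data the pipe can carry. Times a TCP handshake to a host
# of your choosing; host, port, and sample count below are placeholders.
import socket
import statistics
import time

def connect_latency_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Median time, in milliseconds, to complete a TCP handshake with host:port."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass  # handshake completed; we only care about the elapsed time
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)

if __name__ == "__main__":
    print(f"median connect latency: {connect_latency_ms('example.com'):.1f} ms")
```

L4S aims at the queuing-delay portion of that figure by signaling congestion earlier, rather than adding any raw capacity.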
-
WWW.DIGITALTRENDS.COM
The Ryzen 9 9950X and Asus TUF motherboard bundle has a $100 discount
There's a fantastic deal on the AMD Ryzen 9 9950X, considered one of the best processors around, bundled with an Asus TUF Gaming X870-Plus motherboard. Right now, you can buy the bundle for $836 at B&H Photo Video, so you're saving $100 off the usual price of $936. A great bundle for anyone keen to upgrade their PC, the deal is only available for the next two days, so you don't have long to make a decision. We're here to tell you all it has to offer to make that choice easier.

We spent some extensive time in our AMD Ryzen 9 9950X review discussing its merits alongside the Ryzen 9 9900X. At the time of launch, it was the flagship of AMD's Zen 5 range, and we found it to be far more efficient than Zen 4, with much improved performance in productivity apps over Intel. With 16 cores and 32 threads, on paper it's the ideal CPU for all your productivity-based tasks, which is good because it's not necessarily the best value choice for gaming (but that changes when it's on sale). Our full review gives you benchmark figures, and it shows a strong boost compared to older CPUs.

We also have a look at the differences between the Ryzen 9 9950X and the Intel Core i9-14900K. After all, the battle of AMD and Intel continues, so it's important to know what's best for your situation while investing in the best AMD processor for your plans.

Besides the CPU, never overlook your motherboard choice. It's vital to pick the right motherboard for your plans, even though it's an easy mistake to ignore the motherboard's capabilities. With the Asus TUF Gaming X870-Plus, you get four DDR5 slots supporting up to 192GB of memory, a place to install two M.2 PCIe 5.0 SSDs as well as an M.2 2280 PCIe 4.0 drive, and there are two SATA III connectors too. Two USB4 ports on the rear panel mean you can connect 40 Gb/s compatible devices or a USB-C compatible 8K display. There are also three 10 Gb/s compatible USB-A ports, four USB-A Gen 1 ports, and one USB-A 2.0 port. There are some great overclocking features on the motherboard as well. Finally, you get Wi-Fi 7 support too.

A strong combo of hardware for anyone seeking an upgrade to their PC soon, the AMD Ryzen 9 9950X and Asus TUF Gaming X870-Plus motherboard bundle is down to $836 right now at B&H. It normally costs $936, so you save $100 off the regular price and score yourself some great hardware. Check it out now before the deal ends on January 31.
-
WWW.DIGITALTRENDS.COM
This 65-inch Samsung TV deal drops the price below $400
Why should you spend thousands of dollars on a flagship TV when you can score a great deal on a Samsung 65-inch 4K LED? Picture quality is one of the biggest sacrifices you make when opting for a lesser-priced set, but today, Best Buy is offering a fantastic sale on a big Samsung: For a limited time, when you purchase the Samsung 65-inch DU6900 Series 4K LED at Best Buy, you'll only wind up paying $380. The full MSRP on this model is $470.

If you're looking for a great bright-room TV that delivers strong peak brightness levels, effective anti-glare reduction, and solid SDR performance, the Samsung DU6900 Series is a great choice. Samsung's picture processing and 4K upscaling do a nice job of enhancing whatever sources you feed the TV, so you'll get the best visuals regardless of the component you're connecting. And while the refresh rate is capped at 60Hz, Samsung's Motion Xcelerator tech provides improved motion clarity.

The DU6900 Series is a good TV for modern consoles like the PS5 and Xbox, thanks to basic VRR capabilities. Expect low input lag and fast response times. Other noteworthy features include Samsung's Object Tracking Sound Lite for immersive audio, Q-Symphony for linking the TV speakers to a compatible Samsung soundbar, and Tizen OS for all things Netflix, AirPlay, and web-connected.

It won't be long before this 65-inch TV is back to full price, so today might be your only shot to score this discount. Take $90 off the Samsung 65-inch DU6900 Series 4K LED when you purchase right now.

Want even more home theater ideas? Check out our lists of the best Samsung TV deals, best TV deals, and best soundbar deals for even more awesome markdowns on AV!