  • WWW.WIRED.COM
    Here’s What Happened to Those SignalGate Messages
    A lawsuit over the Trump administration’s infamous Houthi Signal group chat has revealed what steps departments took to preserve the messages—and how little they actually saved.
  • WWW.COMPUTERWORLD.COM
    DDoS-protection crisis looms as attacks grow
    Every year, distributed denial of service (DDoS) attacks break records for frequency, size, and sophistication, making it imperative that enterprises adopt powerful mitigation measures. To be effective, those measures require network and processing capacity that can handle the flood of requests DDoS attacks generate in their quest to overwhelm corporate servers. Defense mechanisms need to incorporate detection technology that can quickly distinguish attacks from legitimate, normal spikes in incoming business traffic. And they need to generate reports on DDoS incidents to help businesses plan future security enhancements and to provide data that supports audit and compliance requirements. Given the cost of tools and staffing to meet all these needs, enterprises cannot act alone. Their best strategy is to enlist dedicated DDoS services that relieve corporate security teams of assembling in-house specialized talent and technology to design, install, monitor, and maintain the necessary mitigation infrastructure on site. Such services can be both effective and painless to adopt.
    Addressing the DDoS problem
    While DDoS attacks certainly worry CIOs, CTOs, and CSOs, they should also concern CFOs and CEOs because of the well-known havoc they can wreak on revenues, productivity, and reputation. Mitigating DDoS attacks requires the ability to process incoming attack volume and sort harmful traffic from legitimate traffic. Because the cost of doing so is too large for most enterprises to bear, they must find trusted service partners with the scale and expertise to handle it for them. These partners monitor inbound traffic and redirect any suspicious activity to scrubbing centers that isolate the suspect traffic and drop it. Clean traffic gets routed to customer networks. And all of that must happen fast, to avoid intrusive delays that disrupt end users.
    The mitigation partner needs the network capacity and processing power to respond automatically to the attacks it detects — before any damage is done, and no matter the scale of the attack. Ideally, the service accomplishes all this with little or no additional hardware at corporate sites, while complementing in-house security measures already in place.
    Optimum offers a DDoS solution
    One such solution is Managed DDoS Protection from Optimum, which is rolled into Optimum Business Internet service. Within a minute of customer-bound traffic showing anomalies that indicate a DDoS attack, all traffic headed to that IP address is off-ramped to Optimum’s scrubbing centers. Once it has been sandboxed and sorted, malicious traffic is dropped and good traffic is routed to the customer, with that extra hop adding just 2ms of latency. Customers go about business as usual, often unaware an attack even occurred until they receive an incident report from Optimum detailing the type, size, and duration of the attack — information that can support audit and compliance requirements. The whole process requires no additional hardware, circuits, or tunneling configurations.
    This type of DDoS protection, especially when integrated with other internet services, can mitigate the threat of downtime without additional investment in staff or hardware. It’s a form of protection that’s too good to ignore. Learn more about how Optimum can protect you from DDoS attacks: visit our Business Secure Internet page.
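The detect-and-off-ramp loop described above can be sketched in a few lines. This is an illustrative toy, not Optimum's actual system (class and parameter names are hypothetical): a real scrubbing service operates at network scale, but the core idea is the same, flagging traffic that spikes far above its recent baseline so it can be diverted for inspection.

```python
from collections import deque


class AnomalyDetector:
    """Toy traffic-anomaly detector (illustrative only, not Optimum's system).

    Keeps a sliding window of requests-per-second samples and flags a
    sample as anomalous when it exceeds the recent mean by a multiple
    of the recent spread -- the basic signal a scrubbing service uses
    to decide when to off-ramp a target's traffic.
    """

    def __init__(self, window: int = 60, threshold: float = 4.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, rps: float) -> bool:
        """Record one sample; return True if it looks like an attack."""
        if len(self.samples) >= 10:  # need a baseline before judging
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            std = var ** 0.5
            if rps > mean + self.threshold * max(std, 1.0):
                return True  # divert this target's traffic to scrubbing
        self.samples.append(rps)
        return False


detector = AnomalyDetector()
# Normal business traffic hovers around 100 rps...
flags = [detector.observe(100 + (i % 7)) for i in range(30)]
# ...then a flood arrives.
attack = detector.observe(5000)
```

In practice the anomaly signal would be computed per destination IP at the provider's edge, and a positive result would trigger a routing change rather than a boolean return, but the thresholding logic is representative.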
  • APPLEINSIDER.COM
    Apple about to launch accessory discount with in-store recycling promotion
    Apple will soon encourage customers to bring old gear that wouldn't qualify for store credit into an Apple Store in exchange for a discount on AirPods, AirTags, and more. Image credit: Apple. It's spring, which means it's time to declutter our homes and discard unwanted items. Apple thinks so, too, and is planning a promotion to help you turn your old and unwanted Apple tech into credit for new Apple accessories. Continue Reading on AppleInsider | Discuss on our Forums
  • ARCHINECT.COM
    Carlo Ratti's 2026 Olympics torch design unveiled in Milan and Osaka
    Carlo Ratti has unveiled his design for the torches for next year’s Olympic and Paralympic Winter Games in Milan. The instruments, to be used in ceremonies seen around the world, are made from either bronze or aluminum, weigh close to 1.06 kg (about 2.3 pounds), and can be reused up to ten times by the respective torchbearers in each relay. Image courtesy Fondazione Milano Cortina 2026. Ratti, who is busy preparing the 2025 Venice Biennale, has named the torches Essential and says the design concentrates more on enhancing the ancient symbol of the games than on presenting a singular design object. "We were clear from the very beginning: it’s not the torch that matters, but the flame. So, we started thinking about how to design a torch that, in a way, isn’t a torch – and instead emphasizes the power and beauty of the flame," he says in the design announcement. As a ...
  • GAMINGBOLT.COM
    Destiny 2: The Edge of Fate Reveal Announced for May 6th
    Bungie’s Marathon is out later this year, but that’s not all the studio has in the works. On May 6th at 9 AM PT, it will reveal Destiny 2’s The Edge of Fate and further details for the upcoming year of content. The Edge of Fate could be the name of the upcoming expansion codenamed Apollo. Launching in mid-June, it includes new stories and locations (some brand new to the franchise), a new raid, the start of the next saga for Destiny 2, and the ability to choose your own story path. The year consists of two major updates, Arsenal and Surge, arriving one every three months and adding new and reprised activities, gear and Artifact Mods, events, new raid weapons, and more. Each will have a Rewards Pass containing a new Exotic weapon and ornament, cosmetics, Legendary gear ornaments, and resources. Of course, we’ve yet to learn what all this entails, so stay tuned for more details in May. Destiny 2 is available for Xbox One, Xbox Series X/S, PS4, PS5, and PC. It’s currently in the middle of Act 3 of its third Episode, Heresy.
    “Join us on May 6, 2025 as we reveal The Edge of Fate and the upcoming year of Destiny 2.” — Destiny 2 (@destinythegame.bungie.net) 2025-04-15T15:00:00.000Z
  • WWW.SMITHSONIANMAG.COM
    High School Student Discovers 1.5 Million Potential New Astronomical Objects by Developing an A.I. Algorithm
    The 18-year-old won $250,000 for training a machine learning model to analyze understudied data from NASA’s retired NEOWISE telescope. Matteo Paz with Caltech President Thomas F. Rosenbaum after winning the Regeneron Science Talent Search award. (Photo: California Institute of Technology) In a leap forward for astronomy, a researcher has developed an artificial intelligence algorithm and discovered more than one million objects in space by parsing through understudied data from a NASA telescope. The breakthrough is detailed in a study published in November in The Astronomical Journal. What the study doesn’t detail, however, is that the paper’s sole author is 18 years old. Matteo Paz from Pasadena, California, recently won the first place prize of $250,000 in the 2025 Regeneron Science Talent Search for combining machine learning with astronomy. Self-described as the nation’s “oldest and most prestigious science and math competition for high school seniors,” the contest recognized Paz for developing his A.I. algorithm. The young scientist’s tool processed 200 billion data entries from NASA’s now-retired Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) telescope. His model revealed 1.5 million previously unknown potential celestial bodies. “I was just happy to have had the privilege. Not only placing in the top ten, but winning first place, came as a visceral surprise,” the teenager tells Forbes’ Kevin Anderton. “It still hasn’t fully sunk in.” Paz’s interest in astronomy turned into real research when he participated in the Planet Finder Academy at the California Institute of Technology (Caltech) in summer 2022. 
There, he studied astronomy and computer science under the guidance of his mentor, Davy Kirkpatrick, an astronomer and senior scientist at the university’s Infrared Processing and Analysis Center (IPAC). Kirkpatrick had been working with data from the NEOWISE infrared telescope, which NASA launched in 2009 with the aim of searching for near-Earth asteroids and comets. The telescope’s survey, however, also collected data on the shifting heat of variable objects: rare phenomena that emit flashing, changing or otherwise dynamic light, such as exploding stars. It was Kirkpatrick’s idea to look for these elusive objects in NEOWISE’s understudied data. “At that point, we were creeping up towards 200 billion rows in the table of every single [NEOWISE] detection that we had made over the course of over a decade,” Kirkpatrick explains in a Caltech statement. “So, my idea for the summer was to take a little piece of the sky and see if we could find some variable stars. Then we could highlight those to the astronomic community, saying, ‘Here’s some new stuff we discovered by hand; just imagine what the potential is in the dataset.’” Paz, however, had no intention of doing it by hand. Instead, he worked on an A.I. model that sorted through the raw data in search of tiny changes in infrared radiation, which could indicate the presence of variable objects. Paz and Kirkpatrick continued working together after the summer to perfect the model, which ultimately flagged 1.5 million potential new objects, including supernovas and black holes. “Prior to Matteo’s work, no one had tried to use the entire (200-billion-row) table to identify and classify all of the significant variability that was there,” Kirkpatrick tells Business Insider’s Morgan McFall-Johnsen in an email. He adds that Caltech researchers are already making use of Paz’s catalog of potential variable objects, called VarWISE, to study binary star systems. 
“The variable candidates that he’s uncovered will be widely studied,” says Amy Mainzer, NEOWISE’s principal investigator for NASA, to Business Insider. As for the A.I. model, Paz explains that it might be applicable to “anything else that comes in a temporal format,” such as stock market chart analysis and atmospheric effects like pollution, according to the statement. It’s no surprise the teenager is interested in the climate—as fires burned in L.A. earlier this year, the Eaton Fire forced him and his family to evacuate their home, Forbes reports. Other teenage scientists recognized by the contest studied mosquito control, drug-resistant fungus, the human genome, and mathematics. “The remarkable creativity and dedication of these students bring renewed hope for our future,” Maya Ajmera, president of the Society for Science, which oversees the award, says in a statement. “Driven by their ingenuity, these young scientists are developing groundbreaking solutions that have the potential to transform our world and propel society forward.”
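The article doesn't publish the details of Paz's model, but the underlying task it automates, deciding whether a light curve varies more than its measurement errors allow, can be illustrated with a classical reduced chi-squared test against a constant-flux model. This is a simple statistical stand-in for his A.I. classifier, not a description of it; the function name and data are made up:

```python
def variability_score(fluxes, errors):
    """Reduced chi-squared of a light curve against a constant-flux model.

    A score near 1 is consistent with a steady source plus noise; a much
    larger score suggests genuine variability worth following up.
    (Illustrative stand-in for the ML classifier described in the article.)
    """
    # Inverse-variance weighted mean flux of the source
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * f for w, f in zip(weights, fluxes)) / sum(weights)
    # Chi-squared per degree of freedom
    chi2 = sum(((f - mean) / e) ** 2 for f, e in zip(fluxes, errors))
    return chi2 / (len(fluxes) - 1)


# A steady star scores low; one with a flare in its light curve scores high.
steady = variability_score([10.0, 10.1, 9.9, 10.05], [0.1] * 4)
flaring = variability_score([10.0, 10.1, 25.0, 10.05], [0.1] * 4)
```

At NEOWISE's scale (200 billion detections), the appeal of a learned model over a fixed statistic like this is that it can also classify the kind of variability, not just flag that it exists.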
  • VENTUREBEAT.COM
    Sam Altman at TED 2025: Inside the most uncomfortable — and important — AI interview of the year
    At TED 2025, OpenAI CEO Sam Altman faced tough questions on AI ethics, artist compensation, and the risks of autonomous agents in a tense interview with TED’s Chris Anderson, revealing new details about OpenAI’s explosive growth and future plans.
  • WWW.THEVERGE.COM
    ChatGPT now has a section for your AI-generated images
    OpenAI is adding an image library to ChatGPT to make it easier to access your AI-generated images, the company announced today. It’s rolling out to all Free, Plus, and Pro users on mobile and on the web. In a short video, OpenAI shows how it works. From the ChatGPT sidebar, you’ll be able to see a new “Library” section. Tap into it and you can see a grid of images that you’ve created. The video also briefly shows a button hovering at the bottom of the screen to make a new image.
    “All of your image creations, all in one place. Introducing the new library for your ChatGPT image creations—rolling out now to all Free, Plus, and Pro users on mobile and https://t.co/nYW5KO1aIg. pic.twitter.com/ADWuf5fPbj” — OpenAI (@OpenAI) April 15, 2025
    I already have the library available in the ChatGPT iOS app, and it works like OpenAI’s video shows. I don’t seem to have it yet on the web, but I would guess it will roll out there soon. The feature seems like it could be useful if you use ChatGPT to make a lot of images. Or if you just want to look back on your Studio Ghibli-inspired art or your really dull dolls.
  • WWW.MARKTECHPOST.COM
    LLM Reasoning Benchmarks are Statistically Fragile: New Study Shows Reinforcement Learning (RL) Gains often Fall within Random Variance
    Reasoning capabilities have become central to advancements in large language models, crucial in leading AI systems developed by major research labs. Despite a surge in research focused on understanding and enhancing LLM reasoning abilities, significant methodological challenges persist in evaluating these capabilities accurately. The field faces growing concerns regarding evaluation rigor, as non-reproducible or inconclusive assessments risk distorting scientific understanding, misguiding adoption decisions, and skewing future research priorities. In the rapidly evolving landscape of LLM reasoning, where quick publication cycles and benchmarking competitions are commonplace, methodological shortcuts can silently undermine genuine progress. While reproducibility issues in LLM evaluations have been documented, their continued presence — particularly in reasoning tasks — demands heightened scrutiny and more stringent evaluation standards to ensure that reported advances reflect genuine capabilities rather than artifacts of flawed assessment methodologies.
    Numerous approaches have emerged to enhance reasoning capabilities in language models, with supervised fine-tuning (SFT) and reinforcement learning (RL) being the primary methods of interest. Recent work has expanded upon the DeepSeek-R1 recipe through new RL algorithms like LCPO, REINFORCE++, DAPO, and VinePPO. Researchers have also conducted empirical studies exploring RL design spaces, data scaling trends, curricula, and reward mechanisms. Despite these advancements, the field faces significant evaluation challenges. Machine learning progress often lacks rigorous assessment, with many reported gains failing to hold up when tested against well-tuned baselines. RL algorithms are particularly susceptible to variations in implementation details, including random seeds, raising concerns about the reliability of benchmarking practices. 
Motivated by inconsistent claims in reasoning research, this study by researchers from the Tübingen AI Center, the University of Tübingen, and the University of Cambridge conducts a rigorous investigation into mathematical reasoning benchmarks, revealing that many recent empirical conclusions fail under careful re-evaluation. The analysis identifies surprising sensitivity in LLM reasoning pipelines to minor design choices, including decoding parameters, prompt formatting, random seeds, and hardware configurations. Small benchmark sizes contribute significantly to this instability, with single questions potentially shifting Pass@1 scores by over 3 percentage points on datasets like AIME’24 and AMC’23. This leads to double-digit performance variations across seeds, undermining published results. The study systematically analyzes these instability sources and proposes best practices for improving reproducibility and rigor in reasoning evaluations, providing a standardized framework for re-evaluating recent techniques under more controlled conditions. The study explores design factors affecting reasoning performance in language models through a standardized experimental framework. Nine widely used models across 1.5B and 7B parameter classes were evaluated, including DeepSeek-R1-Distill variants, DeepScaleR-1.5B, II-1.5B-Preview, OpenRS models, S1.1-7B, and OpenThinker7B. Using consistent hardware (A100 GPU, AMD CPU) and software configurations, models were benchmarked on AIME’24, AMC’23, and MATH500 datasets using Pass@1 metrics. The analysis revealed significant performance variance across random seeds, with standard deviations ranging from 5 to 15 percentage points. This instability is particularly pronounced in smaller datasets where a single question can shift performance by 2.5-3.3 percentage points, making single-seed evaluations unreliable. 
Based on rigorous standardized evaluations, the study reveals several key findings about current reasoning methodologies in language models. Most RL-trained variants of the DeepSeek R1-Distill model fail to deliver meaningful performance improvements, with only DeepScaleR demonstrating robust, significant gains across benchmarks. While RL training can substantially improve base model performance when applied to models like Qwen2.5, instruction tuning generally remains superior, with Open Reasoner-Zero-7B being the notable exception. In contrast, SFT consistently outperforms instruction-tuned baselines across all benchmarks and generalizes well to new datasets like AIME’25, highlighting its robustness as a training paradigm. RL-trained models show pronounced performance drops between AIME’24 and the more challenging AIME’25, indicating problematic overfitting to training distributions. Additional phenomena investigated include the correlation between response length and accuracy, with longer responses consistently showing higher error rates across all model types. This comprehensive analysis reveals that apparent progress in LLM-based reasoning has been built on unstable foundations, with performance metrics susceptible to minor variations in evaluation protocols. The investigation demonstrates that reinforcement learning approaches yield modest improvements at best and frequently exhibit overfitting to specific benchmarks, while supervised fine-tuning consistently delivers robust, generalizable performance gains. To establish more reliable assessment standards, standardized evaluation frameworks with Dockerized environments, seed-averaged metrics, and transparent protocols are essential. These findings highlight the critical need for methodological rigor over leaderboard competition to ensure that claimed advances in reasoning capabilities reflect genuine progress rather than artifacts of inconsistent evaluation practices. 
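The seed-sensitivity argument is easy to quantify: on a 30-question set like AIME'24, one question is worth 1/30 of the score, about 3.3 Pass@1 points, so a run that flips just a few questions produces the double-digit swings the authors report. A minimal sketch of the seed-averaged reporting the study recommends (the per-seed results below are made up for illustration):

```python
import statistics


def pass_at_1(correct_flags):
    """Pass@1 as a percentage: fraction of questions answered correctly."""
    return 100.0 * sum(correct_flags) / len(correct_flags)


# Hypothetical per-seed results on a 30-question AIME-style benchmark;
# each inner list marks which questions a single greedy run got right.
seed_runs = [
    [1] * 13 + [0] * 17,   # seed 0: 13/30
    [1] * 10 + [0] * 20,   # seed 1: 10/30
    [1] * 15 + [0] * 15,   # seed 2: 15/30
    [1] * 11 + [0] * 19,   # seed 3: 11/30
]
scores = [pass_at_1(run) for run in seed_runs]

# Flipping a single question moves the score by 100/30 ~= 3.3 points...
one_more = pass_at_1([1] * 14 + [0] * 16)
assert abs((one_more - scores[0]) - 100 / 30) < 1e-9

# ...so report the seed-averaged mean and spread, not a single run.
mean, std = statistics.mean(scores), statistics.stdev(scores)
print(f"Pass@1 = {mean:.1f} +/- {std:.1f} over {len(scores)} seeds")
```

With spreads like this, a single-seed comparison between two training recipes can easily invert under a different seed, which is exactly the failure mode the re-evaluation exposes.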
Mohammad Asjad is an intern consultant at Marktechpost. He is pursuing a B.Tech in mechanical engineering at the Indian Institute of Technology, Kharagpur, and is a machine learning and deep learning enthusiast who is always researching applications of machine learning in healthcare.