  • Delta Air Lines plane crash update: What we know about the flipped landing at Toronto's airport
    www.fastcompany.com
    A passenger jet flipped onto its roof while landing in Toronto, Canada, the fourth major aviation accident in North America in the past three weeks. While at least 18 people were injured, all 80 people on board the Delta Air Lines flight from Minneapolis survived the crash Monday. Here are some things to know about the crash:

    What caused the airplane to flip?
    Communications between the tower at Toronto's Pearson International Airport and the pilot were normal on approach, and right now it's not clear what went wrong when the plane touched down.

    Were strong winds a factor in the crash?
    Toronto Pearson Fire Chief Todd Aitken has said the runway was dry and there were no crosswind conditions. Audio recordings indicate that the control tower warned the pilots of a possible air flow bump on the approach. Winds were gusting up to 40 mph (65 kph) during the day at the airport, according to the Meteorological Service of Canada. But airplanes and pilots should be equipped to handle those kinds of winds while landing, said John Cox, CEO of aviation safety consulting firm Safety Operating Systems in Florida.

    Were the passengers badly injured?
    Those hurt had relatively minor injuries, the airport's chief executive said. The airport fire chief said 18 passengers were taken to the hospital. An air ambulance operator said it had transported one pediatric patient and two adults to hospitals. Delta said Tuesday that some of those injured had been released.

    What happened inside the plane?
    One passenger told CBC News that he found himself upside down and still strapped into his seat after a forceful landing. Peter Carlson said he crashed onto the ceiling when he took off his seat belt and smelled gas. He and another man helped a mother and her young son out of the plane before getting out.

    Who is investigating?
    The U.S. Federal Aviation Administration said the Transportation Safety Board of Canada will lead the investigation. The National Transportation Safety Board in the U.S. said it was sending a team to assist.

    Is it safe to fly?
    The fourth major aviation accident in North America in less than a month has many people concerned about the safety of flying. Fatal crashes remain rare, and the track record of U.S. airlines is remarkably safe. But there have been deadly crashes recently around the world, and U.S. officials have been raising concerns about an overtaxed and understaffed air traffic control system for years.

    By JOHN SEEWER, Associated Press. Associated Press writers John Wawrow and Michael Casey contributed to this report.
  • Marvel Rivals US Dev Team Has Been Laid Off
    gamingbolt.com
    A hero shooter like Concord can come along and completely fail to grab attention, to the extent of an abrupt and painful demise. On the other hand, you have something like Marvel Rivals, which launched in December to widespread acclaim and indisputable success, having amassed over 20 million players in under two weeks. Even its developers aren't immune to the industry-wide plague of layoffs, however.

    Taking to LinkedIn, game director Thaddeus Sasser has revealed that Marvel Rivals' Seattle-based development team, which works alongside the game's core dev team in China (led by creative director Guangyun Chen), has been laid off. "This is such a weird industry," Sasser wrote. "My stellar, talented team just helped deliver an incredibly successful new franchise in Marvel Rivals for NetEase Games and were just laid off!"

    Level designer Jack Burrows also confirmed the layoffs on his own LinkedIn profile. Burrows wrote, "Welp, just got laid off from my job working on Marvel Rivals with NetEase. Was an enormous pleasure to work with my American coworkers who join me in this sad culling. Just couldn't dodge that big boot I guess, no matter how big the success of the gig."

    NetEase Games has made no official statement yet about the layoffs, but it's fair to say the news comes as a huge surprise. Marvel Rivals has consistently enjoyed high player numbers since its December release, and following the launch of Season 1 last month, the hero shooter peaked at over 644,000 concurrent players on Steam.
  • A New Tony Hawk Remaster is Coming, Professional Skater Claims
    gamingbolt.com
    Between Vicarious Visions being merged into Blizzard and Tony Hawk's Pro Skater 3+4 having reportedly been cancelled, the future wasn't exactly looking bright for Activision's beloved skating series. As luck would have it, however, it seems it's getting a second chance.

    A new Tony Hawk's Pro Skater remaster is in development, professional skater Tyshawn Jones has claimed. Jones, who was featured in Tony Hawk's Pro Skater 1+2's roster, recently appeared on the Breakfast Club podcast, where he claimed that a new Tony Hawk remaster is in the pipeline. Apparently, Jones will be included in the upcoming title's roster as well. "I'm in a Tony Hawk coming out, that's cool," he said (via VGC). "They got a new one they remastering so that's about to come out, I was in the last one."

    This isn't the first time we've seen hints of the Tony Hawk franchise returning from the dead (again). Tony Hawk himself said in September that he was working with Activision on something fans of the series would appreciate. Later that same month, he assured that there will be a future for the franchise. Neither Activision nor Microsoft has yet made an official announcement regarding Tony Hawk. Stay tuned for more updates in the coming weeks and months.
  • Microsoft AI Releases OmniParser V2: An AI Tool that Turns Any LLM into a Computer Use Agent
    www.marktechpost.com
    In the realm of artificial intelligence, enabling Large Language Models (LLMs) to navigate and interact with graphical user interfaces (GUIs) has been a notable challenge. While LLMs are adept at processing textual data, they often encounter difficulties when interpreting visual elements like icons, buttons, and menus. This limitation restricts their effectiveness in tasks that require seamless interaction with software interfaces, which are predominantly visual.

    To address this issue, Microsoft has introduced OmniParser V2, a tool designed to enhance the GUI comprehension capabilities of LLMs. OmniParser V2 converts UI screenshots into structured, machine-readable data, enabling LLMs to understand and interact with various software interfaces more effectively. This development aims to bridge the gap between textual and visual data processing, facilitating more comprehensive AI applications.

    OmniParser V2 operates through two main components: detection and captioning. The detection module employs a fine-tuned version of the YOLOv8 model to identify interactive elements within a screenshot, such as buttons and icons. Simultaneously, the captioning module uses a fine-tuned Florence-2 base model to generate descriptive labels for these elements, providing context about their functions within the interface. This combined approach allows LLMs to construct a detailed understanding of the GUI, which is essential for accurate interaction and task execution.

    A significant improvement in OmniParser V2 is the enhancement of its training datasets. The tool has been trained on a more extensive and refined set of icon captioning and grounding data, sourced from widely used web pages and applications. This enriched dataset improves the model's accuracy in detecting and describing smaller interactive elements, which are crucial for effective GUI interaction.
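Conceptually, the detection-plus-captioning flow described above can be sketched as follows. This is a stand-in sketch only: the real modules run YOLOv8 and Florence-2 on pixels, and all function and field names here are illustrative, not OmniParser's actual API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UIElement:
    bbox: tuple          # (x1, y1, x2, y2) in pixel coordinates
    kind: str            # e.g. "button", "icon"
    caption: str = ""    # functional description added by the captioner

def detect_elements(screenshot) -> List[UIElement]:
    """Stand-in for the fine-tuned YOLOv8 detector: returns bounding
    boxes of interactive elements found in the screenshot."""
    # The real pipeline runs object detection on the image here.
    return [UIElement(bbox=(10, 10, 90, 40), kind="button"),
            UIElement(bbox=(100, 10, 130, 40), kind="icon")]

def caption_element(element: UIElement) -> str:
    """Stand-in for the fine-tuned Florence-2 captioner: returns a
    functional description of the cropped element."""
    return f"a {element.kind} the agent can click"

def parse_screenshot(screenshot) -> List[dict]:
    """Combine detection and captioning into structured, LLM-readable data."""
    elements = detect_elements(screenshot)
    for el in elements:
        el.caption = caption_element(el)
    # This structured list is what gets handed to the LLM as text.
    return [{"bbox": el.bbox, "kind": el.kind, "caption": el.caption}
            for el in elements]

structured = parse_screenshot(screenshot=None)  # placeholder input
```

The key design point is that the LLM never sees pixels: it receives only the structured list of elements with positions and functional captions, which it can reason over like any other text.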
    Additionally, by optimizing the image size processed by the icon caption model, OmniParser V2 achieves a 60% reduction in latency compared to its previous version, with an average processing time of 0.6 seconds per frame on an A100 GPU and 0.8 seconds on a single RTX 4090 GPU.

    The effectiveness of OmniParser V2 is demonstrated through its performance on the ScreenSpot Pro benchmark, an evaluation framework for GUI grounding capabilities. When combined with GPT-4o, OmniParser V2 achieved an average accuracy of 39.6%, a notable increase from GPT-4o's baseline score of 0.8%. This improvement highlights the tool's ability to enable LLMs to accurately interpret and interact with complex GUIs, even those with high-resolution displays and small target icons.

    To support integration and experimentation, Microsoft has developed OmniTool, a dockerized Windows system that incorporates OmniParser V2 along with essential tools for agent development. OmniTool is compatible with various state-of-the-art LLMs, including OpenAI's 4o/o1/o3-mini, DeepSeek's R1, Qwen's 2.5VL, and Anthropic's Sonnet. This flexibility allows developers to utilize OmniParser V2 across different models and applications, simplifying the creation of vision-based GUI agents.

    In summary, OmniParser V2 represents a meaningful advancement in integrating LLMs with graphical user interfaces. By converting UI screenshots into structured data, it enables LLMs to comprehend and interact with software interfaces more effectively. The technical enhancements in detection accuracy, latency reduction, and benchmark performance make OmniParser V2 a valuable tool for developers aiming to create intelligent agents capable of navigating and manipulating GUIs autonomously.
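As a quick sanity check on the stated figures, the reported 60% reduction and 0.6 s/frame average together imply a previous-version latency of about 1.5 s/frame on the same A100 hardware. The 1.5 s value below is derived from the article's numbers, not reported separately by Microsoft.

```python
# Derived estimate: if the new latency is 0.6 s/frame after a 60% cut,
# the previous version's latency was 0.6 / (1 - 0.6) = 1.5 s/frame.
new_latency_s = 0.6     # reported A100 average for OmniParser V2
reduction = 0.60        # reported latency reduction vs. the prior version
implied_old_latency_s = new_latency_s / (1 - reduction)
```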
    As AI continues to evolve, tools like OmniParser V2 are essential in bridging the gap between textual and visual data processing, leading to more intuitive and capable AI systems.

    By Sana Hassan, Marktechpost.
  • DeepSeek in My Engineer's Eyes
    towardsai.net
    February 18, 2025. Author(s): Kelvin Lu. Originally published on Towards AI.

    It's been almost a year since my last post (June of last year, to be exact). The reason? I simply didn't come across anything I felt was exciting enough to share. Don't get me wrong; this isn't to say there hasn't been progress in AI, or that my past six months have been unproductive. On the contrary, there have been significant advancements in the field, and my own work has been quite fruitful.

    That said, I've noticed a growing disconnect between cutting-edge AI development and the realities of AI application developers. Take, for example, the U.S. government's $500 billion investment in the Stargate project. While it's an ambitious endeavour, does it really matter to most of us what specific technologies will be used? If this is the direction AI is heading, it feels like the forefront of innovation will increasingly become the domain of just two players: the U.S. and China. For the rest of the world, it doesn't matter whether you are an interested individual, company, or country; you just don't have the opportunity.

    Then there is the application-level technology, like RAG and AI agents. RAG, while useful, is ultimately a design pattern, not a complete, out-of-the-box solution. Because of its lack of reasoning capability, it is a pretty dumb solution. AI agents, on the other hand, hold a lot of promise but are still constrained by the reliability of LLM reasoning. From an engineering perspective, the core challenge for both lies in improving accuracy and reliability to meet real-world business requirements. Building a demo is one thing; scaling it to production is an entirely different beast.

    Everything changed when DeepSeek burst onto the scene a month ago. My experience felt like driving down a long, boring stretch of road at night.
    My eyes were half-closed, lulled by the dull hum of the engine. Then, out of nowhere, a roaring race car sped past me, kicking up a cloud of dust as it vanished into the distance in mere seconds. I sat there, wide-eyed, jaw dropped, staring at the haze left in its wake. That moment was a month ago, yet the shockwaves of that encounter still echo in my mind.

    DeepSeek has disrupted the world in countless ways. Some have labeled it a national security threat, a copycat, a tailgater, a data thief, a distiller, and so on. I dismiss these claims entirely. In the boxing ring, emotions can cloud judgment. If you turn emotional, you have already lost. When Tyson bit Holyfield's ear in front of billions of TV viewers, it was a moment of weakness, not strength.

    In this post, I want to shift the conversation to how DeepSeek is redefining the future of machine learning engineering. It has already inspired me to set new goals for 2025, and I hope it can do the same for other ML engineers. Let's explore what this means for our field and how we can rise to the challenge.

    AI Growth Pattern Redefined

    For a long time, it has been widely assumed that AI development is governed strictly by the scaling law: the idea that model performance improves with exponentially larger datasets and greater computational resources. This belief has not only created barriers for application developers but also raised serious questions about the sustainability of AI progress. When the U.S. government deems it necessary to invest $500 billion in the next generation of AI, one has to wonder: what is the roadmap for generating a positive return on such an investment? And what will be the cost of Stargate version 2? $5 trillion? That is roughly the annual revenue of the US federal government! Ironically, Stargate's roadmap towards AGI is via brute force, not intelligent at all.

    Consider that OpenAI, the leading player in the field, is still far from breaking even.
    The skyrocketing costs of training large language models are beginning to resemble a Ponzi scheme, where the promise of future returns alone justifies the ever-increasing expenditures. This raises concerns about the long-term viability of such an approach and whether the AI industry is heading toward a financial reckoning.

    Really? AI Revolution is Losing Steam? Concerns about the sustainability of current AI development and predictions of the future of AI (pub.towardsai.net)

    DeepSeek's practice indicates that when computing power reaches a certain scale, further increasing it has a diminishing effect on model performance. With its more than a dozen optimisations and novel algorithms, it was able to achieve the same or even better performance at a fraction of the cost and resources of other leading LLMs. Some analysts call this a turning point away from computation starvation. It reads: DeepSeek has significantly improved computation efficiency by optimising algorithms and design, challenging the traditional notion that computing power is the ultimate determinant.

    The most important encouragement I got from DeepSeek was that formidable large training datasets are not an insurmountable barrier, and expensive hardware is not a hard limit. With the right skills, determination, and a bold heart, we can conquer them all.

    ML Engineering Redefined

    Unlike most LLM technical reports that only experiment with a small number of new algorithms, DeepSeek very generously presented a long list of new developments:

    - 128K-1M token long context window
    - MLA
    - MoE load balancing
    - GRPO
    - HAI, their self-built, highly efficient training platform
    - Mixed precision training
    - Multi-token prediction
    - Decoupled Rotary Position Embedding
    - First use of RL in LLM training
    - First use of PTX, the assembly language of GPU programming, in model training

    Does this look like DeepSeek copied from other leading companies?
    I think they time-travelled back from 10 years in the future. It is fascinating what DeepSeek has achieved with their top-notch engineering skills. They have also opened up a bunch of new possibilities for ML engineers.

    New Standard of Data Quality

    DeepSeek has made significant strides in understanding the role of training data quality in AI model development. Their research highlights that high-quality data is more impactful than sheer volume, as noisy or biased data can undermine model performance despite extensive computational resources. To address this, DeepSeek employs rigorous data filtering and deduplication, ensuring only relevant and accurate data is used. They also focus on bias mitigation, using techniques like data augmentation, synthetic data generation, and balanced sampling to create diverse, representative datasets.

    DeepSeek advocates for a data-centric approach, prioritising data quality over model architecture improvements. They have developed tools for automated data cleaning, label validation, and error analysis, enabling efficient identification and correction of data issues. Their experiments show that curated datasets lead to more robust and reliable models, even with smaller data sizes, challenging the traditional emphasis on scaling data volume.

    New Possibilities Provided by the Mixed Precision Model

    Low-precision deployment is not new. The most common approach is to take an LLM trained in full precision and deploy it in low-precision mode. The drawback is that low-precision deployment has lower accuracy than full-precision deployment.

    DeepSeek's mixed precision architecture is a groundbreaking innovation that optimises AI model training and inference by combining different numerical precisions. This approach delivers significant benefits for both model performance and downstream application development.
    By using lower precision (mostly FP8) for most calculations, DeepSeek reduces memory usage and computational load, enabling faster training and inference while maintaining model accuracy. Strategic use of higher precision for critical operations ensures that model performance remains robust and reliable. Thus, it strikes a balance between efficiency and accuracy.

    Most LLMs are released in FP32, and developers have to deploy them either to a larger-profile environment or to a low-profile environment using a technique called quantisation. DeepSeek models are released in FP8, which means a 7B DeepSeek model can be deployed to a consumer-grade GPU without performance degradation. That enables developers to experiment on a lower budget, achieve faster inference speeds for real-time applications, or gain higher throughput via a reasonably larger cluster.

    Incredible RL-Based Fine-Tuning

    The novel use of RL-based fine-tuning is another breakthrough. Traditionally, techniques such as Supervised Fine-Tuning (SFT) played a crucial role in improving model performance and domain knowledge adaptation. SFT involves training a pre-trained model further on task-specific labeled datasets to refine its outputs. While effective in many applications, SFT inherently relies on a brute-force method: more data, longer training times, and greater computational demands. Despite its benefits, SFT follows a pattern of diminishing returns, where merely increasing computational resources and data does not proportionally enhance performance. Not to mention the difficulty of collecting task-specific labeled data.

    Unlike traditional fine-tuning methods that rely on static datasets, RL-based fine-tuning leverages dynamic feedback loops to refine model behavior, making it particularly powerful for complex, real-world applications. To be specific, it offers the following benefits:

    Dynamic Adaptation: RL-based fine-tuning allows models to learn from real-time feedback, enabling them to adapt to changing environments and user needs.
    This is especially valuable in applications like recommendation systems and autonomous systems, where conditions are constantly evolving.

    Task-Specific Optimization: By defining specific reward functions, developers can guide models to optimize for particular objectives, such as maximizing user engagement, minimizing errors, or improving efficiency. This targeted approach ensures that models perform exceptionally well in their intended tasks.

    Handling Complex Scenarios: RL excels in environments with sparse or delayed rewards, making it ideal for fine-tuning models in complex scenarios where traditional supervised learning struggles. For example, in robotics or strategic games, RL-based fine-tuning enables models to learn nuanced strategies over time.

    Continuous Improvement: Unlike one-time fine-tuning, RL-based methods enable continuous learning. Models can iteratively improve their performance as they interact with new data and environments, ensuring long-term relevance and accuracy.

    RAG has been widely recognised as a significant advancement in Generative AI technology. However, its lack of reasoning capabilities limits its ability to handle complex queries effectively. Similarly, agentic development also relies on highly accurate, tunable reasoning LLMs. This is where DeepSeek, with its robust reasoning capabilities, comes into play as an ideal complement. I envision a future where reasoning models like DeepSeek seamlessly integrate with RAG and agents to tackle more sophisticated tasks with advanced reasoning.

    Disadvantages of RAG: This is the first part of the RAG analysis (medium.com)

    One feature I particularly admire is RL-based fine-tuning's ability to continuously improve. This is a critical gap in current GenAI development, as it lacks mechanisms for ongoing enhancement. From an application developer's perspective, continuous improvement is essential for scaling a proof-of-concept into a fully-fledged product.
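To make the dynamic-feedback idea concrete, here is a schematic, self-contained toy: a bandit-style learner whose preferences shift toward whatever the reward signal favors. It illustrates the general feedback-loop pattern only; it is not DeepSeek's GRPO or any production fine-tuning recipe, and the reward function is an invented stand-in for real user feedback.

```python
import random

random.seed(0)  # deterministic for the illustration

actions = ["terse", "detailed"]
preference = {"terse": 0.0, "detailed": 0.0}  # learned action values
lr = 0.1

def reward(action: str) -> float:
    # Stand-in for real-time feedback: users in this toy prefer detail.
    return 1.0 if action == "detailed" else 0.2

for _ in range(200):
    # epsilon-greedy choice: mostly exploit, occasionally explore
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(preference, key=preference.get)
    # incremental update toward the observed reward
    preference[action] += lr * (reward(action) - preference[action])

# The feedback loop has shifted preference toward the rewarded behavior.
assert preference["detailed"] > preference["terse"]
```

The same loop structure underlies continuous improvement in deployed systems: as feedback keeps arriving, the update step keeps nudging the model, with no fixed training set or one-off fine-tuning run.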
    DeepSeek's approach not only addresses this need but also sets a new standard for building adaptable and scalable AI solutions.

    High-Performance Team Redefined

    How DeepSeek has managed to catch up to and even surpass OpenAI's top-performing models is phenomenal. What makes this even more astonishing is the contrast in team size: DeepSeek operates with just 136 employees, compared to OpenAI's 3,500. This isn't an isolated case, either. History is filled with examples of small, nimble unicorn companies achieving extraordinary success against all odds:

    - When Eric Schmidt stepped up as Google's CEO in 2001, the company had fewer than 300 employees.
    - Amazon, founded even earlier, had only 158 employees on the eve of its IPO in 1997.
    - When WhatsApp was acquired for $19 billion in 2014, it had only 50 employees.
    - When Instagram was sold for $1 billion in 2012, it had only 13 employees.

    There is one thing we can be sure of: successful innovation requires a chain reaction of creativity within the team and a stroke of good fortune. But why are such companies often unable to maintain their initial momentum as they grow larger? And why do so many big companies fail, despite their ability to offer top salaries, attract the brightest talent, and access far greater resources?

    These questions have sparked many fascinating discussions. I'd like to share a lesson I learned from my mentor at the start of my consulting career: larger corporations tend to have a lower collective IQ.

    This may seem aggressive or even offensive, but it's not what you might think. After a little refinement, the concept could serve as an icebreaker for management consulting. While large companies often employ more intelligent individuals, their complex structures slow down the flow of information and knowledge, hinder cooperation, and make them less responsive to market and technical trends. This is what is meant by a low-IQ enterprise.

    Wenfeng Liang, the CEO of DeepSeek, shared in an interview that his company has a self-organizing team.
    When a young engineer proposed a new idea about an optimal model structure, a team automatically formed around him. The outcome was the very successful MLA: Multi-head Latent Attention. He also mentioned that in his company, the main meeting room always has its door wide open, allowing anyone passing by to join the discussion if they have an idea. Does this sound like your company? That's the difference in enterprise IQ.

    Don't be discouraged if your company isn't like this. Indeed, top-performing teams are rare. Most companies are not designed to stimulate a chain reaction in the team. That is hard to achieve in a small group, and nearly impossible in a large company. Based on our discussion, it's clear that DeepSeek's remarkable success as a small company isn't an exception. When it grows ten times larger, who knows, it could very well become just another average company.

    Top-performing teams are always rare, as rare as getting to partner with an Olympic medalist. If you're lucky enough to be part of one, don't leave for any trivial temptation. You may never get another chance in your life to truly pour your heart into your work with such joy.

    Parting Words

    DeepSeek is a milestone indicating that Generative AI is at a pivotal turning point, transitioning to a fundamentally different style of development and deployment. While the previous additions to the engineer's toolbox were RAG and agents, the design and engineering of large language models are now more accessible than ever, enabling a seamless integration of capabilities that were previously siloed. This shift has made LLM tuning and training significantly more available to application project teams, empowering them to tailor models to specific use cases.
    As a result, the barrier to entry for leveraging cutting-edge AI technologies has been lowered, opening up new opportunities for innovation across industries.

    Looking ahead to 2025, my focus will be on diving deeper into Reinforcement Learning, a critical capability for next-generation LLM fine-tuning and application building. Additionally, I plan to get hands-on with custom LLM tuning, data preparation, and hosting, ensuring I can build and deploy models that are both powerful and performant. By mastering these skills, I aim to be prepared for the next wave of AI-driven solutions.

    Published via Towards AI
  • CISA Adds Palo Alto Networks and SonicWall Flaws to Exploited Vulnerabilities List
    thehackernews.com
    Feb 19, 2025. Ravie Lakshmanan. Threat Intelligence / Vulnerability.

    The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday added two security flaws impacting Palo Alto Networks PAN-OS and SonicWall SonicOS SSLVPN to its Known Exploited Vulnerabilities (KEV) catalog, based on evidence of active exploitation. The flaws are listed below:

    - CVE-2025-0108 (CVSS score: 7.8): An authentication bypass vulnerability in the Palo Alto Networks PAN-OS management web interface that allows an unauthenticated attacker with network access to the management web interface to bypass the authentication normally required and invoke certain PHP scripts
    - CVE-2024-53704 (CVSS score: 8.2): An improper authentication vulnerability in the SSLVPN authentication mechanism that allows a remote attacker to bypass authentication

    Palo Alto Networks has since confirmed to The Hacker News that it has observed active exploitation attempts against CVE-2025-0108, with the company noting that it could be chained with other vulnerabilities like CVE-2024-9474 to allow unauthorized access to unpatched and unsecured firewalls. "Palo Alto Networks has observed exploit attempts chaining CVE-2025-0108 with CVE-2024-9474 and CVE-2025-0111 on unpatched and unsecured PAN-OS web management interfaces," it said in an updated advisory.

    Threat intelligence firm GreyNoise said as many as 25 malicious IP addresses are actively exploiting CVE-2025-0108, with the volume of attacker activity surging 10 times since it was detected nearly a week ago.
    The top three sources of attack traffic are the United States, Germany, and the Netherlands. As for CVE-2024-53704, cybersecurity company Arctic Wolf revealed that threat actors began weaponizing the flaw shortly after a proof-of-concept (PoC) was made available by Bishop Fox.

    In light of active exploitation, Federal Civilian Executive Branch (FCEB) agencies are required to remediate the identified vulnerabilities by March 11, 2025, to secure their networks.
  • Best Moving Companies of 2025
    www.cnet.com
    Our Experts: Written by Joe Supan. Our expert, award-winning staff selects the products we cover and rigorously researches and tests our top picks. If you buy through our links, we may get a commission.

    How we test: CNET's expert staff reviews and rates dozens of new products and services each month, building on more than a quarter century of expertise.

    Our Picks:
    - Best overall moving company: National Van Lines
    - Another great option for long-distance moves: JK Moving Services
    - Best for local moves: North American Van Lines
    - Best international moving company: Interstate International
    - Best moving container: PODS
    - Best truck rental: Penske

    Moving is always stressful, and without the right help, it can easily become overwhelming. Getting a truck and loading it up yourself is always an option; however, hiring a moving company will make the process a lot easier. When you pick from the best moving companies, they will pack, unpack and even drive long distances to help you with your move. These are the companies you can trust to handle all your belongings with care.

    Professional movers cost more than DIY options, but they offer efficiency and expertise. They'll handle the heavy lifting, complex logistics and proper packing techniques that come from daily experience (compared to the average of 12 times most people move in their lifetime).
    To make your moving process simple, we've rounded up the best moving companies operating in 2025 below.

    Best moving companies
    This best list draws on government sources like the General Services Administration's moving services score and the Department of Transportation's safety ratings, as well as online customer reviews and firsthand experience getting estimates for a variety of moves. A good rule of thumb is to compare at least three moving quotes before picking a moving company. Each one should ask for a detailed inventory of your home before offering an estimate -- this is a great way to judge their moving process.

    National Van Lines
    National Van Lines checks every box I looked for. It has a stellar 103 rating from the GSA, an excellent safety track record with the DOT and impressively positive customer reviews. But more than that, it felt like a company I'd trust with all my belongings. It only took about 90 seconds to navigate its phone tree and get a human on the line. Right off the bat, the operator felt more like an advisor than a talking sales pitch. She was ready to set up a virtual home walkthrough immediately, and she answered all of my questions with ease.

    JK Moving Services
    JK Moving Services specializes in corporate, long-distance and international moves, and it has a history of handling complicated moves safely and on time. It received a 104 score from the GSA and a 3.58/5 in customer reviews with the Better Business Bureau. JK also has the lowest rate I saw of trucks and drivers being delayed because of failed inspections. So what kept JK out of the top spot? The quotes I got for long-distance moves were as much as $5,000 higher than other companies'. It's still worth going through the quote process, but if you want to keep costs low, you might find that JK isn't your best option.
North American Van Lines: While you can also use North American Van Lines for long-distance moves, it's one of the few moving companies that had everything I was looking for (a history of safety, high marks from customers, industry certifications) plus the ability to handle local moves. Most interstate moving companies pass local moves on to local movers, but with more than 500 locations around the country, North American Van Lines can handle them in-house. The quote process took a little longer than with some of the other moving companies we evaluated, but once I got someone on the phone, I was able to book an in-home estimate in about eight minutes. With more than 1,400 drivers in the US, North American Van Lines also had more flexibility with the moving date than other companies.

Interstate International: Moving to another country can be incredibly stressful, and it helps to have a company on your side that knows the paperwork inside and out. Interstate International received a 110 score from the GSA, which means that when the US government sends people overseas, this is a company it trusts to handle the details. Interstate also has both certifications we looked for in an international moving company: FAIM quality certification from FIDI and membership in the International Association of Movers.
  • In a last-minute decision, White House decides not to terminate NASA employees | It was not immediately clear what changed.
    arstechnica.com
    Hold the phone: In a last-minute decision, the White House decides not to terminate NASA employees. It was not immediately clear what changed. Eric Berger, Feb 18, 2025, 7:52 pm. [Photo caption: NASA worm. That is all. Credit: Trevor Mahlmann]

Unlike workers at many other federal agencies this week, probationary employees at NASA were not terminated on Tuesday. For much of the day, employees at the space agency anticipated a directive from the White House Office of Personnel Management to fire these employees, but it never came. "We were on pins and needles throughout the day," said one senior official at Johnson Space Center in Houston on Tuesday afternoon.

However, by late in the afternoon, several field center directors received confirmation from the White House that their probationary employees (of which there are more than 1,000 across the agency's headquarters and 10 field centers) would not be terminated. NASA had sought exemptions for all of these employees, who comprise about 6 percent of NASA's workforce. Ars could not confirm whether the reprieve applied to some field centers or all 10 of them.

The Trump administration has sought to fire federal employees at several federal agencies who are in the "probationary" period of their employment. This includes new hires within the last one or two years, as well as long-time employees who have moved into or been promoted into a new position.

So what changed? It was not immediately clear. A NASA spokesperson in Washington, DC, offered no comment on the updated guidance. Two sources indicated it was plausible that private astronaut Jared Isaacman, whom President Trump has nominated to lead the space agency, asked for the cuts to be put on hold. Although this could not be confirmed, it seems reasonable that Isaacman would want to retain some control over where cuts at the agency are made.
Firing all probationary employees (the most expedient way to reduce the size of government) is a blunt instrument. It whacks new hires that the agency may have recruited for key positions, as well as high performers who earned promotions.

The reprieve in these terminations does not necessarily signal that NASA will escape significant budget or employment cuts in the coming months. The administration could still seek to terminate probationary employees. In addition, Ars reported earlier that directors at the agency's field centers have been told to prepare options for a "significant" reduction in force in the coming months. The scope of these cuts has not been defined, and it's likely they would need to be negotiated with Congress.

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and the author of two books: Liftoff, about the rise of SpaceX, and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.
  • By the end of today, NASA's workforce will be about 10 percent smaller | A dark and painful day at a space agency that brings so much light and joy to the world.
    arstechnica.com
    RIF Watch: By the end of today, NASA's workforce will be about 10 percent smaller. A dark and painful day at a space agency that brings so much light and joy to the world. Eric Berger, Feb 18, 2025, 9:52 am. [Photo caption: In this illustration, NASA's OSIRIS-REx spacecraft collects a sample from the asteroid Bennu. The agency's superpower is its capacity to dazzle us. Credit: NASA/Goddard/University of Arizona]

Spread across NASA's headquarters and 10 field centers, which dot the United States from sea to sea, the space agency has had a workforce of nearly 18,000 civil servants. However, by the end of today, that number will have shrunk by about 10 percent since the beginning of the second Trump administration four weeks ago. And the world's preeminent space agency may still face significant additional cuts.

According to sources, about 750 employees at NASA accepted the "fork in the road" offer to take deferred resignation from the space agency later this year. This sounds like a lot of people, but about 1,000 people generally leave the agency every year, so many of these employees may effectively just be getting paid to leave jobs they were already planning to exit.

The culling of "probationary" employees will be more impactful. As it has done at other federal agencies, the Trump administration is generally firing federal employees who are in the "probationary" period of their employment, which includes new hires within the last one or two years as well as long-time employees who have moved into or been promoted into a new position.
About 1,000 or slightly more employees at NASA were affected by these cuts. Adding up the deferred resignations and probationary cuts, the Trump White House has now trimmed about 10 percent of the agency's workforce.

However, the cuts may not stop there. Two sources told Ars that directors at the agency's field centers have been told to prepare options for a "significant" reduction in force in the coming months. The scope of these cuts has not been defined, and it's possible they may not even happen, given that the White House must negotiate budgets for NASA and other agencies with the US Congress. But this directive for further reductions in force casts more uncertainty on an already demoralized workforce and signals that the Trump administration would like to make further cuts.

An awful week

Job losses are always terrible, and this will be a dark and painful day at a space agency that brings so much light and joy to the world. Many of the probationary employees are just starting their careers and were likely thrilled to land a job at NASA to explore the universe. Then all of that youthful energy and hope was extinguished this week.

It's possible to view these losses through a couple of lenses. Yes, NASA is clearly losing some capability with these latest cuts. Many of these hires were likely being counted on to bring new energy into the space agency and become its future discoverers and leaders. And their jobs are being sacrificed for no clear purpose. Is it to increase funding for the military? Is it to pay for tax cuts for the rich? There is a lot of anger that the relatively thin budget line of NASA (less than one-half of 1 percent of the federal budget) is being sliced for such purposes. There is also frustration at the indiscriminate nature of the cuts.
The Trump White House and the Department of Government Efficiency, spearheaded by Elon Musk, have taken a meat-cleaver approach, firing a lot of people at once (and probably not the right people) through a messy and painful process. This is not dissimilar to job cuts during corporate mergers or bankruptcies: it is the fastest possible way to make cuts, there is no empathy, and it is a brutal process.

Are cuts needed?

It is also clear that, as at other federal agencies, there is significant "bloat" in NASA's budget. In some areas, this is plain to see: the space agency has spent in excess of $3 billion a year over the last decade "developing" a heavy-lift rocket, the Space Launch System, which used components from the Space Shuttle and costs an extraordinary amount of money to fly. In the meantime, the private launch industry has been running circles around NASA. Similarly, consider the Orion spacecraft: the program is now two decades old, at a cost of $1 billion a year, and the vehicle has never flown humans into space.

One could go on. Much of the space community has been puzzled as to why NASA has been spending on the order of half a billion dollars a year to develop a Lunar Gateway in an odd orbit around the Moon. It remains years away from launching, and if it ever does fly, it would increase the energy needed to reach the surface of the Moon. The reason, according to multiple sources at the agency when the Gateway was conceived, is that the lunar space station would offer jobs to the flight controllers currently operating the International Space Station, which is due to retire in 2030.

In recent years, NASA has been in the midst of a difficult transition. The agency deserves a lot of credit for nurturing a commercial space industry that is now the envy of the world. But as part of this, NASA has been moving away from owning and operating its rockets, spacecraft and other hardware, and toward buying services from this commercial space industry.
This transition from traditional space to commercial space marks an important step for NASA to remain on the cutting edge of exploration and science rather than trying to compete with US industry. But it is also a painful step.

The key is ensuring that any future cuts at NASA are not indiscriminate. If and when Jared Isaacman is confirmed by the US Senate as the next NASA administrator, it will be up to him and his team to make the programmatic decisions: which parts of the agency are carrying their weight and which are being carried, which investments carry NASA into the future and which drag it into the past. If these future cuts are smart and position NASA for the future, this could all be worth it. If not, then the beloved agency that dares to explore may never recover.
  • Pokémon Go maker Niantic is reportedly selling its games division
    techcrunch.com
    Niantic, the company behind the popular augmented reality game Pokémon Go, is looking to sell its game development business, Bloomberg reported, citing anonymous sources. The company is reportedly exploring a deal with mobile game developer Scopely, which is owned by Saudi Arabia-based Savvy Games Group, to sell the unit for about $3.5 billion. Niantic and Scopely did not immediately respond to requests for comment.

Niantic has been among the few companies able to successfully use its augmented reality chops to build games. Its first title, Ingress, was widely praised for its unique, geography-based take on territory control, but the company truly skyrocketed to fame with Pokémon Go, which took off in 2016 and quickly became a global phenomenon. Its subsequent titles have been relatively successful, but not at the scale Pokémon Go enjoyed. In 2022, the company let go of 8% of its staff and shuttered four projects, including Harry Potter: Wizards Unite. In 2023, it laid off 230 employees and canceled its NBA- and Marvel-related games.

Last year, Niantic updated its Scaniverse app to let users create models of real-world objects and provide the data to developers. In November, the company said it wanted to build a large geospatial model that would use machine learning to understand a scene and connect it to millions of other scenes globally.