• IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029

    By John P. Mello Jr.
    June 11, 2025 5:00 AM PT

    IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system. (Image Credit: IBM)

    IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible.
    The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillion (10⁴⁸) of the world’s most powerful supercomputers to represent.
    “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”
    IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del.
    “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.”
    A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time.
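    The measurement rule behind superposition can be illustrated with a short classical simulation — a sketch of the probability rule only, not of quantum hardware: amplitudes weight the two outcomes, and measuring collapses the state to 0 or 1 with the corresponding probabilities.

```python
import math
import random

# A qubit's state is a|0> + b|1>, with |a|^2 + |b|^2 = 1. Measuring it
# yields 0 with probability |a|^2 and 1 with probability |b|^2. This is
# a classical simulation of that rule, not quantum hardware.
def measure(a: complex, b: complex) -> int:
    p0 = abs(a) ** 2
    assert math.isclose(p0 + abs(b) ** 2, 1.0), "state must be normalized"
    return 0 if random.random() < p0 else 1

random.seed(7)
a = b = 1 / math.sqrt(2)        # equal superposition of |0> and |1>
counts = [0, 0]
for _ in range(10_000):
    counts[measure(a, b)] += 1
print(counts)                   # roughly an even split
```

A state with unequal amplitudes, say a = 0.8 and b = 0.6, would skew the counts toward 0 in the ratio 0.64 to 0.36.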
    Realistic Roadmap
    Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld.
    “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany.
    “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.”
    Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.”
    “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.”
    “IBM has demonstrated consistent progress, has committed $30 billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada.
    “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.”
    Solving the Quantum Error Correction Puzzle
    To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits.
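    The intuition behind clustering physical qubits can be sketched with a classical repetition code. Real quantum codes are far subtler (errors are continuous, and reading a qubit destroys its state), but the scaling behavior is the same: more physical units per logical unit yields a lower logical error rate.

```python
import random

# Toy classical analogue of error correction: one "logical" bit stored
# redundantly across several "physical" bits, decoded by majority vote.
def logical_error_rate(p_physical: float, n_copies: int, trials: int = 100_000) -> float:
    random.seed(42)  # fixed seed so the experiment is reproducible
    errors = 0
    for _ in range(trials):
        # Each physical copy flips independently with probability p_physical.
        flips = sum(random.random() < p_physical for _ in range(n_copies))
        if flips > n_copies // 2:   # majority of copies corrupted: vote fails
            errors += 1
    return errors / trials

rates = {n: logical_error_rate(0.05, n) for n in (1, 3, 5, 7)}
for n, r in rates.items():
    print(f"{n} physical bits -> logical error rate {r:.5f}")
```

With a 5% physical error rate, each added pair of copies cuts the logical error rate by roughly an order of magnitude — the trade-off is the qubit overhead the article describes.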
    “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.”
    IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until now, no clear path to building such a fault-tolerant system without unrealistic engineering overhead had been published.

    Alternative and previous gold-standard, error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.
    In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer.
    One paper outlines the use of quantum low-density parity check (qLDPC) codes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing.
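    The classical ancestor of these codes gives a feel for how parity checks locate errors without inspecting the data bits directly — the property quantum error correction requires. The sketch below uses an ordinary classical Hamming code, not IBM's construction:

```python
# Each row of the parity-check matrix H checks the parity of a few bits.
# The "syndrome" of a received word flags and locates an error without
# ever reading the data bits themselves.
H = [  # Hamming(7,4) parity-check matrix, one check per row
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(word):
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

codeword = [1, 0, 1, 1, 0, 1, 0]   # valid codeword: every check is 0
assert syndrome(codeword) == [0, 0, 0]

received = codeword[:]
received[4] ^= 1                   # flip one bit (an "error")
s = syndrome(received)             # non-zero syndrome flags it

# For a single flip, the syndrome equals the column of H at the flipped
# position, so matching columns pinpoints the error.
err_pos = next(i for i in range(7) if [row[i] for row in H] == s)
received[err_pos] ^= 1             # correct the flip
print("corrected bit", err_pos)    # -> corrected bit 4
```

Low-density variants keep each check touching only a few bits, which is what makes the decoding hardware overhead manageable at scale.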
    According to IBM, a practical fault-tolerant quantum architecture must:

    Suppress enough errors for useful algorithms to succeed
    Prepare and measure logical qubits during computation
    Apply universal instructions to logical qubits
    Decode measurements from logical qubits in real time and guide subsequent operations
    Scale modularly across hundreds or thousands of logical qubits
    Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources

    Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained.
    “Only certain computing workloads, such as random circuit sampling [RCS], can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”
    Q-Day Approaching Faster Than Expected
    For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
    “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif.
    “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”

    “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.
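    Why fault-tolerant machines threaten asymmetric ciphers can be seen in a toy RSA example (insecure, deliberately tiny numbers): the public key is safe only as long as factoring its modulus is infeasible, which is precisely what Shor's algorithm on a large fault-tolerant quantum computer would change.

```python
# Toy RSA with absurdly small primes. Security rests entirely on the
# difficulty of factoring n; a quantum computer running Shor's algorithm
# factors n efficiently, recovering the private key from public data.
p, q = 61, 53                       # secret primes
n = p * q                           # public modulus: 3233
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (modular inverse)

msg = 42
cipher = pow(msg, e, n)             # encrypt with the public key
assert pow(cipher, d, n) == msg     # decrypt with the private key

# An attacker who can factor n rebuilds d from the public key alone.
# Trial division works here only because n is tiny; at 2048 bits it is
# infeasible classically -- but not for Shor's algorithm.
fp = next(i for i in range(2, n) if n % i == 0)
fq = n // fp
d_recovered = pow(e, -1, (fp - 1) * (fq - 1))
assert pow(cipher, d_recovered, n) == msg
print("private key recovered by factoring n =", n)
```

Post-quantum cryptography replaces the factoring (and discrete-log) assumption with problems not known to fall to quantum algorithms, which is the migration the experts quoted here are urging.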
    Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing computational breakthroughs expected within the next few years.
    “It leads to the question of whether the U.S. government’s original PQC [post-quantum cryptography] preparation date of 2030 is still a safe date,” he told TechNewsWorld.
    “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO [Executive Order] that relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.”
    “Most US companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”
    “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

  • Mistral AI Introduces Codestral Embed: A High-Performance Code Embedding Model for Scalable Retrieval and Semantic Understanding

    Modern software engineering faces growing challenges in accurately retrieving and understanding code across diverse programming languages and large-scale codebases. Existing embedding models often struggle to capture the deep semantics of code, resulting in poor performance in tasks such as code search, retrieval-augmented generation (RAG), and semantic analysis. These limitations hinder developers’ ability to efficiently locate relevant code snippets, reuse components, and manage large projects effectively. As software systems grow increasingly complex, there is a pressing need for more effective, language-agnostic representations of code that can power reliable and high-quality retrieval and reasoning across a wide range of development tasks.
    Mistral AI has introduced Codestral Embed, a specialized embedding model built specifically for code-related tasks. Designed to handle real-world code more effectively than existing solutions, it enables powerful retrieval capabilities across large codebases. What sets it apart is its flexibility—users can adjust embedding dimensions and precision levels to balance performance with storage efficiency. Even at lower dimensions, such as 256 with int8 precision, Codestral Embed reportedly surpasses top models from competitors like OpenAI, Cohere, and Voyage, offering high retrieval quality at a reduced storage cost.
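    The storage arithmetic behind that claim is easy to sketch. The dimensions and the scaling scheme below are illustrative assumptions, not Codestral Embed's actual quantization:

```python
import random

# Shrinking an embedding from 1536 float32 dims to 256 int8 dims.
random.seed(0)
full = [random.gauss(0, 1) for _ in range(1536)]   # full-precision embedding
full_bytes = len(full) * 4                         # float32: 4 bytes per dim

truncated = full[:256]                             # keep the leading dims
scale = max(abs(x) for x in truncated) / 127       # map the range onto int8
quantized = [round(x / scale) for x in truncated]  # each value fits -127..127
q_bytes = len(quantized)                           # int8: 1 byte per dim

print(f"float32 x 1536: {full_bytes} bytes")       # 6144 bytes
print(f"int8    x  256: {q_bytes} bytes")          # 256 bytes, 24x smaller

# Dequantization stays within half a quantization step of the original.
err = max(abs(q * scale - x) for q, x in zip(quantized, truncated))
assert err <= scale / 2 + 1e-9
```

A 24x reduction per vector compounds quickly across a vector database indexing millions of code snippets, which is why the low-dimension, low-precision operating point matters for retrieval cost.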
    Beyond basic retrieval, Codestral Embed supports a wide range of developer-focused applications. These include code completion, explanation, editing, semantic search, and duplicate detection. The model can also help organize and analyze repositories by clustering code based on functionality or structure, eliminating the need for manual supervision. This makes it particularly useful for tasks like understanding architectural patterns, categorizing code, or supporting automated documentation, ultimately helping developers work more efficiently with large and complex codebases. 
    Codestral Embed is tailored for understanding and retrieving code efficiently, especially in large-scale development environments. It powers retrieval-augmented generation by quickly fetching relevant context for tasks like code completion, editing, and explanation—ideal for use in coding assistants and agent-based tools. Developers can also perform semantic code searches using natural language or code queries to find relevant snippets. Its ability to detect similar or duplicated code helps with reuse, policy enforcement, and cleaning up redundancy. Additionally, it can cluster code by functionality or structure, making it useful for repository analysis, spotting architectural patterns, and enhancing documentation workflows. 
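    The retrieval flow can be sketched as follows. `embed` here is a toy bag-of-words stand-in for a real embedding model call, but the structure — embed the corpus once, then rank snippets by cosine similarity to the embedded query — is the same:

```python
import math

def cosine(a, b):
    # Cosine similarity: angle between vectors, ignoring magnitude.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

snippets = [
    "def read_csv(path): parse a comma separated file",
    "def send_email(to, subject, body): smtp client",
    "def load_json(path): load json file from disk",
]
query = "load a file from disk and parse it"

# Toy stand-in for an embedding model: token-count vector over a shared
# vocabulary. A real model would return dense semantic vectors instead.
vocab = sorted({t for text in snippets + [query] for t in text.lower().split()})
def embed(text):
    toks = text.lower().split()
    return [float(toks.count(w)) for w in vocab]

index = [(s, embed(s)) for s in snippets]   # embed the corpus once
best = max(index, key=lambda item: cosine(embed(query), item[1]))
print(best[0])                              # the json-loading snippet ranks first
```

The same index serves duplicate detection (pairs above a similarity threshold) and clustering (grouping vectors by proximity), the other applications listed above.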

    Codestral Embed is a specialized embedding model designed to enhance code retrieval and semantic analysis tasks. It surpasses existing models, such as OpenAI’s and Cohere’s, in benchmarks like SWE-Bench Lite and CodeSearchNet. The model offers customizable embedding dimensions and precision levels, allowing users to effectively balance performance and storage needs. Key applications include retrieval-augmented generation, semantic code search, duplicate detection, and code clustering. Available via API at $0.15 per million tokens, with a 50% discount for batch processing, Codestral Embed supports various output formats and dimensions, catering to diverse development workflows.
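The pricing arithmetic is straightforward. A small helper (the function name is hypothetical; the $0.15-per-million-token rate and 50% batch discount are the figures stated in the article) makes it concrete:

```python
def embedding_cost(total_tokens: int, price_per_million: float = 0.15,
                   batch: bool = False) -> float:
    """Estimate API cost in USD; batch jobs get the stated 50% discount."""
    rate = price_per_million * (0.5 if batch else 1.0)
    return total_tokens / 1_000_000 * rate

print(embedding_cost(10_000_000))              # ~1.50 USD interactive
print(embedding_cost(10_000_000, batch=True))  # ~0.75 USD batched
```

Embedding a ten-million-token codebase thus costs on the order of a dollar and a half, or half that via the batch endpoint.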

    In conclusion, Codestral Embed offers customizable embedding dimensions and precision levels, enabling developers to strike a balance between performance and storage efficiency. Benchmark evaluations indicate that Codestral Embed surpasses existing models like OpenAI’s and Cohere’s in various code-related tasks, including retrieval-augmented generation and semantic code search. Its applications span from identifying duplicate code segments to facilitating semantic clustering for code analytics. Available through Mistral’s API, Codestral Embed provides a flexible and efficient solution for developers seeking advanced code understanding capabilities.

    Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.
  • Trump figured out how to hit Harvard where it really hurts

    The Trump administration’s recent decision to bar international students from attending Harvard University was less a policy decision than an act of war. The White House had hoped its opening salvo against the nation’s oldest university would yield the kind of immediate capitulation offered by Columbia University. When Harvard chose to fight back instead, Trump decided to hit the university where it hurts most. The administration’s actions are illegal and were immediately stayed by a federal judge. But that won’t prevent real harm to students and higher learning. While Harvard has a famously selective undergraduate college, most of the university’s students are in graduate or professional school, and more than a third of those older students arrive from other countries. Overall, more than a quarter of Harvard’s 25,000 students come from outside the United States, a percentage that has steadily grown over time. The proportion of Harvard’s international students has increased 38 percent since 2006. Even if the courts continue to block this move, it will be difficult for anyone to study there knowing they might be deported or imprisoned by a hostile regime — even if they’re the future queen of Belgium. And an exodus of international students will end up harming universities far beyond Harvard, as well as American research and innovation itself. The question looming over higher education is whether the international student ban is merely the next escalation of the Trump administration’s apocalyptic campaign against a handful of elite institutions (as seen by the administration’s announcement Tuesday that it would cancel its remaining federal contracts with Harvard) — or the beginning of a broader attempt to apply “America First” protectionist principles to one of the nation’s most valuable and successful export goods: higher learning. The rapid growth of international college students in the 21st century represents exactly the kind of global cooperation the isolationists in the White House would love to destroy.
International students helped buoy American universities after the Great Recession

In recent decades, international enrollment has shaped, and in some places transformed, higher learning across the country. According to the State Department, the number of annual F-1 student visas issued to international students nearly tripled from 216,000 in 2003 to 644,000 in 2015. And while many nations sent more students to America during that time, the story of international college enrollment over the last two decades has been dominated by a single country: the People’s Republic of China. In 1997, roughly 12,000 F-1 visas were issued to Chinese students; this was only a third of the number issued to the two biggest student senders that year, South Korea and Japan. Chinese enrollment started to accelerate in the early aughts and then exploded: 114,000 by 2010; 190,000 in 2012; and a peak of 274,000 in 2015. The change was driven by profound social and economic shifts within China. Mao Zedong’s Cultural Revolution essentially shut down university enrollment for a decade. When it ended in 1976, there was a huge backlog of college students who graduated in the 1980s into the economic liberalization of Deng Xiaoping. Many of them prospered and had children — often only one — who came of age in the early 2000s. Attending an American university was a status marker and an opportunity to become a global citizen. At the same time, many colleges were newly hungry for international enrollment. The Great Recession savaged college finances. State governments slashed funding for public universities while families had less money to pay tuition at private colleges. Public universities offer lower prices to state residents and private schools typically discount their sticker-price tuition by more than 50 percent through grants and scholarships. But those rules only apply to Americans. Recruiting so-called full-pay international students became a key strategy for shoring up the bottom line.
Colleges weren’t always judicious in managing the influx of students from overseas. Purdue University enrolled so many Chinese students so quickly that in 2013 one of them noted that a main benefit of traveling 7,000 miles to West Lafayette, Indiana, was improving his language skills — by talking to students from other regions of China. That same year, an administrator at a second-tier private college in Philadelphia told me that the college tried to keep enrollment from any one country below a certain threshold “or else we’d have to build them a student center or something.” While federal law prohibits colleges from paying recruiters based on the number of students they sign up, this, too, only applies within American borders. International students sometimes pay middlemen large sums to help them navigate the huge and varied global college landscape. While many are legitimate, some are prone to falsehoods and fraud. At the same time, colleges also used the new influx of students to expand course offerings, build strong connections overseas, and diversify their academic communities. One of the great educational benefits of going to college is learning among people from different experiences and backgrounds. There has likely never been a better place to do that than an American college campus in the 21st century. The most talented international students helped drive American economic productivity and research supremacy to new heights. F-1 visas declined sharply in 2016, in part because of an administrative change that allowed Chinese students to receive five-year visas instead of reapplying every year. But the market itself was also shifting. The Chinese government invested enormous sums to build the capacity of its own national research universities, giving students better options to stay home. Geopolitical tensions were growing, and American voters chose to elect a rabidly xenophobic president in Donald Trump. 
Covid radically depressed international enrollment in 2020, but even after the recovery, Chinese F-1 visas in 2023 were only a third of the 2015 peak. Colleges managed by recruiting students from other countries to take their place. India crossed 100,000 student visas for the first time in 2022. At the turn of the century, fewer than 1,000 Vietnamese students studied in America. Today, Vietnam is our fourth-largest source of international students, more than Japan, Mexico, Germany, or Brazil. Enrollment from Ghana has quintupled in the last 10 years.

A catastrophe for American science and innovation

If the Trump administration expands its scorched-earth student visa strategy beyond Harvard, it won’t just be the liberal enclaves and snooty college towns that suffer. Communities across the country will feel the hurt, urban and rural, in red states and blue. Some colleges might tip into bankruptcy. Others will make fewer hires and produce fewer graduates for local employers. Even before the visa ban, the government of Norway set aside money to lure away American scholars whose research has been devastated by deep Trump administration cuts to scientific research. Other countries are sure to follow. And if international students stop coming to the US, it will be a catastrophe for American leadership in science and technology. World-class research universities are magnets for global talent. Cambridge, Massachusetts, is a worldwide center of medical breakthroughs because Harvard and its neighbor MIT attract some of the smartest people in the world, who often stay in the United States to found new companies and conduct research. The same dynamic drives technology innovation around Stanford and UC Berkeley in Silicon Valley, and in university towns nationwide. If you or a loved one benefited from a new cancer treatment, there’s a good chance the person who saved your life came to America on the kind of student visa the Trump administration is trying to destroy.
Like printing the global reserve currency or having a good relationship with Canada, getting the pick of international students is one of those incredibly valuable things that Americans won’t fully appreciate until someone is stupid enough to throw it away. In 2021, JD Vance told a group of movement conservatives that “we have to honestly and aggressively attack the universities in this country.” The administration has more than made good on his word, in part because the electorate is rapidly reorganizing around educational attainment, with college graduates clustering in the Democratic Party and nongraduates moving to the Republican side. Trump and his minions see elite colleges and universities as enemy fortresses in the culture wars, training grounds for the opposition that must be razed and broken.

Modern colleges look like the future that MAGA forces most fear. Visitors to campus today see students from scores of global communities, speaking multiple languages and practicing different cultural traditions. Places where people from other countries are welcome, and no single race, nationality, or religion reigns supreme. People like JD Vance are so terrified by this vision that they would rather destroy America’s world-leading higher education system and terrorize hundreds of thousands of people who are in this country legally and only want to learn.
    #trump #figured #out #how #hit
    Trump figured out how to hit Harvard where it really hurts
    The Trump administration’s recent decision to bar international students from attending Harvard University was less a policy decision than an act of war. The White House had hoped its opening salvo against the nation’s oldest university would yield the kind of immediate capitulation offered by Columbia University. When Harvard chose to fight back instead, Trump decided to hit the university where it hurts most. The administration’s actions are illegal and were immediately stayed by a federal judge. But that won’t prevent real harm to students and higher learning. While Harvard has a famously selective undergraduate college, most of the university’s students are in graduate or professional school, and more than a third of those older students arrive from other countries. Overall, more than a quarter of Harvard’s 25,000 students come from outside the United States, a percentage that has steadily grown over time. The proportion of Harvard’s international students has increased 38 percent since 2006. Even if the courts continue to block this move, it will be difficult for anyone to study there knowing they might be deported or imprisoned by a hostile regime — even if they’re the future queen of Belgium. And an exodus of international students will end up harming universities far beyond Harvard, as well as American research and innovation itself. The question looming over higher education is whether the international student ban is merely the next escalation of the Trump administration’s apocalyptic campaign against a handful of elite institutions— or the beginning of a broader attempt to apply “America First” protectionist principles to one the nation’s most valuable and successful export goods: higher learning. The rapid growth of international college students in the 21st century represents exactly the kind of global cooperation the isolationists in the White House would love to destroy. 
International students helped buoy American universities after the Great RecessionIn recent decades, international enrollment has shaped, and in some places transformed, higher learning across the country. According to the State Department, the number of annual F-1 student visas issued to international students nearly tripled from 216,000 in 2003 to 644,000 in 2015. And while many nations sent more students to America during that time, the story of international college enrollment over the last two decades has been dominated by a single country: the People’s Republic of China. In 1997, roughly 12,000 F-1 visas were issued to Chinese students; this was only a third of the number issued to the two biggest student senders that year, South Korea and Japan. Chinese enrollment started to accelerate in the early aughts and then exploded: 114,000 by 2010; 190,000 in 2012; and a peak of 274,000 in 2015. The change was driven by profound social and economic shifts within China. Mao Zedong’s Cultural Revolution essentially shut down university enrollment for a decade. When it ended in 1976, there was a huge backlog of college students who graduated in the 1980s into the economic liberalization of Deng Xiaoping. Many of them prospered and had children — often only one — who came of age in the early 2000s. Attending an American university was a status marker and an opportunity to become a global citizen. At the same time, many colleges were newly hungry for international enrollment. The Great Recession savaged college finances. State governments slashed funding for public universities while families had less money to pay tuition at private colleges. Public universities offer lower prices to state residents and private schools typically discount their sticker-price tuition by more than 50 percent through grants and scholarships. But those rules only apply to Americans. Recruiting so-called full-pay international students became a key strategy for shoring up the bottom line. 
Colleges weren’t always judicious in managing the influx of students from overseas. Purdue University enrolled so many Chinese students so quickly that in 2013 one of them noted that a main benefit of traveling 7,000 miles to West Lafayette, Indiana, was improving his language skills — by talking to students from other regions of China. That same year, an administrator at a second-tier private college in Philadelphia told me that the college tried to keep enrollment from any one country below a certain threshold “or else we’d have to build them a student center or something.” While federal law prohibits colleges from paying recruiters based on the number of students they sign up, this, too, only applies within American borders. International students sometimes pay middlemen large sums to help them navigate the huge and varied global college landscape. While many are legitimate, some are prone to falsehoods and fraud. At the same time, colleges also used the new influx of students to expand course offerings, build strong connections overseas, and diversify their academic communities. One of the great educational benefits of going to college is learning among people from different experiences and backgrounds. There has likely never been a better place to do that than an American college campus in the 21st century. The most talented international students helped drive American economic productivity and research supremacy to new heights. F-1 visas declined sharply in 2016, in part because of an administrative change that allowed Chinese students to receive five-year visas instead of reapplying every year. But the market itself was also shifting. The Chinese government invested enormous sums to build the capacity of its own national research universities, giving students better options to stay home. Geopolitical tensions were growing, and American voters chose to elect a rabidly xenophobic president in Donald Trump. 
Covid radically depressed international enrollment in 2020, but even after the recovery, Chinese F-1 visas in 2023 were only a third of the 2015 peak. Colleges managed by recruiting students from other countries to take their place. India crossed 100,000 student visa for the first time in 2022. At the turn of the century, fewer than 1,000 Vietnamese students studied in America. Today, Vietnam is our fourth-largest source of international students, more than Japan, Mexico, Germany, or Brazil. Enrollment from Ghana has quintupled in the last 10 years.A catastrophe for American science and innovationIf the Trump administration expands its scorched-earth student visa strategy beyond Harvard, it won’t just be the liberal enclaves and snooty college towns that suffer. Communities across the country will feel the hurt, urban and rural, in red states and blue. Some colleges might tip into bankruptcy. Others will make fewer hires and produce fewer graduates for local employers. Even before the visa ban, the government of Norway set aside money to lure away American scholars whose research has been devastated by deep Trump administration cuts to scientific research. Other countries are sure to follow. And if international students stop coming to the US, it will be a catastrophe for American leadership in science and technology. World-class research universities are magnets for global talent. Cambridge, Massachusetts, is a worldwide center of medical breakthroughs because Harvard and its neighbor MIT attract some of the smartest people in the world, who often stay in the United States to found new companies and conduct research. The same dynamic drives technology innovation around Stanford and UC Berkeley in Silicon Valley, and in university towns nationwide. If you or a loved one benefited from a new cancer treatment, there’s a good chance the person who saved your life came to America on the kind of student visa the Trump administration is trying to destroy. 
Like printing the global reserve currency or having a good relationship with Canada, getting the pick of international students is one of those incredibly valuable things that Americans won’t fully appreciate until someone is stupid enough to throw it away. In 2021, JD Vance told a group of movement conservatives that “we have to honestly and aggressively attack the universities in this country.” The administration has more than made good on his word, in part because the electorate is rapidly reorganizing around education attainment, with college graduates clustering in the Democratic party and nongraduates moving to the Republican side. Trump and his minions see elite colleges and universities as enemy fortresses in the culture wars, training grounds for the opposition that must be razed and broken.Modern colleges look like the future that MAGA forces most fear. Visitors to campus today see students from scores of global communities, speaking multiple languages and practicing different cultural traditions. Places where people from other countries are welcome, and no single race, nationality, or religion reigns supreme. People like JD Vance are so terrified by this vision that they would rather destroy America’s world-leading higher education system and terrorize hundreds of thousands of people who are in this country legally and only want to learn. See More: #trump #figured #out #how #hit
    WWW.VOX.COM
    Trump figured out how to hit Harvard where it really hurts
    The Trump administration’s recent decision to bar international students from attending Harvard University was less a policy decision than an act of war. The White House had hoped its opening salvo against the nation’s oldest university would yield the kind of immediate capitulation offered by Columbia University. When Harvard chose to fight back instead, Trump decided to hit the university where it hurts most.

    The administration’s actions are illegal and were immediately stayed by a federal judge. But that won’t prevent real harm to students and higher learning. While Harvard has a famously selective undergraduate college, most of the university’s students are in graduate or professional school, and more than a third of those older students arrive from other countries. Overall, more than a quarter of Harvard’s 25,000 students come from outside the United States, a percentage that has steadily grown over time: the proportion of international students has increased 38 percent since 2006.

    Even if the courts continue to block this move, it will be difficult for anyone to study there knowing they might be deported or imprisoned by a hostile regime — even if they’re the future queen of Belgium. And an exodus of international students will end up harming universities far beyond Harvard, as well as American research and innovation itself. The question looming over higher education is whether the international student ban is merely the next escalation of the Trump administration’s apocalyptic campaign against a handful of elite institutions (as seen by the administration’s announcement Tuesday that it would cancel its remaining federal contracts with Harvard) — or the beginning of a broader attempt to apply “America First” protectionist principles to one of the nation’s most valuable and successful exports: higher learning.
The rapid growth of international college students in the 21st century represents exactly the kind of global cooperation the isolationists in the White House would love to destroy.

International students helped buoy American universities after the Great Recession

In recent decades, international enrollment has shaped, and in some places transformed, higher learning across the country. According to the State Department, the number of annual F-1 student visas issued to international students nearly tripled from 216,000 in 2003 to 644,000 in 2015. And while many nations sent more students to America during that time, the story of international college enrollment over the last two decades has been dominated by a single country: the People’s Republic of China. In 1997, roughly 12,000 F-1 visas were issued to Chinese students; this was only a third of the number issued to the two biggest student senders that year, South Korea and Japan. Chinese enrollment started to accelerate in the early aughts and then exploded: 114,000 by 2010; 190,000 in 2012; and a peak of 274,000 in 2015.

The change was driven by profound social and economic shifts within China. Mao Zedong’s Cultural Revolution essentially shut down university enrollment for a decade. When it ended in 1976, there was a huge backlog of college students who graduated in the 1980s into the economic liberalization of Deng Xiaoping. Many of them prospered and had children — often only one — who came of age in the early 2000s. Attending an American university was a status marker and an opportunity to become a global citizen.

At the same time, many colleges were newly hungry for international enrollment. The Great Recession savaged college finances. State governments slashed funding for public universities while families had less money to pay tuition at private colleges.
Public universities offer lower prices to state residents and private schools typically discount their sticker-price tuition by more than 50 percent through grants and scholarships. But those rules only apply to Americans. Recruiting so-called full-pay international students became a key strategy for shoring up the bottom line.

Colleges weren’t always judicious in managing the influx of students from overseas. Purdue University enrolled so many Chinese students so quickly that in 2013 one of them noted that a main benefit of traveling 7,000 miles to West Lafayette, Indiana, was improving his language skills — by talking to students from other regions of China. That same year, an administrator at a second-tier private college in Philadelphia told me that the college tried to keep enrollment from any one country below a certain threshold “or else we’d have to build them a student center or something.”

While federal law prohibits colleges from paying recruiters based on the number of students they sign up, this, too, only applies within American borders. International students sometimes pay middlemen large sums to help them navigate the huge and varied global college landscape. While many are legitimate, some are prone to falsehoods and fraud.

At the same time, colleges also used the new influx of students to expand course offerings, build strong connections overseas, and diversify their academic communities. One of the great educational benefits of going to college is learning among people from different experiences and backgrounds. There has likely never been a better place to do that than an American college campus in the 21st century. The most talented international students helped drive American economic productivity and research supremacy to new heights.

F-1 visas declined sharply in 2016, in part because of an administrative change that allowed Chinese students to receive five-year visas instead of reapplying every year. But the market itself was also shifting.
The Chinese government invested enormous sums to build the capacity of its own national research universities, giving students better options to stay home. Geopolitical tensions were growing, and American voters chose to elect a rabidly xenophobic president in Donald Trump. Covid radically depressed international enrollment in 2020, but even after the recovery, Chinese F-1 visas in 2023 were only a third of the 2015 peak.

Colleges managed by recruiting students from other countries to take their place. India crossed 100,000 student visas for the first time in 2022. At the turn of the century, fewer than 1,000 Vietnamese students studied in America. Today, Vietnam is our fourth-largest source of international students, more than Japan, Mexico, Germany, or Brazil. Enrollment from Ghana has quintupled in the last 10 years.

A catastrophe for American science and innovation

If the Trump administration expands its scorched-earth student visa strategy beyond Harvard, it won’t just be the liberal enclaves and snooty college towns that suffer. Communities across the country will feel the hurt, urban and rural, in red states and blue. Some colleges might tip into bankruptcy. Others will make fewer hires and produce fewer graduates for local employers. Even before the visa ban, the government of Norway set aside money to lure away American scholars whose research has been devastated by deep Trump administration cuts to scientific research. Other countries are sure to follow.

And if international students stop coming to the US, it will be a catastrophe for American leadership in science and technology. World-class research universities are magnets for global talent. Cambridge, Massachusetts, is a worldwide center of medical breakthroughs because Harvard and its neighbor MIT attract some of the smartest people in the world, who often stay in the United States to found new companies and conduct research.
The same dynamic drives technology innovation around Stanford and UC Berkeley in Silicon Valley, and in university towns nationwide. If you or a loved one benefited from a new cancer treatment, there’s a good chance the person who saved your life came to America on the kind of student visa the Trump administration is trying to destroy. Like printing the global reserve currency or having a good relationship with Canada, getting the pick of international students is one of those incredibly valuable things that Americans won’t fully appreciate until someone is stupid enough to throw it away.

In 2021, JD Vance told a group of movement conservatives that “we have to honestly and aggressively attack the universities in this country.” The administration has more than made good on his word, in part because the electorate is rapidly reorganizing around educational attainment, with college graduates clustering in the Democratic Party and nongraduates moving to the Republican side. Trump and his minions see elite colleges and universities as enemy fortresses in the culture wars, training grounds for the opposition that must be razed and broken.

Modern colleges look like the future that MAGA forces most fear. Visitors to campus today see students from scores of global communities, speaking multiple languages and practicing different cultural traditions. Places where people from other countries are welcome, and no single race, nationality, or religion reigns supreme. People like JD Vance are so terrified by this vision that they would rather destroy America’s world-leading higher education system and terrorize hundreds of thousands of people who are in this country legally and only want to learn.
  • Penguin poop may help preserve Antarctic climate

    smelly shield

    Penguin poop may help preserve Antarctic climate

    Ammonia aerosols from penguin guano likely play a part in the formation of heat-shielding clouds.

    Bob Berwyn, Inside Climate News



    May 24, 2025 7:07 am

    Credit: Getty


    This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.
    New research shows that penguin guano in Antarctica is an important source of ammonia aerosol particles that help drive the formation and persistence of low clouds, which cool the climate by reflecting some incoming sunlight back to space.
    The findings reinforce the growing awareness that Earth’s intricate web of life plays a significant role in shaping the planetary climate. Even at the small levels measured, the ammonia particles from the guano interact with sulfur-based aerosols from ocean algae to start a chemical chain reaction that forms billions of tiny particles that serve as nuclei for water vapor droplets.
    The low marine clouds that often cover big tracts of the Southern Ocean around Antarctica are a wild card in the climate system because scientists don’t fully understand how they will react to human-caused heating of the atmosphere and oceans. One recent study suggested that the big increase in the annual global temperature during 2023 and 2024 that has continued into this year was caused in part by a reduction of that cloud cover.
    “I’m constantly surprised at the depth of how one small change affects everything else,” said Matthew Boyer, a coauthor of the new study and an atmospheric scientist at the University of Helsinki’s Institute for Atmospheric and Earth System Research. “This really does show that there is a deep connection between ecosystem processes and the climate. And really, it’s the synergy between what’s coming from the oceans, from the sulfur-producing species, and then the ammonia coming from the penguins.”
    Climate survivors
    Aquatic penguins evolved from flying birds about 60 million years ago, shortly after the age of dinosaurs, and have persisted through multiple, slow, natural cycles of ice ages and warmer interglacial eras, surviving climate extremes by migrating to and from pockets of suitable habitat, called climate refugia, said Rose Foster-Dyer, a marine and polar ecologist with the University of Canterbury in New Zealand.
    A 2018 study that analyzed the remains of an ancient “super colony” of the birds suggests there may have been a “penguin optimum” climate window between about 4,000 and 2,000 years ago, at least for some species in some parts of Antarctica, she said. Various penguin species have adapted to different habitat niches and thus will face different impacts from human-caused warming, she said.

    Foster-Dyer has recently done penguin research around the Ross Sea, and said that climate change could open more areas for land-breeding Adélie penguins, which don’t breed on ice like some other species.
    “There’s evidence that this whole area used to have many more colonies … which could possibly be repopulated in the future,” she said. She is also more optimistic than some scientists about the future for emperor penguins, the largest species of the group, she added.
    “They breed on fast ice, and there’s a lot of publications coming out about how the populations might be declining and their habitat is hugely threatened,” she said. “But they’ve lived through so many different cycles of the climate, so I think they’re more adaptable than people currently give them credit for.”
    In total, about 20 million breeding pairs of penguins nest in vast colonies all around the frozen continent. Some of the largest colonies, with up to 1 million breeding pairs, can cover several square miles. There aren’t any solid estimates for the total amount of guano produced by the flightless birds annually, but some studies have found that individual colonies can produce several hundred tons. Several new penguin colonies were discovered recently when their droppings were spotted in detailed satellite images.
    A few penguin colonies have grown recently while others appear to be shrinking, but in general, their habitat is considered threatened by warming and changing ice conditions, which affect their food supplies. The speed of human-caused warming, for which there is no precedent in paleoclimate records, may exacerbate the threat to penguins, which evolve slowly compared to many other species, Foster-Dyer said.
    “Everything’s changing at such a fast rate, it’s really hard to say much about anything,” she said.
    Recent research has shown how other types of marine life are also important to the global climate system. Nutrients from bird droppings help fertilize blooms of oxygen-producing plankton, and huge swarms of fish that live in the middle layers of the ocean cycle carbon vertically through the water, ultimately depositing it in a generally stable sediment layer on the seafloor.

    Tricky measurements
    Boyer said the new research started as a follow-up project to other studies of atmospheric chemistry in the same area, near the Argentine Marambio Base on an island along the Antarctic Peninsula. Observations by other teams suggested it could be worth specifically trying to look at ammonia, he said.
    Boyer and the other scientists set up specialized equipment to measure the concentration of ammonia in the air from January to March 2023. They found that, when the wind blew from the direction of a colony of about 60,000 Adélie penguins about 5 miles away, the ammonia concentration increased to as high as 13.5 parts per billion—more than 1,000 times higher than the background reading. Even after the penguins migrated from the area toward the end of February, the ammonia concentration was still more than 100 times as high as the background level.
    “We have one instrument that we use in the study to give us the chemistry of gases as they’re actually clustering together,” he said.
    “In general, ammonia in the atmosphere is not well-measured because it’s really difficult to measure, especially if you want to measure at a very high sensitivity, if you have low concentrations like in Antarctica,” he said.
    Penguin-scented winds
    The goal was to determine where the ammonia is coming from, including testing a previous hypothesis that the ocean surface could be the source, he said.
    But the size of the penguin colonies made them the most likely source.
    “It’s well known that sea birds give off ammonia. You can smell them. The birds stink,” he said. “But we didn’t know how much there was. So what we did with this study was to quantify ammonia and to quantify its impact on the cloud formation process.”
    The scientists had to wait until the wind blew from the penguin colony toward the research station.
    “If we’re lucky, the wind blows from that direction and not from the direction of the power generator,” he said. “And we were lucky enough that we had one specific event where the winds from the penguin colony persisted long enough that we were actually able to track the growth of the particles. You could be there for a year, and it might not happen.”

    The ammonia from the guano does not form the particles but supercharges the process that does, Boyer said.
    “It’s really the dimethyl sulfide from phytoplankton that gives off the sulfur,” he said. “The ammonia enhances the formation rate of particles. Without ammonia, sulfuric acid can form new particles, but with ammonia, it’s 1,000 times faster, and sometimes even more, so we’re talking up to four orders of magnitude faster because of the guano.”
    This is important in Antarctica specifically because there are not many other sources of particles, such as pollution or emissions from trees, he added.
    “So the strength of the source matters in terms of its climate effect over time,” he said. “And if the source changes, it’s going to change the climate effect.”
    It will take more research to determine if penguin guano has a net cooling effect on the climate. But in general, he said, if the particles transport out to sea and contribute to cloud formation, they will have a cooling effect.
    “What’s also interesting,” he said, “is if the clouds are over ice surfaces, it could actually lead to warming because the clouds are less reflective than the ice beneath.” In that case, the clouds could actually reduce the amount of heat that brighter ice would otherwise reflect away from the planet. The study did not try to measure that effect, but it could be an important subject for future research, he added.
    The guano effect lingers even after the birds leave the breeding areas. A month after they were gone, Boyer said ammonia levels in the air were still 1,000 times higher than the baseline.
    “The emission of ammonia is a temperature-dependent process, so it’s likely that once wintertime comes, the ammonia gets frozen in,” he said. “But even before the penguins come back, I would hypothesize that as the temperature warms, the guano starts to emit ammonia again. And the penguins move all around the coast, so it’s possible they’re just fertilizing an entire coast with ammonia.”

    ARSTECHNICA.COM
    Penguin poop may help preserve Antarctic climate
Without ammonia, sulfuric acid can form new particles, but with ammonia, it’s 1,000 times faster, and sometimes even more, so we’re talking up to four orders of magnitude faster because of the guano.” This is important in Antarctica specifically because there are not many other sources of particles, such as pollution or emissions from trees, he added. “So the strength of the source matters in terms of its climate effect over time,” he said. “And if the source changes, it’s going to change the climate effect.” It will take more research to determine if penguin guano has a net cooling effect on the climate. But in general, he said, if the particles transport out to sea and contribute to cloud formation, they will have a cooling effect. “What’s also interesting,” he said, “is if the clouds are over ice surfaces, it could actually lead to warming because the clouds are less reflective than the ice beneath.” In that case, the clouds could actually reduce the amount of heat that brighter ice would otherwise reflect away from the planet. The study did not try to measure that effect, but it could be an important subject for future research, he added. The guano effect lingers even after the birds leave the breeding areas. A month after they were gone, Boyer said ammonia levels in the air were still 1,000 times higher than the baseline. “The emission of ammonia is a temperature-dependent process, so it’s likely that once wintertime comes, the ammonia gets frozen in,” he said. “But even before the penguins come back, I would hypothesize that as the temperature warms, the guano starts to emit ammonia again. And the penguins move all around the coast, so it’s possible they’re just fertilizing an entire coast with ammonia.” Bob Berwyn, Inside Climate News 4 Comments
  • Our Solar System May Have a New Dwarf Planet Orbiting Even Farther Than Pluto

    So many unexplored secrets still lie at the outskirts of our solar system, where a potential candidate for a new dwarf planet lies. Although space beyond Neptune was thought to be mostly devoid of large objects, researchers are beginning to rethink this assumption after coming across an extraordinary trans-Neptunian object, called 2017 OF201. According to a recently published arXiv pre-print, 2017 OF201 could soon join the ranks of Pluto and other dwarf planets in the solar system. The behavior of its extremely large orbit has piqued the interest of astronomers, who now believe there may be plenty more objects just like it drifting through this remote part of space.
    Where Are Dwarf Planets Located?
    Composite image showing the five dwarf planets recognized by the International Astronomical Union, plus the newly discovered trans-Neptunian object 2017 OF201. (Image courtesy of NASA/JPL-Caltech; Sihao Cheng et al.)
    The Kuiper Belt, a region of the solar system past Neptune’s orbit, is likely home to hundreds of thousands — if not millions — of icy objects that vary in shape and size. Over 2,000 trans-Neptunian objects (TNOs) have been observed here, but scientists believe that this figure doesn’t even scratch the surface of this area’s extraterrestrial riches. The most famous resident of the Kuiper Belt, without a doubt, is Pluto. Other dwarf planets have also been found in the area, such as Eris, Haumea, and Makemake.
    But why do Pluto and its fellow dwarf planets not enjoy the same status as the solar system’s eight regular planets? To officially be considered a planet, an object must follow three rules set by the International Astronomical Union in 2006: It must orbit a host star (like the Sun), be mostly round, and be large enough to clear away objects of a similar size near its orbit (in other words, it has to be “gravitationally dominant”). Dwarf planets like Pluto follow the first two rules, but they cannot “clear the neighborhood” near their orbits.
    The Extreme Orbit of 2017 OF201
    Scientists have been eager to uncover more TNOs in the Kuiper Belt, which is what led to the discovery of 2017 OF201. The object was identified based on bright spots in an astronomical image database from the Victor M. Blanco Telescope and Canada-France-Hawaii Telescope. Assessing exposures taken over seven years, the researchers were led to 2017 OF201, which is one of the most distant visible objects in our solar system. The most significant aspect of 2017 OF201 appears to be its extreme orbit. “The object’s aphelion — the farthest point on the orbit from the Sun — is more than 1600 times that of the Earth’s orbit,” said author Sihao Cheng of the Institute for Advanced Study in Princeton, NJ, in a press statement. “Meanwhile, its perihelion — the closest point on its orbit to the Sun — is 44.5 times that of the Earth’s orbit, similar to Pluto’s orbit.”
    The researchers estimate the object’s diameter to be 700 km (about 435 miles), “which would make it the second largest known object in a wide orbit,” according to the statement. Pluto’s diameter, for reference, is 2,377 km (about 1,477 miles).
    Mysteries of the Kuiper Belt
    The object’s orbit, which takes around 25,000 years to complete, may be the result of an encounter with a larger planet that sent it far into space. The object also doesn’t show signs of clustering in a specific orientation, something commonly observed with other TNOs. Clustering has often been referenced as indirect evidence for the existence of a hypothetical ninth planet in the outer solar system (called Planet Nine or Planet X). But since 2017 OF201 doesn’t follow the same pattern as other TNOs, it may stand against this hypothesis.
    The researchers hope to gather more details on 2017 OF201 in future observations. The excitement doesn’t stop at this object, since its discovery hints at an abundance of similar objects in the Kuiper Belt, still waiting to be observed. “2017 OF201 spends only 1 percent of its orbital time close enough to us to be detectable. The presence of this single object suggests that there could be another hundred or so other objects with similar orbit and size; they are just too far away to be detectable now,” said Cheng in a press release. “Even though advances in telescopes have enabled us to explore distant parts of the universe, there is still a great deal to discover about our own solar system.”
    Article Sources
    Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review for scientific accuracy and editorial standards. Review the sources used below for this article:
    Earth and Planetary Astrophysics. Discovery of a dwarf planet candidate in an extremely wide orbit: 2017 OF201
    NASA. Kuiper Belt Facts
    NASA. Dwarf Planets
    Jack Knudson is an assistant editor at Discover with a strong interest in environmental science and history. Before joining Discover in 2023, he studied journalism at the Scripps College of Communication at Ohio University and previously interned at Recycling Today magazine.
  • New dwarf planet spotted at the edge of the solar system

    The orbits of a potential dwarf planet called 2017 OF201 and the dwarf planet Sedna (Image: Tony Dunn)
    A potential dwarf planet has been discovered in the outer reaches of our solar system, orbiting beyond Neptune. Its presence there challenges the existence of a hypothetical body known as Planet 9 or Planet X.
    Sihao Cheng at the Institute for Advanced Study in Princeton, New Jersey, and his colleagues first detected the object, known as 2017 OF201, as a bright spot in an astronomical image database from the Victor M. Blanco Telescope in Chile.
    2017 OF201 is about 700 kilometres across – big enough to qualify as a dwarf planet like Pluto, which has a diameter about three times as big. The object is currently about 90.5 astronomical units (AU) away from us, or roughly 90 times as far from Earth as the sun is.
    Because 2017 OF201’s average orbital distance from the sun is greater than Neptune’s, it is what’s known as a trans-Neptunian object (TNO). It passes through the Kuiper belt, a disc of icy objects in the outer solar system beyond the orbit of Neptune.
    The researchers looked back over 19 observations, taken over seven years by the Canada-France-Hawaii Telescope, to determine that the closest 2017 OF201 gets to the sun – its perihelion – is 44.5 AU, which is similar to Pluto’s orbit. The furthest it gets from the sun is 1600 AU, far beyond the planetary region of the solar system.
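As a sanity check on these numbers: for anything orbiting the sun, Kepler's third law relates the orbital period to the semi-major axis, which is the average of the perihelion and aphelion distances. Plugging in 44.5 AU and the approximate 1600 AU figure reproduces the roughly 25,000-year period the researchers quote (the aphelion is approximate, so the result is too):

```python
# Back-of-the-envelope check of 2017 OF201's orbital period using
# Kepler's third law for solar orbits: P^2 = a^3 (P in years, a in AU).
perihelion_au = 44.5   # closest approach to the sun, from the paper
aphelion_au = 1600.0   # farthest point, approximate ("more than 1600")

semi_major_axis_au = (perihelion_au + aphelion_au) / 2  # = 822.25 AU
period_years = semi_major_axis_au ** 1.5                # Kepler's third law

print(f"a = {semi_major_axis_au:.1f} AU, P = {period_years:,.0f} years")
# Roughly 23,600 years, consistent with the ~25,000-year figure given
# the uncertainty in the aphelion distance.
```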
    This far-flung orbit may be the result of an encounter with a giant planet, which flung the candidate dwarf planet out into its extreme orbit, say the researchers.
    “It’s a really cool discovery,” says Kevin Napier at the University of Michigan. The object would go so far outside the solar system that it could be interacting with other stars in the galaxy just as strongly as it interacts with some of the planets in our solar system, he says.
    The orbits of many extreme TNOs seem to cluster in a specific orientation. This has been interpreted as evidence that the solar system contains a ninth planet hidden in the Oort cloud, a vast cloud of icy rocks encircling the solar system. The idea is that Planet 9’s gravity pushes the TNOs into their specific orbits.
    But the orbit of 2017 OF201 doesn’t fit this pattern. “This object is definitely an outlier to the observed clustering,” says team member Eritas Yang at Princeton University.

    Cheng and his colleagues also ran simulations of the object’s orbit, and how it might interact with Planet 9. “In the one with Planet X, the object gets ejected after a couple of hundred million years, and without Planet X, it stays,” says Napier. “Certainly, this is not evidence in favour of Planet 9.”
    But until there is more data, the case isn’t closed, says Cheng. “I hope Planet 9 still exists, because that’ll be more interesting.”
    The candidate dwarf planet takes roughly 25,000 years to complete an orbit, which means it spends only about 1 per cent of its time close enough to Earth for us to detect it. “These things are really hard to find because they’re faint, and their orbits are so long and skinny that you can only see them when they’re really close to the sun, and then they immediately head right back out and they’re invisible to us again,” says Napier.
    That means there might be hundreds of such objects out there. The Vera C. Rubin Observatory, due to go online later this year, will look deeper into space and will potentially detect many more objects like this, which should tell us more about them – and whether Planet 9 actually exists.
    Reference: arXiv DOI: 10.48550/arXiv.2505.15806
    Topics: planets
    #new #dwarf #planet #spotted #edge
    New dwarf planet spotted at the edge of the solar system
    The orbits of a potential dwarf planet called 2017 OF201 and the dwarf planet SednaTony Dunn A potential dwarf planet has been discovered in the outer reaches of our solar system, orbiting beyond Neptune. Its presence there challenges the existence of a hypothetical body known as Planet 9 or Planet X. Sihao Cheng at the Institute for Advanced Study in Princeton, New Jersey, and his colleagues first detected the object, known as 2017 OF201, as a bright spot in an astronomical image database from the Victor M. Blanco Telescope in Chile. Advertisement 2017 OF201 is about 700 kilometres across – big enough to qualify as a dwarf planet like Pluto, which has a diameter about three times as big. The object is currently about 90.5 astronomical unitsaway from us, or roughly 90 times as far from Earth as the sun is. Because 2017 OF201’s average orbit around the sun is greater than that of Neptune, it is what’s known as a trans-Neptunian object. It passes through the Kuiper belt, a disc of icy objects in the outer solar system beyond the orbit of Neptune. The researchers looked back over 19 observations, taken over seven years by the Canada France Hawaii Telescope, to determine that the closest 2017 OF201 gets to the sun – its perihelion – is 44.5 AU, which is similar to Pluto’s orbit. The furthest it gets from the sun is 1600 AU, way outside the solar system. Voyage across the galaxy and beyond with our space newsletter every month. Sign up to newsletter This far-flung orbit may be the result of an encounter with a giant planet, which ejected the candidate dwarf planet out of the solar system, say the researchers. “It’s a really cool discovery,” says Kevin Napier at the University of Michigan. The object would go so far outside the solar system that it could be interacting with other stars in the galaxy just as strongly as it interacts with some of the planets in our solar system, he says. The orbits of many extreme TNOs seem to cluster in a specific orientation. 
This has been interpreted as evidence that the solar system contains a ninth planet hidden in the Oort cloud, a vast cloud of icy rocks encircling the solar system. The idea is that Planet 9’s gravity pushes the TNOs into their specific orbits. But the orbit of 2017 OF201 doesn’t fit this pattern. “This object is definitely an outlier to the observed clustering,” says team member Eritas Yang at Princeton University. Cheng and his colleagues also modelled simulations of the object’s orbit, and how it might interact with Planet 9. “In the one with Planet X, the object gets ejected after a couple of hundred million years, and without Planet X, it stays,” says Napier. “Certainly, this is not evidence in favour of Planet 9.” But until there is more data, the case isn’t closed, says Cheng. “I hope Planet 9 still exists, because that’ll be more interesting.” The candidate dwarf planet takes roughly 25,000 years to complete an orbit, which means it spends only about 1 per cent of its time close enough to Earth for us to detect it. “These things are really hard to find because they’re faint, and their orbits are so long and skinny that you can only see them when they’re really close to the sun, and then they immediately head right back out and they’re invisible to us again,” says Napier. That means there might be hundreds of such objects out there. The Vera C. Rubin Observatory, due to go online later this year, will look deeper into space and will potentially detect many more objects like this, which should tell us more about them – and whether Planet 9 actually exists. Reference:arXiv DOI: 10.48550/arXiv.2505.15806 Topics:planets #new #dwarf #planet #spotted #edge
    WWW.NEWSCIENTIST.COM
    New dwarf planet spotted at the edge of the solar system
    The orbits of a potential dwarf planet called 2017 OF201 and the dwarf planet SednaTony Dunn A potential dwarf planet has been discovered in the outer reaches of our solar system, orbiting beyond Neptune. Its presence there challenges the existence of a hypothetical body known as Planet 9 or Planet X. Sihao Cheng at the Institute for Advanced Study in Princeton, New Jersey, and his colleagues first detected the object, known as 2017 OF201, as a bright spot in an astronomical image database from the Victor M. Blanco Telescope in Chile. Advertisement 2017 OF201 is about 700 kilometres across – big enough to qualify as a dwarf planet like Pluto, which has a diameter about three times as big. The object is currently about 90.5 astronomical units (AU) away from us, or roughly 90 times as far from Earth as the sun is. Because 2017 OF201’s average orbit around the sun is greater than that of Neptune, it is what’s known as a trans-Neptunian object (TNO). It passes through the Kuiper belt, a disc of icy objects in the outer solar system beyond the orbit of Neptune. The researchers looked back over 19 observations, taken over seven years by the Canada France Hawaii Telescope, to determine that the closest 2017 OF201 gets to the sun – its perihelion – is 44.5 AU, which is similar to Pluto’s orbit. The furthest it gets from the sun is 1600 AU, way outside the solar system. Voyage across the galaxy and beyond with our space newsletter every month. Sign up to newsletter This far-flung orbit may be the result of an encounter with a giant planet, which ejected the candidate dwarf planet out of the solar system, say the researchers. “It’s a really cool discovery,” says Kevin Napier at the University of Michigan. The object would go so far outside the solar system that it could be interacting with other stars in the galaxy just as strongly as it interacts with some of the planets in our solar system, he says. 
The orbits of many extreme TNOs seem to cluster in a specific orientation. This has been interpreted as evidence that the solar system contains a ninth planet hidden in the Oort cloud, a vast cloud of icy rocks encircling the solar system. The idea is that Planet 9’s gravity pushes the TNOs into their specific orbits. But the orbit of 2017 OF201 doesn’t fit this pattern. “This object is definitely an outlier to the observed clustering,” says team member Eritas Yang at Princeton University. Cheng and his colleagues also modelled simulations of the object’s orbit, and how it might interact with Planet 9. “In the one with Planet X, the object gets ejected after a couple of hundred million years, and without Planet X, it stays,” says Napier. “Certainly, this is not evidence in favour of Planet 9.” But until there is more data, the case isn’t closed, says Cheng. “I hope Planet 9 still exists, because that’ll be more interesting.” The candidate dwarf planet takes roughly 25,000 years to complete an orbit, which means it spends only about 1 per cent of its time close enough to Earth for us to detect it. “These things are really hard to find because they’re faint, and their orbits are so long and skinny that you can only see them when they’re really close to the sun, and then they immediately head right back out and they’re invisible to us again,” says Napier. That means there might be hundreds of such objects out there. The Vera C. Rubin Observatory, due to go online later this year, will look deeper into space and will potentially detect many more objects like this, which should tell us more about them – and whether Planet 9 actually exists. Reference:arXiv DOI: 10.48550/arXiv.2505.15806 Topics:planets
  • Top Machine Learning Jobs and How to Prepare For Them

    These days, job titles like data scientist, machine learning engineer, and AI engineer are everywhere — and if you're anything like me, it can be hard to understand what each of them actually does if you are not working within the field.

    And then there are titles that sound even more confusing — like quantum blockchain LLM robotic engineer (okay, I made that one up, but you get the point).

    The job market is full of buzzwords and overlapping roles, which can make it difficult to know where to start if you’re interested in a career in machine learning.

    In this article, I’ll break down the top machine learning roles and explain what each one involves — plus what you need to do to prepare for them.

    Data Scientist

    What is it?

    A data scientist is the most well-known role, but has the largest range of job responsibilities.

    In general, there are two types of data scientists:

    Analytics and experiment-focused.

    Machine learning and modelling focused.

    The former includes things like running A/B tests, conducting deep dives to determine where the business could improve, and suggesting improvements to machine learning models by identifying their blind spots. A lot of this work is called exploratory data analysis, or EDA for short.
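    The A/B-testing side of this work can be made concrete with a small example. Below is a minimal sketch of a two-proportion z-test on made-up conversion counts (pure Python, standard library only; the numbers and function name are mine, not from the article):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert better than A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value from the standard normal CDF
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Made-up numbers: 120/1000 conversions on variant A, 150/1000 on B
z, p = two_proportion_z_test(120, 1000, 150, 1000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

    In practice you would reach for `scipy.stats` or `statsmodels` rather than hand-rolling this, but the hand-rolled version shows the statistics knowledge the role actually leans on.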

    The latter is mainly about building PoC (proof-of-concept) machine learning models and decision systems that benefit the business, then working with software and machine learning engineers to deploy those models to production and monitor their performance.

    The machine learning algorithms involved will typically be on the simpler side, regular supervised and unsupervised learning models like:

    XGBoost

    Linear and logistic regression

    Random forest

    K-means clustering
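    To make one item on this list concrete, here is a toy, from-scratch sketch of k-means clustering on 1-D data (illustrative only; in a real project you would use scikit-learn's `KMeans` rather than hand-rolling this):

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Tiny k-means on 1-D data: assign each point to the nearest
    centre, then recompute each centre as its cluster's mean."""
    random.seed(seed)
    centres = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in points:
            nearest = min(range(k), key=lambda i: abs(x - centres[i]))
            clusters[nearest].append(x)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# Two obvious clusters, around 0 and around 10
data = [0.1, 0.3, -0.2, 0.0, 9.8, 10.1, 10.3, 9.9]
print(kmeans_1d(data, k=2))
```

    The other models on the list (XGBoost, regressions, random forest) follow the same fit-and-predict workflow, just with labelled data.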

    I was a data scientist at my old company, but I mainly built machine learning models and didn’t run many A/B tests or experiments. That work was carried out by data analysts and product analysts.

    However, at my current company, data scientists don’t build machine learning models but mainly do deep-dive analysis and measure experiments. Model development is mainly done by machine learning engineers.

    It all comes down to the company, so it is really important that you read the job description to make sure it’s the right job for you.

    What do they use?

    As a data scientist, these are generally the things you need to know:

    Python and SQL

    Git and GitHub

    Command line (Bash and Zsh)

    Statistics and maths knowledge

    Basic machine learning skills

    A bit of cloud systems (AWS, Azure, GCP)

    I have roadmaps on becoming a data scientist that you can check out below if this role interests you.

    How I’d Become a Data Scientist (If I Had to Start Over)

    Machine Learning Engineer

    What is it?

    As the title suggests, a machine learning engineer is all about building machine learning models and deploying them into production systems. 

    It originally came from software engineering, but is now its own job/title.

    The significant distinction between machine learning engineers and data scientists is that machine learning engineers deploy the algorithms.

    As leading AI/ML practitioner Chip Huyen puts it:

    The goal of data science is to generate business insights, whereas the goal of ML engineering is to turn data into products.

    You will find that data scientists often come from a strong maths, statistics, or economics background, and machine learning engineers come more from science and engineering backgrounds.

    However, there is a big overlap between the two roles, and some companies may bundle the data scientist and machine learning engineer positions into a single job, frequently under the data scientist title.

    The machine learning engineer job is typically found in more established tech companies; however, it is slowly becoming more popular over time.

    There also exist further specialisms within the machine learning engineer role, like:

    ML platform engineer

    ML hardware engineer

    ML solutions architect

    Don’t worry about these if you are a beginner, as they are pretty niche and only relevant after a few years of experience in the field. I just wanted to add these so you know the various options out there.

    What do they use?

    The tech stack is quite similar for machine learning engineers as for data scientists, but has more software engineering elements:

    Python and SQL, though some companies may require other languages; my current role, for example, requires Rust.

    Git and GitHub

    Bash and Zsh

    AWS, Azure or GCP

    Software engineering fundamentals like CI/CD, MLOps and Docker.

    Excellent machine learning knowledge, ideally a specialism in an area.

    AI Engineer

    What is it?

    This is a new title that cropped up with all the AI hype going on now, and to be honest, I think it’s an odd title and not really needed. Often, a machine learning engineer will do the role of an AI engineer at most companies.

    Most AI engineer roles are actually about GenAI, not AI as a whole. This distinction normally makes no sense to people outside of the industry. 

    However, AI encompasses almost any decision-making algorithm and is larger than the machine learning field.

    Image by author.

    The current definition of an AI engineer is someone who works mainly with LLM and GenAI tools to help the business.

    They don’t necessarily develop the underlying algorithms from scratch, mainly because it’s hard to do unless you’re in a research lab, and many of the top models are open-sourced, so you don’t need to reinvent the wheel.

    Instead, they focus on adapting and building the product first, then worrying about model fine-tuning afterwards.

    It is a lot closer to traditional software engineering than the machine learning engineer role as it currently stands. Although many machine learning engineers will operate as AI engineers, the job is new and not fully fleshed out yet.

    What do they use?

    This role is evolving quite a bit, but in general, you need good knowledge of all the latest GenAI and LLM trends:

    Solid software engineering skills

    Python, SQL and backend languages like Java or Go are useful

    CI/CD

    Git

    LLMs and transformers

    RAG

    Prompt engineering

    Foundation models

    Fine-tuning
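    As a rough illustration of what RAG (retrieval-augmented generation) involves, here is a toy pipeline sketch: score stored documents against a question with bag-of-words cosine similarity, then paste the best match into a prompt. The documents, question, and function names are invented for illustration, and the actual LLM call is left as a stub:

```python
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words: lowercase tokens -> counts, punctuation stripped."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, docs):
    """Return the stored document most similar to the question."""
    q = bow(question)
    return max(docs, key=lambda d: cosine(q, bow(d)))

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Standard shipping takes 3 to 5 business days.",
]
question = "What is the refund policy for returns?"
context = retrieve(question, docs)

# In a real system this prompt would be sent to an LLM API,
# and retrieval would use embeddings plus a vector database.
prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
print(prompt)
```

    Production RAG swaps the bag-of-words scoring for embedding models and a vector store, but the retrieve-then-prompt shape is the same.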

    I also recommend you check out DataCamp’s Associate AI Engineer for Data Scientists track, which will also set you up nicely for a career as a data scientist. This is linked in the description below.

    Research Scientist/Engineer

    What is it?

    The previous roles were mainly industry positions, but these next two will be research-based.

    Industry roles are mainly associated with business and are all about generating business value. Whether you use linear regression or a transformer model, what matters is the impact, not necessarily the method.

    Research aims to expand current knowledge and capabilities, both theoretically and practically. This work revolves around the scientific method and deep experiments in a niche field.

    The line between research and industry is blurry, and the two often overlap. For example, a lot of the top research labs are actually big tech companies:

    Meta Research

    Google AI

    Microsoft AI

    These companies initially started to solve business problems, but now have dedicated research sectors, so you may work on industry and research problems. Where one begins and the other ends is not always clear.

    If you are interested in exploring the differences between research and industry more deeply, I recommend reading the first lecture of Stanford’s CS 329S: Understanding machine learning production.

    In general, there are more industry positions than research, as only the large companies can afford the data and computing costs.

    Anyway, as a research engineer or scientist, you will essentially be working on cutting-edge research, pushing the boundaries of machine learning knowledge.

    There is a slight distinction between the two jobs: as a research scientist, you will need a PhD, but this is not necessarily true for a research engineer.

    A research engineer typically implements the theoretical details and ideas of the research scientist. This role is usually found at large, established research companies, though in most situations the research engineer and scientist jobs are much the same.

    Companies may offer the research scientist title as it gives you more “clout” and makes you more likely to take the job.

    What do they use?

    This one is similar to machine learning engineering, but the depth of knowledge and qualifications is often greater.

    Python and SQL

    Git and GitHub

    Bash and Zsh

    AWS, Azure or GCP

    Software engineering fundamentals like CI/CD, MLOps and Docker.

    Excellent machine learning knowledge and a specialism in a cutting-edge area like computer vision, reinforcement learning, LLM, etc.

    PhD or at least a master’s in a relevant discipline.

    Research experience.

    This article has just scratched the surface of machine learning roles, and there are many more niche jobs and specialisms within these four or five I mentioned.

    I always recommend starting your career by getting your foot in the door and then pivoting to the direction you want to go. This strategy is much more effective than tunnel vision for only one role.

    Another thing!

    I offer 1:1 coaching calls where we can chat about whatever you need — whether it’s projects, career advice, or just figuring out your next step. I’m here to help you move forward!

    1:1 Mentoring Call with Egor Howell: career guidance, job advice, project help, resume review (topmate.io)

    Connect with me

    YouTube

    LinkedIn

    Instagram

    Website

    The post Top Machine Learning Jobs and How to Prepare For Them appeared first on Towards Data Science.
  • Unexpected clustering pattern in dwarf galaxies challenges formation models

    Nature, Published online: 21 May 2025; doi: 10.1038/s41586-025-08965-5

    Unexpected large-scale clustering of isolated, diffuse and blue dwarf galaxies, comparable to that seen for massive galaxy groups, challenges current models of cosmology and galaxy evolution.