• Humpback Whales Are Approaching People to Blow Rings. What Are They Trying to Say?

    A bubble ring created by a humpback whale named Thorn. Image © Dan Knaub, The Video Company
    Humpback Whales Are Approaching People to Blow Rings. What Are They Trying to Say?
    June 13, 2025
    Nature, Social Issues
    Grace Ebert

    After the “orca uprising” captivated anti-capitalists around the world in 2023, scientists are intrigued by another form of marine mammal communication.
    A study released this month by the SETI Institute and the University of California at Davis dives into a newly documented phenomenon of humpback whales blowing bubble rings while interacting with humans. In contrast to the orcas’ aggressive behavior, researchers say the humpbacks appear to be friendly, relaxed, and even curious.
    Bubbles aren’t new to these aquatic giants, which typically release various shapes when corralling prey and courting mates. This study follows 12 distinct incidents involving 11 whales producing 39 rings; most of the whales approached boats near Hawaii, the Dominican Republic, Mo’orea, and the U.S. Atlantic coast on their own.
    The impact of this research reaches far beyond the oceans, though. Deciphering these non-verbal messages could aid in potential extraterrestrial communication, as they can help to “develop filters that aid in parsing cosmic signals for signs of extraterrestrial life,” a statement says.
    “Because of current limitations on technology, an important assumption of the search for extraterrestrial intelligence is that extraterrestrial intelligence and life will be interested in making contact and so target human receivers,” said Dr. Laurance Doyle, a SETI Institute scientist who co-wrote the paper. “This important assumption is certainly supported by the independent evolution of curious behavior in humpback whales.” (via PetaPixel)
    A composite image of at least one bubble ring from each interaction
  • The Sonos Era 300 is 20 percent off in this home speaker sale

    A number of Sonos speakers are on sale right now at Sonos direct and Amazon. This includes the well-regarded Era 300 smart speaker, which is on sale for $359. This particular model is one of Sonos' newest, and it has rarely gone on sale in the past.
    We enjoyed the Era 300 enough to give it a score of 80 in our review. It has excellent sound quality and offers a premium experience that far surpasses other products in the company's lineup, even the Era 100. This is also true when compared to rival speakers like Apple's HomePod.

    It's simple to set up and offers the company's proprietary Trueplay tuning system. This feature optimizes the sound of the speaker to the unique acoustics of a room by leveraging an internal microphone. It measures how sound reflects off surfaces and adjusts the EQ to match. It's pretty nifty.
    As for connectivity, it can pair with another Era 300 speaker for a true stereo experience. It also includes a Bluetooth receiver and line-in options. Of course, the speaker integrates with just about every streaming music service. The built-in mic also allows for voice assistant control, but only with Siri and Alexa. Google Assistant is left out of the party.
    This speaker goes all-in on spatial audio, and the results are mixed. Sometimes it's sublime and sometimes it's kind of eh. This is more of a dig on the technology itself. It has serious potential but is still experiencing growing pains. The only real downside of this speaker is the exorbitant asking price, which has been slightly alleviated by this sale.
    As previously mentioned, other Sonos products are available at a discount. This includes the Sonos Beam Gen 2 soundbar, which is 26 percent off at $369. These deals are available via Sonos itself. There's also an ongoing sale on portable speakers.

    Follow @EngadgetDeals on X for the latest tech deals and buying advice. This article originally appeared on Engadget at https://www.engadget.com/deals/the-sonos-era-300-is-20-percent-off-in-this-home-speaker-sale-150857725.html?src=rss
  • A Coding Guide to Building a Scalable Multi-Agent Communication System Using Agent Communication Protocol (ACP)

    In this tutorial, we implement the Agent Communication Protocol (ACP) through building a flexible, ACP-compliant messaging system in Python, leveraging Google’s Gemini API for natural language processing. Beginning with the installation and configuration of the google-generativeai library, the tutorial introduces core abstractions, message types, performatives, and the ACPMessage data class, which standardizes inter-agent communication. By defining ACPAgent and ACPMessageBroker classes, the guide demonstrates how to create, send, route, and process structured messages among multiple autonomous agents. Through clear code examples, users learn to implement querying, requesting actions, and broadcasting information, while maintaining conversation threads, acknowledgments, and error handling.
    import google.generativeai as genai
    import json
    import time
    import uuid
    from enum import Enum
    from typing import Dict, List, Any, Optional
    from dataclasses import dataclass, asdict

    GEMINI_API_KEY = "Use Your Gemini API Key"
    genai.configure(api_key=GEMINI_API_KEY)

    We import essential Python modules, ranging from JSON handling and timing to unique identifier generation and type annotations, to support a structured ACP implementation. The script then retrieves the user’s Gemini API key placeholder and configures the google-generativeai client for subsequent calls to the Gemini language model.
    class ACPMessageType(Enum):
        """Standard ACP message types"""
        REQUEST = "request"
        RESPONSE = "response"
        INFORM = "inform"
        QUERY = "query"
        SUBSCRIBE = "subscribe"
        UNSUBSCRIBE = "unsubscribe"
        ERROR = "error"
        ACK = "acknowledge"
    The ACPMessageType enumeration defines the core message categories used in the Agent Communication Protocol, including requests, responses, informational broadcasts, queries, and control actions like subscription management, error signaling, and acknowledgments. By centralizing these message types, the protocol ensures consistent handling and routing of inter-agent communications throughout the system.
    class ACPPerformative(Enum):
        """ACP speech acts (performatives)"""
        TELL = "tell"
        ASK = "ask"
        REPLY = "reply"
        REQUEST_ACTION = "request-action"
        AGREE = "agree"
        REFUSE = "refuse"
        PROPOSE = "propose"
        ACCEPT = "accept"
        REJECT = "reject"
    The ACPPerformative enumeration captures the variety of speech acts agents can use when interacting under the ACP framework, mapping high-level intentions, such as making requests, posing questions, giving commands, or negotiating agreements, onto standardized labels. This clear taxonomy enables agents to interpret and respond to messages in contextually appropriate ways, ensuring robust and semantically rich communication.
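    Because both vocabularies are plain Enum members, agents compare the strings carried in a message against each member's .value. The short snippet below is an illustrative sketch we added (it is not part of the original notebook), showing one hypothetical way to pair the two taxonomies by mapping an incoming performative to the message type a handler would typically answer with.

    # Illustrative sketch only: a hypothetical lookup pairing incoming performatives
    # with the ACP message type usually used for the reply. Assumes the ACPMessageType
    # and ACPPerformative enums defined above.
    REPLY_TYPE_FOR_PERFORMATIVE = {
        ACPPerformative.ASK.value: ACPMessageType.RESPONSE.value,
        ACPPerformative.REQUEST_ACTION.value: ACPMessageType.RESPONSE.value,
        ACPPerformative.TELL.value: ACPMessageType.ACK.value,
    }

    incoming = ACPPerformative.ASK.value
    print(REPLY_TYPE_FOR_PERFORMATIVE.get(incoming, ACPMessageType.ERROR.value))  # -> "response"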

    @dataclass
    class ACPMessage:
        """Agent Communication Protocol Message Structure"""
        message_id: str
        sender: str
        receiver: str
        performative: str
        content: Dict[str, Any]
        protocol: str = "ACP-1.0"
        conversation_id: str = None
        reply_to: str = None
        language: str = "english"
        encoding: str = "json"
        timestamp: float = None

        def __post_init__(self):
            if self.timestamp is None:
                self.timestamp = time.time()
            if self.conversation_id is None:
                self.conversation_id = str(uuid.uuid4())

        def to_acp_format(self) -> str:
            """Convert to standard ACP message format"""
            acp_msg = {
                "message-id": self.message_id,
                "sender": self.sender,
                "receiver": self.receiver,
                "performative": self.performative,
                "content": self.content,
                "protocol": self.protocol,
                "conversation-id": self.conversation_id,
                "reply-to": self.reply_to,
                "language": self.language,
                "encoding": self.encoding,
                "timestamp": self.timestamp
            }
            return json.dumps(acp_msg, indent=2)

        @classmethod
        def from_acp_format(cls, acp_string: str) -> 'ACPMessage':
            """Parse ACP message from string format"""
            data = json.loads(acp_string)
            return cls(
                message_id=data["message-id"],
                sender=data["sender"],
                receiver=data["receiver"],
                performative=data["performative"],
                content=data["content"],
                protocol=data.get("protocol", "ACP-1.0"),
                conversation_id=data.get("conversation-id"),
                reply_to=data.get("reply-to"),
                language=data.get("language", "english"),
                encoding=data.get("encoding", "json"),
                timestamp=data.get("timestamp", time.time())
            )

    The ACPMessage data class encapsulates all the fields required for a structured ACP exchange, including identifiers, participants, performative, payload, and metadata such as protocol version, language, and timestamps. Its __post_init__ method auto-populates missing timestamp and conversation_id values, ensuring every message is uniquely tracked. Utility methods to_acp_format and from_acp_format handle serialization to and from the standardized JSON representation for seamless transmission and parsing.
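    As a quick sanity check of the serialization path just described, the following usage sketch (our addition, not from the original notebook) round-trips a message through to_acp_format and from_acp_format; the agent IDs and question are placeholders.

    # Usage sketch assuming the ACPMessage dataclass and ACPPerformative enum above.
    msg = ACPMessage(
        message_id=str(uuid.uuid4()),
        sender="agent-001",
        receiver="agent-002",
        performative=ACPPerformative.ASK.value,
        content={"question": "Is the dataset ready?", "query-type": "yes-no"},
    )

    wire = msg.to_acp_format()                 # JSON string using the ACP field names
    parsed = ACPMessage.from_acp_format(wire)  # reconstructs an equivalent ACPMessage
    assert parsed.message_id == msg.message_id and parsed.content == msg.content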
    class ACPAgent:
        """Agent implementing Agent Communication Protocol"""

        def __init__(self, agent_id: str, name: str, capabilities: List[str]):
            self.agent_id = agent_id
            self.name = name
            self.capabilities = capabilities
            self.model = genai.GenerativeModel("gemini-1.5-flash")
            self.message_queue: List[ACPMessage] = []
            self.subscriptions: Dict[str, List[str]] = {}
            self.conversations: Dict[str, List[ACPMessage]] = {}

        def create_message(self, receiver: str, performative: str, content: Dict[str, Any],
                           conversation_id: str = None, reply_to: str = None) -> ACPMessage:
            """Create a new ACP-compliant message"""
            return ACPMessage(
                message_id=str(uuid.uuid4()),
                sender=self.agent_id,
                receiver=receiver,
                performative=performative,
                content=content,
                conversation_id=conversation_id,
                reply_to=reply_to
            )

        def send_inform(self, receiver: str, fact: str, data: Any = None) -> ACPMessage:
            """Send an INFORM message (telling someone a fact)"""
            content = {"fact": fact, "data": data}
            return self.create_message(receiver, ACPPerformative.TELL.value, content)

        def send_query(self, receiver: str, question: str, query_type: str = "yes-no") -> ACPMessage:
            """Send a QUERY message (asking for information)"""
            content = {"question": question, "query-type": query_type}
            return self.create_message(receiver, ACPPerformative.ASK.value, content)

        def send_request(self, receiver: str, action: str, parameters: Dict = None) -> ACPMessage:
            """Send a REQUEST message (asking someone to perform an action)"""
            content = {"action": action, "parameters": parameters or {}}
            return self.create_message(receiver, ACPPerformative.REQUEST_ACTION.value, content)

        def send_reply(self, original_msg: ACPMessage, response_data: Any) -> ACPMessage:
            """Send a REPLY message in response to another message"""
            content = {"response": response_data, "original-question": original_msg.content}
            return self.create_message(
                original_msg.sender,
                ACPPerformative.REPLY.value,
                content,
                conversation_id=original_msg.conversation_id,
                reply_to=original_msg.message_id
            )

        def process_message(self, message: ACPMessage) -> Optional[ACPMessage]:
            """Process incoming ACP message and generate appropriate response"""
            self.message_queue.append(message)

            conv_id = message.conversation_id
            if conv_id not in self.conversations:
                self.conversations[conv_id] = []
            self.conversations[conv_id].append(message)

            if message.performative == ACPPerformative.ASK.value:
                return self._handle_query(message)
            elif message.performative == ACPPerformative.REQUEST_ACTION.value:
                return self._handle_request(message)
            elif message.performative == ACPPerformative.TELL.value:
                return self._handle_inform(message)

            return None

        def _handle_query(self, message: ACPMessage) -> ACPMessage:
            """Handle incoming query messages"""
            question = message.content.get("question", "")
            prompt = f"As agent {self.name} with capabilities {self.capabilities}, answer: {question}"
            try:
                response = self.model.generate_content(prompt)
                answer = response.text.strip()
            except:
                answer = "Unable to process query at this time"

            return self.send_reply(message, {"answer": answer, "confidence": 0.8})

        def _handle_request(self, message: ACPMessage) -> ACPMessage:
            """Handle incoming action requests"""
            action = message.content.get("action", "")
            parameters = message.content.get("parameters", {})

            if any(capability in action.lower() for capability in self.capabilities):
                result = f"Executing {action} with parameters {parameters}"
                status = "agreed"
            else:
                result = f"Cannot perform {action} - not in my capabilities"
                status = "refused"

            return self.send_reply(message, {"status": status, "result": result})

        def _handle_inform(self, message: ACPMessage) -> Optional[ACPMessage]:
            """Handle incoming information messages"""
            fact = message.content.get("fact", "")
            print(f"[{self.name}] Received information: {fact}")

            ack_content = {"status": "received", "fact": fact}
            return self.create_message(message.sender, "acknowledge", ack_content,
                                       conversation_id=message.conversation_id)

    The ACPAgent class encapsulates an autonomous entity capable of sending, receiving, and processing ACP-compliant messages using Gemini’s language model. It manages its own message queue, conversation history, and subscriptions, and provides helper methods (send_inform, send_query, send_request, send_reply) to construct correctly formatted ACPMessage instances. Incoming messages are routed through process_message, which delegates to specialized handlers for queries, action requests, and informational messages.
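    A single agent can also be exercised without a broker. The sketch below is an assumed usage pattern we added (not part of the original notebook): it hands an INFORM message directly to process_message and inspects the acknowledgment it returns. We pick the inform path because, unlike _handle_query, it does not call the Gemini API; the agent IDs and names are hypothetical.

    # Usage sketch assuming the ACPAgent class above; agent IDs/names are placeholders.
    notifier = ACPAgent("agent-010", "Notifier", ["communication"])
    archivist = ACPAgent("agent-011", "Archivist", ["research"])

    inform = notifier.send_inform("agent-011", "Nightly backup completed", data={"files": 1204})
    ack = archivist.process_message(inform)   # direct hand-off, no broker involved

    print(ack.performative, ack.content["status"])  # -> acknowledge received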
    class ACPMessageBroker:
        """Message broker implementing ACP routing and delivery"""

        def __init__(self):
            self.agents: Dict[str, ACPAgent] = {}
            self.message_log: List[ACPMessage] = []
            self.routing_table: Dict[str, str] = {}

        def register_agent(self, agent: ACPAgent):
            """Register an agent with the message broker"""
            self.agents[agent.agent_id] = agent
            self.routing_table[agent.agent_id] = "local"
            print(f"✓ Registered agent: {agent.name} ({agent.agent_id})")

        def route_message(self, message: ACPMessage) -> bool:
            """Route ACP message to appropriate recipient"""
            if message.receiver not in self.agents:
                print(f"✗ Receiver {message.receiver} not found")
                return False

            print(f"\n📨 ACP MESSAGE ROUTING:")
            print(f"From: {message.sender} → To: {message.receiver}")
            print(f"Performative: {message.performative}")
            print(f"Content: {json.dumps(message.content, indent=2)}")

            receiver_agent = self.agents[message.receiver]
            response = receiver_agent.process_message(message)
            self.message_log.append(message)

            if response:
                print(f"\n📤 GENERATED RESPONSE:")
                print(f"From: {response.sender} → To: {response.receiver}")
                print(f"Content: {json.dumps(response.content, indent=2)}")

                if response.receiver in self.agents:
                    self.agents[response.receiver].process_message(response)
                    self.message_log.append(response)

            return True

        def broadcast_message(self, message: ACPMessage, recipients: List[str]):
            """Broadcast message to multiple recipients"""
            for recipient in recipients:
                msg_copy = ACPMessage(
                    message_id=str(uuid.uuid4()),
                    sender=message.sender,
                    receiver=recipient,
                    performative=message.performative,
                    content=message.content.copy(),
                    conversation_id=message.conversation_id
                )
                self.route_message(msg_copy)

    The ACPMessageBroker serves as the central router for ACP messages, maintaining a registry of agents and a message log. It provides methods to register agents, to deliver individual messages via route_message, which handles lookup, logging, and response chaining, and to send the same message to multiple recipients with broadcast_message.
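    The demonstration below only exercises route_message, so here is a brief, hypothetical sketch of the broadcast path: one INFORM fanned out to every other registered agent. The agents and message text are placeholders that mirror the ones used later in demonstrate_acp.

    # Hypothetical broadcast usage built on the classes above.
    broker = ACPMessageBroker()
    researcher = ACPAgent("agent-001", "Dr. Research", ["research"])
    assistant = ACPAgent("agent-002", "AI Assistant", ["information"])
    calculator = ACPAgent("agent-003", "MathBot", ["calculation"])
    for agent in (researcher, assistant, calculator):
        broker.register_agent(agent)

    # broadcast_message stamps a fresh message_id on each copy while reusing the
    # sender's conversation_id, so the fan-out stays grouped as one conversation.
    announcement = researcher.send_inform("broadcast", "Lab meeting moved to 3 PM")
    recipients = [aid for aid in broker.agents if aid != researcher.agent_id]
    broker.broadcast_message(announcement, recipients)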
    def demonstrate_acp():
        """Comprehensive demonstration of Agent Communication Protocol"""

        print("🤖 AGENT COMMUNICATION PROTOCOL (ACP) DEMONSTRATION")
        print("=" * 60)

        broker = ACPMessageBroker()

        researcher = ACPAgent("agent-001", "Dr. Research", ["analysis", "research", "data-processing"])
        assistant = ACPAgent("agent-002", "AI Assistant", ["information", "scheduling", "communication"])
        calculator = ACPAgent("agent-003", "MathBot", ["calculation", "mathematics", "computation"])

        broker.register_agent(researcher)
        broker.register_agent(assistant)
        broker.register_agent(calculator)

        print(f"\n📋 REGISTERED AGENTS:")
        for agent_id, agent in broker.agents.items():
            print(f"  • {agent.name} ({agent_id}): {', '.join(agent.capabilities)}")

        print(f"\n🔬 SCENARIO 1: Information Query (ASK performative)")
        query_msg = assistant.send_query("agent-001", "What are the key factors in AI research?")
        broker.route_message(query_msg)

        print(f"\n🔢 SCENARIO 2: Action Request (REQUEST-ACTION performative)")
        calc_request = researcher.send_request("agent-003", "calculate", {"expression": "sqrt(144) + 10"})
        broker.route_message(calc_request)

        print(f"\n📢 SCENARIO 3: Information Sharing (TELL performative)")
        info_msg = researcher.send_inform("agent-002", "New research paper published on quantum computing")
        broker.route_message(info_msg)

        print(f"\n📊 PROTOCOL STATISTICS:")
        print(f"  • Total messages processed: {len(broker.message_log)}")
        print(f"  • Active conversations: {len(set(msg.conversation_id for msg in broker.message_log))}")
        print(f"  • Message types used: {len(set(msg.performative for msg in broker.message_log))}")

        print(f"\n📋 SAMPLE ACP MESSAGE FORMAT:")
        sample_msg = assistant.send_query("agent-001", "Sample question for format demonstration")
        print(sample_msg.to_acp_format())
    The demonstrate_acp function orchestrates a hands-on walkthrough of the entire ACP framework: it initializes a broker and three distinct agents (Researcher, AI Assistant, and MathBot), registers them, and illustrates three key interaction scenarios: querying for information, requesting a computation, and sharing an update. After routing each message and handling responses, it prints summary statistics on the message flow and showcases a formatted ACP message, providing users with a clear, end-to-end example of how agents communicate under the protocol.
    def setup_guide():
        print("""
    🚀 GOOGLE COLAB SETUP GUIDE:

    1. Get Gemini API Key: https://makersuite.google.com/app/apikey
    2. Replace: GEMINI_API_KEY = "YOUR_ACTUAL_API_KEY"
    3. Run: demonstrate_acp()

    🔧 ACP PROTOCOL FEATURES:

    • Standardized message format with required fields
    • Speech act performatives (TELL, ASK, REQUEST-ACTION, etc.)
    • Conversation tracking and message threading
    • Error handling and acknowledgments
    • Message routing and delivery confirmation

    📝 EXTEND THE PROTOCOL:
    ```python
    # Create custom agent
    my_agent = ACPAgent("my-001", "CustomBot", ["custom-capability"])
    broker.register_agent(my_agent)

    # Send custom message
    msg = my_agent.send_query("agent-001", "Your question here")
    broker.route_message(msg)
    ```
        """)

    if __name__ == "__main__":
        setup_guide()
        demonstrate_acp()

    Finally, the setup_guide function provides a quick-start reference for running the ACP demo in Google Colab, outlining how to obtain and configure your Gemini API key and invoke the demonstrate_acp routine. It also summarizes key protocol features, such as standardized message formats, performatives, and message routing, and provides a concise code snippet illustrating how to register custom agents and send tailored messages.
    In conclusion, this tutorial implements ACP-based multi-agent systems capable of research, computation, and collaboration tasks. The provided sample scenarios illustrate common use cases (information queries, computational requests, and fact sharing), while the broker ensures reliable message delivery and logging. Readers are encouraged to extend the framework by adding new agent capabilities, integrating domain-specific actions, or incorporating more sophisticated subscription and notification mechanisms.
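    As one possible starting point for the subscription mechanisms mentioned above, the sketch below is purely illustrative and not part of the published notebook: it layers topic subscriptions onto the existing subscriptions dictionary and reuses the broker for notification fan-out. The class and method names are our own.

    # Illustrative extension sketch, assuming the ACPAgent/ACPMessageBroker classes above.
    class SubscribingAgent(ACPAgent):
        def send_subscribe(self, receiver: str, topic: str) -> ACPMessage:
            """Ask another agent to notify us when it learns something about `topic`."""
            return self.create_message(receiver, ACPMessageType.SUBSCRIBE.value, {"topic": topic})

        def process_message(self, message: ACPMessage) -> Optional[ACPMessage]:
            if message.performative == ACPMessageType.SUBSCRIBE.value:
                topic = message.content.get("topic", "")
                self.subscriptions.setdefault(topic, []).append(message.sender)
                return self.create_message(message.sender, ACPMessageType.ACK.value,
                                           {"status": "subscribed", "topic": topic},
                                           conversation_id=message.conversation_id)
            return super().process_message(message)

        def publish(self, broker: ACPMessageBroker, topic: str, fact: str) -> None:
            """Send an INFORM about `topic` to every agent that subscribed to it."""
            for subscriber in self.subscriptions.get(topic, []):
                broker.route_message(self.send_inform(subscriber, fact))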

    Download the Notebook on GitHub. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 95k+ ML SubReddit and Subscribe to our Newsletter.
  • Lian Li Lancool 4 Has Fans in Glass, 217 Infinity, DAN B4, and $45 Case, ft. CEO

    Lian Li Lancool 4 Has Fans in Glass, 217 Infinity, DAN B4, and $45 Case, ft. CEO
    May 29, 2025 | Last Updated: 2025-05-29
    During Computex 2025, Lian Li showed off several new cases that include the Lancool 4, Lancool 217 Infinity, Lian Li O11 Mini V2, and more.

    The Highlights
    • Lian Li’s Lancool 4 case has gigantic holes cut into the glass for intake fans, coupling airflow with glass
    • The company’s Shifting Block PSU has a rotating plug that is geared for back-connect motherboards
    • The company’s Vector 100 cases are very cheap, starting at $45 without fans

    Intro
    We visited Lian Li during Computex, where the company showed off several of its upcoming products. We think the most interesting one is the Lancool 4, which has fans built into its glass front panel. It’s supposed to be a case that will come with 6 fans.
    Editor's note: This was originally published on May 19, 2025 as a video. This content has been adapted to written format for this article and is unchanged from the original publication.

    Credits
    Host: Steve Burke
    Camera, Video Editing: Mike Gaglione, Vitalii Makhnovets
    Writing, Web Editing: Jimmy Thang

    Lancool 4
    The big thing about the Lancool 4 is that it embeds its fans into the front glass panel. This kind of takes us back to about 20 years ago, but instead of glass, the fans were integrated into acrylic and people would take a hole saw and mount their own fans into it. One of the challenges with this design is that it potentially reduces yields, with glass breaking being a concern. This wasn’t something that case companies did before, but Lian Li CEO Jameson Chen tells us the glass manufacturing process has improved dramatically lately. The CEO says that the failure rate used to be abysmal but has gotten down to about 5% to accommodate for the curve of the glass. Drilling holes into the glass brings the failure rate down at least another 5%. To mitigate failure rates, Lian Li found that there needs to be at least a 3cm gap between the holes. Chen revealed that the glass is 4mm thick, which is to bolster its quality.
    In between the fans are plastic pieces which are used to hide the cables. The fans also use Pogo pins, which are integrated into the bottom of the front panel. When we asked Chen what happens if one of the fans dies, the CEO stated that Lian Li would provide a 5-year warranty. He elaborated that the fans are a new design and that they are 10% fiberglass PBT. Chen also revealed that the fans use fluid dynamic bearings. Considering Lian Li is still prototyping the case, the company is still deciding whether to put RGB LEDs on the fan blades or around the fan frames.
    The Lancool 4 has an aluminum top and the rest of the chassis is made of steel, which is 1mm thick. The back glass panel releases via a button. Chen says this was done so that people could open up the glass panel without opening up the bottom side panel. Looking at the design of the rest of the case, it has a lot of similar panels as seen in the Lancool series. It’s got 4 doors, the 2 on the bottom sides are ventilated mesh, and there are fan-mount options on the side.
While we were there, Chen told us that Lian Li is considering shortening the case from the front to the back a little bit. This would bring the fans in closer to the components. This will benefit an air cooler and GPU. In our experience, performance in shorter cases, in a like-for-like scenario, is better. Chen also thinks the aesthetics of the case would improve as well with a tighter design. The downside is that the case would no longer support 420mm radiators and would support 360mm radiators max. The back panel of the Lancool 4 uses glass, which would normally expose the cable management but the case will come with a cable cover. There would be 2 screws to remove it. A downside here is that there’s less cable-management space to work with.The Lancool 4’s PSU mount is towards the back and bottom of the case. The bottom front has a cut out, which provides some space to route cables. Shifting Block PSU Visit our Patreon page to contribute a few dollars toward this website's operationAdditionally, when you purchase through links to retailers on our site, we may earn a small affiliate commission.The company also showed off a new interesting power supply, which has a rotating plug. This creates a shifting layout for the cable connections and allows users to re-orient the PSU. Chen tells us it's designed for top and bottom chamber cases and it’s also geared for back-connect motherboards.  Looking at the PSU, it has its 24-pin connectors off on one side. It also has an optional fan and USB 2.0 hub.Lian Li O11 Mini V2Moving on to the Lian Li O11 Mini V2, it has mesh on one of the side panels that’s popped-out about 3mm, which is to accommodate for ATX PSUs that protrude past the frame of the case. The company designed it this way because it had a very specific width it wanted to tackle to avoid the case looking too chunky. Currently, the volume of the case is 45 liters, which includes the feet, but does not include the protruding mesh side panel. The case we saw used bottom intake fans, which are slanted at 25 degrees and the only place for air access is underneath the back panel side. This is coupled with a tiny dust filter on the bottom, which slides out through the back. In terms of other fan mounts, the case has 2 on the side, 1 on the back, and 3 fans can fit in the top. The Lian Li O11 Mini V2 is targeting without fans and with five 120mm fans. Dan Case B4Moving on to Lian Li’s Dan Case B4, we’ve reviewed Dan cases before. The unit we saw at Computex isn’t done yet. We’re told it’s about 60% completed. The case can rotate and has feet and an extension that allows the case to support up to a 360mm radiator. The downside is that about 30% of one of the radiator’s fans would be obstructed by a metal wall. It’s possible that they may perforate this wall to help with cooling. Lian Li is planning to put some mesh or covering on the front panel of the case. The unit we saw was fully exposed and open. What’s interesting about this layout is that the GPU fans are right up against the case’s front intake fans, which is going to be about as cool as you can get for the video card. Most GPUs these days have vertically-oriented fins where the air is going to come out the sides. In this case, air should come out through the punctured side panel but may re-circulate into the back radiator, especially if its fans are intake. If the fans are oriented to be exhaust, that might work better in this case. Lian Li is planning to provide 2x120mm fans along with the case. 
The case can also be rotated to look like the image above. 217 Infinity CaseLian Li also showed off its 217 Infinity case, which is the 217 case with an updated front and leans on some of the changes that the Lancool 4 has made to get its fans into its front glass panel. The tooling is mostly the same. The things in the back of the case are all basically identical. The changes pertain to the front panel, which have some giant holes in them to accommodate 170mm fans that are 30mm deep. The glass panel has the infinity mirror styling. The only other major change pertains to the IO. Some people complained that the original 217 had its IO on the bottom side, so now the company has moved it to the top with an option to have it on the bottom side. The case comes with 2x170mm front fans and a rear fan. The black version of the case is targeting with a white version targeting  Lian Li Vector SeriesAnother Lian Li case we looked at had some “functional gimmicks.” On the back side, it has a cut-out area that looks like a handle, but definitely isn’t. Instead, there’s a very fine mesh filter that’s an area that’s meant to help with intake. This should also help with GPU cooling. The case is targeted at the system integrator market, but will still be sold at retail. Lian Li is targeting for it without any fans, but includes an 8.8-inch IPS screen that carries a 1720x4080 resolution. Pricing may change in the US based on tariffs. Vector 100 and Vector 100 MiniThe main reason we’re bringing these 2 cases up is price. The Vector 100 is targetingand the Vector 100 Mini, which is geared for MicroATX, is targeting. Lian Li Wireless FansLian Li also showed off its new wireless fans, which comes with a battery pack. There’s currently no price on it, but it’s designed to allow its users to “flex,” as Chen put it. It comes with a built-in receiver. The fans and RGB LEDs use up to 12 volts. In terms of battery life, the CEO says that 3 fans with their LEDs on will last for about 20 minutes. Hydroshift 2 Liquid Cooler Grab a GN15 Large Anti-Static Modmat to celebrate our 15th Anniversary and for a high-quality PC building work surface. The Modmat features useful PC building diagrams and is anti-static conductive. Purchases directly fund our work!The Hydroshift 2 Liquid Cooler has a click actuation ring around the cooler, which can be used as a software-less switch for the display and all of that is pre-written to the device. This means that toggling it doesn’t require software, though you could use software. Compared to Lian Li’s previous Hydroshift 1, the radiator size has been reduced to offer more compatibility but Lian Li says it’s tried to improve flow within the cooler. The company also pushed the micro fins closer to the heat source.
    #lian #lancool #has #fans #glass
    Lian Li Lancool 4 Has Fans in Glass, 217 Infinity, DAN B4, and $45 Case, ft. CEO
    Lian Li Lancool 4 Has Fans in Glass, 217 Infinity, DAN B4, and Case, ft. CEOMay 29, 2025Last Updated: 2025-05-29During Computex 2025, Lian Li showed off several new cases that include the Lancool 4, Lancool 217 Infinity, Lian Li O11 Mini V2, and moreThe HighlightsLian Li's Lancool 4 case has gigantic holes cut into the glass for intake fans, coupling airflow with glassThe company’s Shifting Block PSU has a rotating plug that is geared for back-connect motherboardsThe company’s Vector 100 cases are very cheap, starting at without fansTable of ContentsAutoTOC Buy a GN 4-Pack of PC-themed 3D Coasters! These high-quality, durable, flexible coasters ship in a pack of 4, each with a fully custom design made by GN's team. You'll get a motherboard-themed coaster with debug display & reset buttons, a SATA SSD with to-scale connectors, RAM sticks, and a GN logo. These fund our web work! Buy here.IntroWe visited Lian Li during Computex, where the company showed off several of its upcoming products. We think the most interesting one is the Lancool 4, which has fans built into its glass front panel. It’s supposed to be a case that will come with 6 fans.Editor's note: This was originally published on May 19, 2025 as a video. This content has been adapted to written format for this article and is unchanged from the original publication.CreditsHostSteve BurkeCamera, Video EditingMike GaglioneVitalii MakhnovetsWriting, Web EditingJimmy ThangLancool 4The big thing about the Lancool 4 is that it embeds its fans into the front glass panel. This kind of takes us back to about 20 years ago, but instead of glass, the fans were integrated into acrylic and people would take a hole saw and would mount their own fans into it. One of the challenges with this design pertains to potentially reducing the yields with glass breaking being a concern. This wasn’t something that case companies did before, but Lian Li CEO Jameson Chen tells us the glass manufacturing process has improved dramatically lately. The CEO says that the failure rate used to be abysmal but has gotten down to about 5% to accommodate for the curve of the glass. Drilling holes into the glass brings the failure rate down at least another 5%. To mitigate failure rates, Lian Li found that there needs to be at least a 3cm gap between the holes. Chen revealed that the glass is 4mm thick, which is to bolster its quality.  In between the fans are plastic pieces which are used to hide the cables. The fans also use Pogo pins, which are integrated into the bottom of the front panel. When we asked Chen what happens if one of the fans dies, the CEO stated that Lian Li would provide a 5-year warranty. He elaborated that the fans are a new design and that they are 10% fiberglass PBT. Chen also revealed that the fans use fluid dynamic bearings. Considering Lian Li is still prototyping the case, the company is still thinking about whether to put RGB LEDs on the fan blades or to put the RGB LEDs around the fan’s frames. The Lancool 4 has an aluminum top and the rest of the chassis is made of steel, which is 1mm thick.The back glass panel releases via a button. Chen says this was done so that people could open up the glass panel without opening up the bottom side panel. Looking at the design of the rest of the case, it has a lot of similar panels as seen in the Lancool series. It’s got 4 doors and the 2 on the bottom sides are ventilated mesh and there are fan-mount options on the side. 
While we were there, Chen told us that Lian Li is considering shortening the case from the front to the back a little bit. This would bring the fans in closer to the components. This will benefit an air cooler and GPU. In our experience, performance in shorter cases, in a like-for-like scenario, is better. Chen also thinks the aesthetics of the case would improve as well with a tighter design. The downside is that the case would no longer support 420mm radiators and would support 360mm radiators max. The back panel of the Lancool 4 uses glass, which would normally expose the cable management but the case will come with a cable cover. There would be 2 screws to remove it. A downside here is that there’s less cable-management space to work with.The Lancool 4’s PSU mount is towards the back and bottom of the case. The bottom front has a cut out, which provides some space to route cables. Shifting Block PSU Visit our Patreon page to contribute a few dollars toward this website's operationAdditionally, when you purchase through links to retailers on our site, we may earn a small affiliate commission.The company also showed off a new interesting power supply, which has a rotating plug. This creates a shifting layout for the cable connections and allows users to re-orient the PSU. Chen tells us it's designed for top and bottom chamber cases and it’s also geared for back-connect motherboards.  Looking at the PSU, it has its 24-pin connectors off on one side. It also has an optional fan and USB 2.0 hub.Lian Li O11 Mini V2Moving on to the Lian Li O11 Mini V2, it has mesh on one of the side panels that’s popped-out about 3mm, which is to accommodate for ATX PSUs that protrude past the frame of the case. The company designed it this way because it had a very specific width it wanted to tackle to avoid the case looking too chunky. Currently, the volume of the case is 45 liters, which includes the feet, but does not include the protruding mesh side panel. The case we saw used bottom intake fans, which are slanted at 25 degrees and the only place for air access is underneath the back panel side. This is coupled with a tiny dust filter on the bottom, which slides out through the back. In terms of other fan mounts, the case has 2 on the side, 1 on the back, and 3 fans can fit in the top. The Lian Li O11 Mini V2 is targeting without fans and with five 120mm fans. Dan Case B4Moving on to Lian Li’s Dan Case B4, we’ve reviewed Dan cases before. The unit we saw at Computex isn’t done yet. We’re told it’s about 60% completed. The case can rotate and has feet and an extension that allows the case to support up to a 360mm radiator. The downside is that about 30% of one of the radiator’s fans would be obstructed by a metal wall. It’s possible that they may perforate this wall to help with cooling. Lian Li is planning to put some mesh or covering on the front panel of the case. The unit we saw was fully exposed and open. What’s interesting about this layout is that the GPU fans are right up against the case’s front intake fans, which is going to be about as cool as you can get for the video card. Most GPUs these days have vertically-oriented fins where the air is going to come out the sides. In this case, air should come out through the punctured side panel but may re-circulate into the back radiator, especially if its fans are intake. If the fans are oriented to be exhaust, that might work better in this case. Lian Li is planning to provide 2x120mm fans along with the case. 
The case can also be rotated to look like the image above. 217 Infinity CaseLian Li also showed off its 217 Infinity case, which is the 217 case with an updated front and leans on some of the changes that the Lancool 4 has made to get its fans into its front glass panel. The tooling is mostly the same. The things in the back of the case are all basically identical. The changes pertain to the front panel, which have some giant holes in them to accommodate 170mm fans that are 30mm deep. The glass panel has the infinity mirror styling. The only other major change pertains to the IO. Some people complained that the original 217 had its IO on the bottom side, so now the company has moved it to the top with an option to have it on the bottom side. The case comes with 2x170mm front fans and a rear fan. The black version of the case is targeting with a white version targeting  Lian Li Vector SeriesAnother Lian Li case we looked at had some “functional gimmicks.” On the back side, it has a cut-out area that looks like a handle, but definitely isn’t. Instead, there’s a very fine mesh filter that’s an area that’s meant to help with intake. This should also help with GPU cooling. The case is targeted at the system integrator market, but will still be sold at retail. Lian Li is targeting for it without any fans, but includes an 8.8-inch IPS screen that carries a 1720x4080 resolution. Pricing may change in the US based on tariffs. Vector 100 and Vector 100 MiniThe main reason we’re bringing these 2 cases up is price. The Vector 100 is targetingand the Vector 100 Mini, which is geared for MicroATX, is targeting. Lian Li Wireless FansLian Li also showed off its new wireless fans, which comes with a battery pack. There’s currently no price on it, but it’s designed to allow its users to “flex,” as Chen put it. It comes with a built-in receiver. The fans and RGB LEDs use up to 12 volts. In terms of battery life, the CEO says that 3 fans with their LEDs on will last for about 20 minutes. Hydroshift 2 Liquid Cooler Grab a GN15 Large Anti-Static Modmat to celebrate our 15th Anniversary and for a high-quality PC building work surface. The Modmat features useful PC building diagrams and is anti-static conductive. Purchases directly fund our work!The Hydroshift 2 Liquid Cooler has a click actuation ring around the cooler, which can be used as a software-less switch for the display and all of that is pre-written to the device. This means that toggling it doesn’t require software, though you could use software. Compared to Lian Li’s previous Hydroshift 1, the radiator size has been reduced to offer more compatibility but Lian Li says it’s tried to improve flow within the cooler. The company also pushed the micro fins closer to the heat source. #lian #lancool #has #fans #glass
    GAMERSNEXUS.NET
    Lian Li Lancool 4 Has Fans in Glass, 217 Infinity, DAN B4, and $45 Case, ft. CEO
    Lian Li Lancool 4 Has Fans in Glass, 217 Infinity, DAN B4, and $45 Case, ft. CEO
    May 29, 2025 | Last Updated: 2025-05-29
    During Computex 2025, Lian Li showed off several new cases, including the Lancool 4, Lancool 217 Infinity, Lian Li O11 Mini V2, and more.

    The Highlights
    • Lian Li’s Lancool 4 case has gigantic holes cut into the glass for intake fans, coupling airflow with glass.
    • The company’s Shifting Block PSU has a rotating plug geared for back-connect motherboards.
    • The company’s Vector 100 cases are very cheap, starting at $45 without fans.

    Intro
    We visited Lian Li during Computex, where the company showed off several of its upcoming products. We think the most interesting one is the Lancool 4, which has fans built into its glass front panel. It’s supposed to be a $130 case that will come with 6 fans.

    Editor’s note: This was originally published on May 19, 2025 as a video. The content has been adapted to written format for this article and is unchanged from the original publication.

    Credits
    Host: Steve Burke
    Camera, Video Editing: Mike Gaglione, Vitalii Makhnovets
    Writing, Web Editing: Jimmy Thang

    Lancool 4
    The big thing about the Lancool 4 is that it embeds its fans into the front glass panel. This takes us back about 20 years, except back then the fans were integrated into acrylic rather than glass, and people would take a hole saw and mount their own fans into it. One of the challenges with this design is yield, since glass breakage is a concern. Case companies hadn’t done this before, but Lian Li CEO Jameson Chen tells us the glass manufacturing process has improved dramatically lately. The CEO says the failure rate used to be abysmal but has gotten down to about 5% for the curve of the glass, and drilling the holes adds at least another 5% to the failure rate. To mitigate failures, Lian Li found that there needs to be at least a 3cm gap between the holes. Chen said the glass is 4mm thick, which is meant to bolster its durability.

    In between the fans are plastic pieces used to hide the cables. The fans also use Pogo pins, which are integrated into the bottom of the front panel. When we asked Chen what happens if one of the fans dies, he said Lian Li would provide a 5-year warranty. He elaborated that the fans are a new design made of 10% fiberglass PBT and that they use fluid dynamic bearings (FDB). Since the case is still a prototype, the company is still deciding whether to put RGB LEDs on the fan blades or around the fan frames. The Lancool 4 has an aluminum top, and the rest of the chassis is 1mm-thick steel.

    The back glass panel releases via a button. Chen says this was done so that people could open the glass panel without opening the bottom side panel. The rest of the case has a lot of panels similar to those seen elsewhere in the Lancool series: it’s got 4 doors, the 2 on the bottom sides are ventilated mesh, and there are fan-mount options on the side.
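    Taking the CEO’s failure-rate figures above at face value, and assuming (our assumption, not Lian Li’s) that curving losses and drilling losses are independent, a back-of-envelope yield estimate looks like this:

```python
# Rough yield estimate for the curved, drilled front glass panel.
# Assumption (ours, not Lian Li's): the ~5% loss from curving and the
# ~5% loss from drilling are independent, so per-step yields multiply.
curve_failure_rate = 0.05   # ~5% of panels lost to the curving process
drill_failure_rate = 0.05   # at least another ~5% lost when the holes are drilled

panel_yield = (1 - curve_failure_rate) * (1 - drill_failure_rate)
print(f"Estimated panel yield: {panel_yield:.1%}")            # ~90.2%
print(f"Estimated total scrap rate: {1 - panel_yield:.1%}")   # ~9.8%
```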
    While we were there, Chen told us that Lian Li is considering shortening the case a little from front to back. This would bring the fans in closer to the components, which would benefit an air cooler and the GPU; in our experience, like-for-like performance in shorter cases is better. Chen also thinks the aesthetics of the case would improve with a tighter design. The downside is that the case would no longer support 420mm radiators and would top out at 360mm radiators.

    The back panel of the Lancool 4 uses glass, which would normally expose the cable management, but the case will come with a cable cover held on by 2 screws. A downside here is that there’s less cable-management space to work with. The Lancool 4’s PSU mount is towards the back and bottom of the case. The bottom front has a cut-out, which provides some space to route cables.

    Shifting Block PSU
    The company also showed off an interesting new power supply, which has a rotating plug. This creates a shifting layout for the cable connections and allows users to re-orient the PSU. Chen tells us it’s designed for top- and bottom-chamber cases and is also geared for back-connect motherboards. Looking at the PSU, it has its 24-pin connectors off on one side. It also has an optional fan and USB 2.0 hub.

    Lian Li O11 Mini V2
    Moving on to the Lian Li O11 Mini V2, it has mesh on one of the side panels that’s popped out about 3mm to accommodate ATX PSUs that protrude past the frame of the case. The company designed it this way because it had a very specific width it wanted to hit to avoid the case looking too chunky. Currently, the volume of the case is 45 liters, which includes the feet but not the protruding mesh side panel. The case we saw used bottom intake fans slanted at 25 degrees, and the only place for air access is underneath the back panel side. This is coupled with a tiny dust filter on the bottom, which slides out through the back. In terms of other fan mounts, the case has 2 on the side, 1 on the back, and room for 3 fans in the top. The Lian Li O11 Mini V2 is targeting $89 without fans and $99 with five 120mm fans (2 on the side and 3 on the bottom).

    Dan Case B4
    Moving on to Lian Li’s Dan Case B4, we’ve reviewed Dan cases before. The unit we saw at Computex isn’t done yet; we’re told it’s about 60% completed. The case can rotate and has feet, plus an extension that allows it to support up to a 360mm radiator. The downside is that about 30% of one of the radiator’s fans would be obstructed by a metal wall. It’s possible Lian Li may perforate this wall to help with cooling. Lian Li is planning to put some mesh or another covering on the front panel of the case; the unit we saw was fully exposed and open. What’s interesting about this layout is that the GPU fans sit right up against the case’s front intake fans, which is about as cool as you can get for the video card. Most GPUs these days have vertically-oriented fins where the air comes out the sides. In this case, air should come out through the perforated side panel, but it may re-circulate into the back radiator, especially if its fans are set to intake.
    If the radiator fans are oriented as exhaust, that might work better in this case. Lian Li is planning to provide 2x120mm fans with the case. The case can also be rotated to look like the image above.

    217 Infinity Case
    Lian Li also showed off its 217 Infinity case, which is the 217 case with an updated front that leans on some of the changes the Lancool 4 made to get fans into its front glass panel. The tooling is mostly the same, and the back of the case is basically identical. The changes pertain to the front panel, which has some giant holes in it to accommodate 170mm fans that are 30mm deep. The glass panel has the infinity-mirror styling. The only other major change pertains to the IO: some people complained that the original 217 had its IO on the bottom side, so the company has moved it to the top with an option to keep it on the bottom. The case comes with 2x170mm front fans and a rear fan. The black version of the case is targeting $120, with a white version targeting $125.

    Lian Li Vector Series
    Another Lian Li case we looked at had some “functional gimmicks.” On the back side, it has a cut-out area that looks like a handle but definitely isn’t; instead, there’s a very fine mesh filter there that’s meant to help with intake. This should also help with GPU cooling. The case is targeted at the system-integrator market but will still be sold at retail. Lian Li is targeting $110 for it without any fans, but it includes an 8.8-inch IPS screen with a 1720x4080 resolution. Pricing may change in the US based on tariffs.

    Vector 100 and Vector 100 Mini
    The main reason we’re bringing these 2 cases up is price. The Vector 100 is targeting $60 (without fans), and the Vector 100 Mini, which is geared for MicroATX, is targeting $45 (without fans).

    Lian Li Wireless Fans
    Lian Li also showed off its new wireless fans, which come with a battery pack. There’s currently no price, but the fans are designed to let users “flex,” as Chen put it. They come with a built-in receiver, and the fans and RGB LEDs run at up to 12 volts. In terms of battery life, the CEO says that 3 fans with their LEDs on will last for about 20 minutes.

    Hydroshift 2 Liquid Cooler
    The Hydroshift 2 Liquid Cooler has a click actuation ring around the cooler, which can be used as a software-less switch for the display; everything it toggles is pre-written to the device. This means toggling it doesn’t require software, though you can still use software. Compared to Lian Li’s previous Hydroshift, the radiator size has been reduced to offer more compatibility, but Lian Li says it has tried to improve flow within the cooler, and the company also pushed the micro fins closer to the heat source.
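    Circling back to the wireless fans, here is a rough sketch of what the quoted 20-minute, 3-fan runtime implies for the battery pack, assuming (our number, not Lian Li’s) roughly 3 W per fan with LEDs at full tilt:

```python
# Back-of-envelope battery estimate from the quoted runtime.
# Assumption (ours): each fan plus its LEDs draws about 3 W at full speed
# and brightness; Lian Li only quotes the 12 V rail and the ~20-minute runtime.
fans = 3
watts_per_fan = 3.0          # assumed draw per fan + LEDs
runtime_hours = 20 / 60      # ~20 minutes quoted by the CEO

pack_energy_wh = fans * watts_per_fan * runtime_hours   # ~3 Wh
capacity_mah_at_12v = pack_energy_wh / 12 * 1000        # ~250 mAh at 12 V
print(f"Implied pack energy: {pack_energy_wh:.1f} Wh (~{capacity_mah_at_12v:.0f} mAh at 12 V)")
```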
  • World record: 1 million GB per sec internet speed achieved by Japan over 1,100 miles

    Imagine downloading 10,000 4K movies in just a second. A team of Japanese researchers has achieved such a mind-blowing internet speed using a specially designed optical fiber that’s no thicker than what we use today.
    The researchers set a new world record, transmitting 1.02 petabits (just over a million gigabits) of data per second over a distance of 1,808 kilometers (roughly 1,120 miles) using their special coupled 19-core optical fiber. However, this achievement isn’t just about faster internet.
    In their new study, the researchers claim that their newly developed optical-fiber technology can help us prepare our networks for a future where data traffic will skyrocket, thanks to AI, 6G, the Internet of Things, and beyond.
    The science of insane internet speed
    For years, scientists have tried to increase the amount of data that can travel through optical fibers. While they’ve managed to send petabits per second before, they could only do it over short distances (less than 1,000 km, or about 621 miles).
    Long-distance transmission has always been challenging. That’s because the signal weakens as it travels, and amplifying it across many fiber cores without creating interference is a major technical challenge. The study authors tackled the problem by designing a special type of optical fiber—a 19-core fiber. 
    Think of it like replacing a single-lane road with a 19-lane superhighway, all bundled into a fiber just 0.125 mm thick, the same size as those used in existing infrastructure. Each core carried data independently, and together they allowed a huge amount of information to move simultaneously.
    The researchers also developed a smart amplification system. Optical signals lose strength as they move along the fiber, so amplifiers are used to boost them. However, there’s one catch: each core had to be amplified at the same time, and across two different bands of light (the C-band and L-band).
    The team built a system that used a combination of special amplifiers to do this in all 19 cores without mixing up the signals. They set up 19 recirculating loops, each using one core of the fiber, and passed the signals through them 21 times to simulate a total distance of 1,808 kilometers. 
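    For a sense of scale, the loop setup implies a physical span length of roughly 86 km per pass (a simple division of the figures above):

```python
# Each signal circulates 21 times around its loop to emulate the full link,
# so the physical loop length is the total simulated distance divided by the passes.
total_distance_km = 1808
passes = 21
print(f"Loop length ≈ {total_distance_km / passes:.1f} km per pass")  # ~86.1 km
```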
    At the end of the journey, a 19-channel receiver caught the signals, and a multi-input multi-output (MIMO)-based digital processor cleaned them up, removing interference and calculating the data rate.
    The result was astonishing. A total capacity of 1.02 petabits per second over 1,808 km was achieved, setting a new world record for optical fiber communication using standard-sized fibers. Even more impressive, the capacity-distance product, a key measure of fiber performance, reached 1.86 exabits per second-km, the highest ever recorded.
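    To put those headline figures into everyday units, here is a quick sketch; the ~12.75 GB per-movie size is an assumption chosen only to match the “10,000 movies per second” framing, and the capacity-distance product is recomputed from the rounded 1.02 Pb/s figure:

```python
# Convert 1.02 petabits per second into bytes and "4K movies", and recompute
# the capacity-distance product quoted by the study.
capacity_pbps = 1.02        # petabits per second (rounded headline figure)
distance_km = 1808

bits_per_second = capacity_pbps * 1e15
gigabytes_per_second = bits_per_second / 8 / 1e9          # ~127,500 GB/s
movie_size_gb = 12.75                                      # assumed size of one 4K movie
movies_per_second = gigabytes_per_second / movie_size_gb   # ~10,000

capacity_distance_ebps_km = capacity_pbps * distance_km / 1000
print(f"{gigabytes_per_second:,.0f} GB/s ≈ {movies_per_second:,.0f} movies per second")
print(f"Capacity-distance product ≈ {capacity_distance_ebps_km:.2f} Eb/s·km "
      "(the study reports 1.86, from the unrounded capacity)")
```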
    A powerful and practical fiber technology
    A table comparing the performance of different fiber-optic cables. Source: NICT
    This isn’t the first time a 19-core optical fiber has been put to the test. “The transmission over an earlier generation of 19-core coupled-core fiber was limited to 1.7 petabits per second over a relatively short distance of 63.5 km,” the study authors added.
    However, this is indeed the first time that this revolutionary technology has broken the distance limits by carrying data over 1,800 km. This success could completely reshape how we build the internet of tomorrow. 
    As the world moves into the post-5G era, with self-driving cars, AI assistants, real-time VR, and billions of connected devices, we’ll need massive data highways to keep everything running. 
    “In the post-5G society, the volume of data traffic is expected to increase explosively due to new communication services, and the realization of advanced information and communication infrastructure is required,” the study authors added.
    This research shows that it’s possible to build ultra-high-speed, long-distance fiber networks without changing the size of existing infrastructure, which makes real-world deployment much easier. However, when this new optical fiber technology will actually roll out remains to be seen.
    The study was presented at the 48th Optical Fiber Communication Conference (OFC 2025).
  • The Clock Is Ticking on Elon Musk's Hail Mary to Save Tesla

    It's December of 2015, and the Green Bay Packers are up against the wall. They've lost their last three games, and their early-season momentum is feared dead in the water. The Detroit Lions, a longtime rival, only need to stop one last play on the 39-yard line to keep their two-point lead and take home the win.

    The snap comes, and Packers quarterback Aaron Rodgers scrambles down the field while his faithful receivers scutter for the endzone. From 61 yards, the quarterback makes his final throw, a pass that meets a leaping Richard Rodgers to give Green Bay the touchdown, winning the game and ultimately saving the season.

    It's safe to say Tesla is in a similar spot: the losses are mounting, the future looks dim, and the team is down to their last pass. Sadly, Elon Musk is no Rodgers.

    Ten years after the "Miracle in Motown," the electric vehicle company's stock has plummeted by 25 percent in just six months, thanks to horrid global sales, a portfolio many investors see as crusty and dated, and perhaps above all, the alienating behavior of its own chief executive. Mere months into Musk's disastrous stint as federal spending czar, the prediction that "Tesla will soon collapse" is no longer a fringe opinion held by forum dwellers, but a serious charge levied by political commentators, stock gurus, and former Tesla executives alike.

    Fortunately for any foolhardy shareholders keeping the faith, Elon Musk has promised to roll out Tesla's autonomous robotaxi service in Austin, a product some analysts predicted could soon make up 90 percent of Tesla's profits. Unfortunately for those investors, Musk has given Tesla a self-imposed deadline of June 12th to make it all happen — meaning we're two weeks away from seeing whether or not the rubber hits the road. So where is the company at on its self-driving cabs?

    Well, the self-driving vehicles about to land on Austin streets are blowing past school buses and into child crash dummies, if that's any indication. According to a FuelArc analysis of a school bus test, Tesla's latest iteration of "full self-driving" software failed to detect flashing red school bus stop signs (and in turn failed to stop at the parked bus), detected child-sized pedestrians but failed to react, and made no attempt to brake or evade the adolescent crash dummies as the car drew closer.

    FuelArc notes that school bus recognition only hit self-driving Teslas in December of 2024. Keep in mind, these vehicles have been on public roads, albeit with drivers behind the wheel, since October of 2015 — just months before Rodgers' now-infamous Hail Mary.

    It's obvious that the robotaxi is nowhere near ready, which is probably why Tesla is scrambling to hire remote operators to drive its vehicles ahead of the looming June deadline. This ought to be the "Miracle in Motown" moment for Tesla – but the quarterback doesn't even have the ball, and the receivers are nowhere to be found.

    More on Tesla: Self-Driving Tesla Suddenly Swerves Off the Road and Crashes