• SEGA Gives A Behind-The-Scenes Look At Its New London Office

    Ping pong! Pool table! Boardroom!Sega has posted a video showcasing its brand-new London office in Chiswick Business Park, and cor blimey, it looks really nice.The roughly one-minute long video takes us through the reception areabefore swooping through what looks like a recreation/dining area, a few meeting rooms, the main boardroom, and a digital gallery area.Read the full article on nintendolife.com
    #sega #gives #behindthescenes #look #its
    SEGA Gives A Behind-The-Scenes Look At Its New London Office
    Ping pong! Pool table! Boardroom!Sega has posted a video showcasing its brand-new London office in Chiswick Business Park, and cor blimey, it looks really nice.The roughly one-minute long video takes us through the reception areabefore swooping through what looks like a recreation/dining area, a few meeting rooms, the main boardroom, and a digital gallery area.Read the full article on nintendolife.com #sega #gives #behindthescenes #look #its
    SEGA Gives A Behind-The-Scenes Look At Its New London Office
    www.nintendolife.com
    Ping pong! Pool table! Boardroom!Sega has posted a video showcasing its brand-new London office in Chiswick Business Park, and cor blimey, it looks really nice.The roughly one-minute long video takes us through the reception area (with a lovely-looking main desk, we might add) before swooping through what looks like a recreation/dining area, a few meeting rooms, the main boardroom, and a digital gallery area.Read the full article on nintendolife.com
    0 Комментарии ·0 Поделились ·0 предпросмотр
  • From LLMs to hallucinations, here’s a simple guide to common AI terms

    Artificial intelligence is a deep and convoluted world. The scientists who work in this field often rely on jargon and lingo to explain what they’re working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That’s why we thought it would be helpful to put together a glossary with definitions of some of the most important words and phrases that we use in our articles.
    We will regularly update this glossary to add new entries as researchers continually uncover novel methods to push the frontier of artificial intelligence while identifying emerging safety risks.

    AGI
    Artificial general intelligence, or AGI, is a nebulous term. But it generally refers to AI that’s more capable than the average human at many, if not most, tasks. OpenAI CEO Sam Altman recently described AGI as the “equivalent of a median human that you could hire as a co-worker.” Meanwhile, OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Google DeepMind’s understanding differs slightly from these two definitions; the lab views AGI as “AI that’s at least as capable as humans at most cognitive tasks.” Confused? Not to worry — so are experts at the forefront of AI research.
    AI agent
    An AI agent refers to a tool that uses AI technologies to perform a series of tasks on your behalf — beyond what a more basic AI chatbot could do — such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, as we’ve explained before, there are lots of moving pieces in this emergent space, so “AI agent” might mean different things to different people. Infrastructure is also still being built out to deliver on its envisaged capabilities. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multistep tasks.
    Chain of thought
    Given a simple question, a human brain can answer without even thinking too much about it — things like “which animal is taller, a giraffe or a cat?” But in many cases, you often need a pen and paper to come up with the right answer because there are intermediary steps. For instance, if a farmer has chickens and cows, and together they have 40 heads and 120 legs, you might need to write down a simple equation to come up with the answer.
    In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. Reasoning models are developed from traditional large language models and optimized for chain-of-thought thinking thanks to reinforcement learning.Techcrunch event

    Join us at TechCrunch Sessions: AI
    Secure your spot for our leading AI industry event with speakers from OpenAI, Anthropic, and Cohere. For a limited time, tickets are just for an entire day of expert talks, workshops, and potent networking.

    Exhibit at TechCrunch Sessions: AI
    Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you’ve built — without the big spend. Available through May 9 or while tables last.

    Berkeley, CA
    |
    June 5

    REGISTER NOW

    Deep learning
    A subset of self-improving machine learning in which AI algorithms are designed with a multi-layered, artificial neural networkstructure. This allows them to make more complex correlations compared to simpler machine learning-based systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain.
    Deep learning AI models are able to identify important characteristics in data themselves, rather than requiring human engineers to define these features. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to yield good results. They also typically take longer to train compared to simpler machine learning algorithms — so development costs tend to be higher.Diffusion
    Diffusion is the tech at the heart of many art-, music-, and text-generating AI models. Inspired by physics, diffusion systems slowly “destroy” the structure of data — e.g. photos, songs, and so on — by adding noise until there’s nothing left. In physics, diffusion is spontaneous and irreversible — sugar diffused in coffee can’t be restored to cube form. But diffusion systems in AI aim to learn a sort of “reverse diffusion” process to restore the destroyed data, gaining the ability to recover the data from noise.
    Distillation
    Distillation is a technique used to extract knowledge from a large AI model with a ‘teacher-student’ model. Developers send requests to a teacher model and record the outputs. Answers are sometimes compared with a dataset to see how accurate they are. These outputs are then used to train the student model, which is trained to approximate the teacher’s behavior.
    Distillation can be used to create a smaller, more efficient model based on a larger model with a minimal distillation loss. This is likely how OpenAI developed GPT-4 Turbo, a faster version of GPT-4.
    While all AI companies use distillation internally, it may have also been used by some AI companies to catch up with frontier models. Distillation from a competitor usually violates the terms of service of AI API and chat assistants.
    Fine-tuning
    This refers to the further training of an AI model to optimize performance for a more specific task or area than was previously a focal point of its training — typically by feeding in new, specializeddata. 
    Many AI startups are taking large language models as a starting point to build a commercial product but are vying to amp up utility for a target sector or task by supplementing earlier training cycles with fine-tuning based on their own domain-specific knowledge and expertise.GAN
    A GAN, or Generative Adversarial Network, is a type of machine learning framework that underpins some important developments in generative AI when it comes to producing realistic data – includingdeepfake tools. GANs involve the use of a pair of neural networks, one of which draws on its training data to generate an output that is passed to the other model to evaluate. This second, discriminator model thus plays the role of a classifier on the generator’s output – enabling it to improve over time. 
    The GAN structure is set up as a competition– with the two models essentially programmed to try to outdo each other: the generator is trying to get its output past the discriminator, while the discriminator is working to spot artificially generated data. This structured contest can optimize AI outputs to be more realistic without the need for additional human intervention. Though GANs work best for narrower applications, rather than general purpose AI.
    Hallucination
    Hallucination is the AI industry’s preferred term for AI models making stuff up – literally generating information that is incorrect. Obviously, it’s a huge problem for AI quality. 
    Hallucinations produce GenAI outputs that can be misleading and could even lead to real-life risks — with potentially dangerous consequences. This is why most GenAI tools’ small print now warns users to verify AI-generated answers, even though such disclaimers are usually far less prominent than the information the tools dispense at the touch of a button.
    The problem of AIs fabricating information is thought to arise as a consequence of gaps in training data. For general purpose GenAI especially — also sometimes known as foundation models — this looks difficult to resolve. There is simply not enough data in existence to train AI models to comprehensively resolve all the questions we could possibly ask. TL;DR: we haven’t invented God. 
    Hallucinations are contributing to a push towards increasingly specialized and/or vertical AI models — i.e. domain-specific AIs that require narrower expertise – as a way to reduce the likelihood of knowledge gaps and shrink disinformation risks.
    Inference
    Inference is the process of running an AI model. It’s setting a model loose to make predictions or draw conclusions from previously-seen data. To be clear, inference can’t happen without training; a model must learn patterns in a set of data before it can effectively extrapolate from this training data.
    Many types of hardware can perform inference, ranging from smartphone processors to beefy GPUs to custom-designed AI accelerators. But not all of them can run models equally well. Very large models would take ages to make predictions on, say, a laptop versus a cloud server with high-end AI chips.Large language modelLarge language models, or LLMs, are the AI models used by popular AI assistants, such as ChatGPT, Claude, Google’s Gemini, Meta’s AI Llama, Microsoft Copilot, or Mistral’s Le Chat. When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different available tools, such as web browsing or code interpreters.
    AI assistants and LLMs can have different names. For instance, GPT is OpenAI’s large language model and ChatGPT is the AI assistant product.
    LLMs are deep neural networks made of billions of numerical parametersthat learn the relationships between words and phrases and create a representation of language, a sort of multidimensional map of words.
    These models are created from encoding the patterns they find in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt. It then evaluates the most probable next word after the last one based on what was said before. Repeat, repeat, and repeat.Neural network
    A neural network refers to the multi-layered algorithmic structure that underpins deep learning — and, more broadly, the whole boom in generative AI tools following the emergence of large language models. 
    Although the idea of taking inspiration from the densely interconnected pathways of the human brain as a design structure for data processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphical processing hardware— via the video game industry — that really unlocked the power of this theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier epochs — enabling neural network-based AI systems to achieve far better performance across many domains, including voice recognition, autonomous navigation, and drug discovery.Training
    Developing machine learning AIs involves a process known as training. In simple terms, this refers to data being fed in in order that the model can learn from patterns and generate useful outputs.
    Things can get a bit philosophical at this point in the AI stack — since, pre-training, the mathematical structure that’s used as the starting point for developing a learning system is just a bunch of layers and random numbers. It’s only through training that the AI model really takes shape. Essentially, it’s the process of the system responding to characteristics in the data that enables it to adapt outputs towards a sought-for goal — whether that’s identifying images of cats or producing a haiku on demand.
    It’s important to note that not all AI requires training. Rules-based AIs that are programmed to follow manually predefined instructions — for example, such as linear chatbots — don’t need to undergo training. However, such AI systems are likely to be more constrained thanself-learning systems.
    Still, training can be expensive because it requires lots of inputs — and, typically, the volumes of inputs required for such models have been trending upwards.
    Hybrid approaches can sometimes be used to shortcut model development and help manage costs. Such as doing data-driven fine-tuning of a rules-based AI — meaning development requires less data, compute, energy, and algorithmic complexity than if the developer had started building from scratch.Transfer learning
    A technique where a previously trained AI model is used as the starting point for developing a new model for a different but typically related task – allowing knowledge gained in previous training cycles to be reapplied. 
    Transfer learning can drive efficiency savings by shortcutting model development. It can also be useful when data for the task that the model is being developed for is somewhat limited. But it’s important to note that the approach has limitations. Models that rely on transfer learning to gain generalized capabilities will likely require training on additional data in order to perform well in their domain of focusWeights
    Weights are core to AI training, as they determine how much importanceis given to different featuresin the data used for training the system — thereby shaping the AI model’s output. 
    Put another way, weights are numerical parameters that define what’s most salient in a dataset for the given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with weights that are randomly assigned, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target.
    For example, an AI model for predicting housing prices that’s trained on historical real estate data for a target location could include weights for features such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, whether it has parking, a garage, and so on. 
    Ultimately, the weights the model attaches to each of these inputs reflect how much they influence the value of a property, based on the given dataset.

    Topics
    #llms #hallucinations #heres #simple #guide
    From LLMs to hallucinations, here’s a simple guide to common AI terms
    Artificial intelligence is a deep and convoluted world. The scientists who work in this field often rely on jargon and lingo to explain what they’re working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That’s why we thought it would be helpful to put together a glossary with definitions of some of the most important words and phrases that we use in our articles. We will regularly update this glossary to add new entries as researchers continually uncover novel methods to push the frontier of artificial intelligence while identifying emerging safety risks. AGI Artificial general intelligence, or AGI, is a nebulous term. But it generally refers to AI that’s more capable than the average human at many, if not most, tasks. OpenAI CEO Sam Altman recently described AGI as the “equivalent of a median human that you could hire as a co-worker.” Meanwhile, OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Google DeepMind’s understanding differs slightly from these two definitions; the lab views AGI as “AI that’s at least as capable as humans at most cognitive tasks.” Confused? Not to worry — so are experts at the forefront of AI research. AI agent An AI agent refers to a tool that uses AI technologies to perform a series of tasks on your behalf — beyond what a more basic AI chatbot could do — such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, as we’ve explained before, there are lots of moving pieces in this emergent space, so “AI agent” might mean different things to different people. Infrastructure is also still being built out to deliver on its envisaged capabilities. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multistep tasks. Chain of thought Given a simple question, a human brain can answer without even thinking too much about it — things like “which animal is taller, a giraffe or a cat?” But in many cases, you often need a pen and paper to come up with the right answer because there are intermediary steps. For instance, if a farmer has chickens and cows, and together they have 40 heads and 120 legs, you might need to write down a simple equation to come up with the answer. In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. Reasoning models are developed from traditional large language models and optimized for chain-of-thought thinking thanks to reinforcement learning.Techcrunch event Join us at TechCrunch Sessions: AI Secure your spot for our leading AI industry event with speakers from OpenAI, Anthropic, and Cohere. For a limited time, tickets are just for an entire day of expert talks, workshops, and potent networking. Exhibit at TechCrunch Sessions: AI Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you’ve built — without the big spend. Available through May 9 or while tables last. Berkeley, CA | June 5 REGISTER NOW Deep learning A subset of self-improving machine learning in which AI algorithms are designed with a multi-layered, artificial neural networkstructure. This allows them to make more complex correlations compared to simpler machine learning-based systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain. Deep learning AI models are able to identify important characteristics in data themselves, rather than requiring human engineers to define these features. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to yield good results. They also typically take longer to train compared to simpler machine learning algorithms — so development costs tend to be higher.Diffusion Diffusion is the tech at the heart of many art-, music-, and text-generating AI models. Inspired by physics, diffusion systems slowly “destroy” the structure of data — e.g. photos, songs, and so on — by adding noise until there’s nothing left. In physics, diffusion is spontaneous and irreversible — sugar diffused in coffee can’t be restored to cube form. But diffusion systems in AI aim to learn a sort of “reverse diffusion” process to restore the destroyed data, gaining the ability to recover the data from noise. Distillation Distillation is a technique used to extract knowledge from a large AI model with a ‘teacher-student’ model. Developers send requests to a teacher model and record the outputs. Answers are sometimes compared with a dataset to see how accurate they are. These outputs are then used to train the student model, which is trained to approximate the teacher’s behavior. Distillation can be used to create a smaller, more efficient model based on a larger model with a minimal distillation loss. This is likely how OpenAI developed GPT-4 Turbo, a faster version of GPT-4. While all AI companies use distillation internally, it may have also been used by some AI companies to catch up with frontier models. Distillation from a competitor usually violates the terms of service of AI API and chat assistants. Fine-tuning This refers to the further training of an AI model to optimize performance for a more specific task or area than was previously a focal point of its training — typically by feeding in new, specializeddata.  Many AI startups are taking large language models as a starting point to build a commercial product but are vying to amp up utility for a target sector or task by supplementing earlier training cycles with fine-tuning based on their own domain-specific knowledge and expertise.GAN A GAN, or Generative Adversarial Network, is a type of machine learning framework that underpins some important developments in generative AI when it comes to producing realistic data – includingdeepfake tools. GANs involve the use of a pair of neural networks, one of which draws on its training data to generate an output that is passed to the other model to evaluate. This second, discriminator model thus plays the role of a classifier on the generator’s output – enabling it to improve over time.  The GAN structure is set up as a competition– with the two models essentially programmed to try to outdo each other: the generator is trying to get its output past the discriminator, while the discriminator is working to spot artificially generated data. This structured contest can optimize AI outputs to be more realistic without the need for additional human intervention. Though GANs work best for narrower applications, rather than general purpose AI. Hallucination Hallucination is the AI industry’s preferred term for AI models making stuff up – literally generating information that is incorrect. Obviously, it’s a huge problem for AI quality.  Hallucinations produce GenAI outputs that can be misleading and could even lead to real-life risks — with potentially dangerous consequences. This is why most GenAI tools’ small print now warns users to verify AI-generated answers, even though such disclaimers are usually far less prominent than the information the tools dispense at the touch of a button. The problem of AIs fabricating information is thought to arise as a consequence of gaps in training data. For general purpose GenAI especially — also sometimes known as foundation models — this looks difficult to resolve. There is simply not enough data in existence to train AI models to comprehensively resolve all the questions we could possibly ask. TL;DR: we haven’t invented God.  Hallucinations are contributing to a push towards increasingly specialized and/or vertical AI models — i.e. domain-specific AIs that require narrower expertise – as a way to reduce the likelihood of knowledge gaps and shrink disinformation risks. Inference Inference is the process of running an AI model. It’s setting a model loose to make predictions or draw conclusions from previously-seen data. To be clear, inference can’t happen without training; a model must learn patterns in a set of data before it can effectively extrapolate from this training data. Many types of hardware can perform inference, ranging from smartphone processors to beefy GPUs to custom-designed AI accelerators. But not all of them can run models equally well. Very large models would take ages to make predictions on, say, a laptop versus a cloud server with high-end AI chips.Large language modelLarge language models, or LLMs, are the AI models used by popular AI assistants, such as ChatGPT, Claude, Google’s Gemini, Meta’s AI Llama, Microsoft Copilot, or Mistral’s Le Chat. When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different available tools, such as web browsing or code interpreters. AI assistants and LLMs can have different names. For instance, GPT is OpenAI’s large language model and ChatGPT is the AI assistant product. LLMs are deep neural networks made of billions of numerical parametersthat learn the relationships between words and phrases and create a representation of language, a sort of multidimensional map of words. These models are created from encoding the patterns they find in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt. It then evaluates the most probable next word after the last one based on what was said before. Repeat, repeat, and repeat.Neural network A neural network refers to the multi-layered algorithmic structure that underpins deep learning — and, more broadly, the whole boom in generative AI tools following the emergence of large language models.  Although the idea of taking inspiration from the densely interconnected pathways of the human brain as a design structure for data processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphical processing hardware— via the video game industry — that really unlocked the power of this theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier epochs — enabling neural network-based AI systems to achieve far better performance across many domains, including voice recognition, autonomous navigation, and drug discovery.Training Developing machine learning AIs involves a process known as training. In simple terms, this refers to data being fed in in order that the model can learn from patterns and generate useful outputs. Things can get a bit philosophical at this point in the AI stack — since, pre-training, the mathematical structure that’s used as the starting point for developing a learning system is just a bunch of layers and random numbers. It’s only through training that the AI model really takes shape. Essentially, it’s the process of the system responding to characteristics in the data that enables it to adapt outputs towards a sought-for goal — whether that’s identifying images of cats or producing a haiku on demand. It’s important to note that not all AI requires training. Rules-based AIs that are programmed to follow manually predefined instructions — for example, such as linear chatbots — don’t need to undergo training. However, such AI systems are likely to be more constrained thanself-learning systems. Still, training can be expensive because it requires lots of inputs — and, typically, the volumes of inputs required for such models have been trending upwards. Hybrid approaches can sometimes be used to shortcut model development and help manage costs. Such as doing data-driven fine-tuning of a rules-based AI — meaning development requires less data, compute, energy, and algorithmic complexity than if the developer had started building from scratch.Transfer learning A technique where a previously trained AI model is used as the starting point for developing a new model for a different but typically related task – allowing knowledge gained in previous training cycles to be reapplied.  Transfer learning can drive efficiency savings by shortcutting model development. It can also be useful when data for the task that the model is being developed for is somewhat limited. But it’s important to note that the approach has limitations. Models that rely on transfer learning to gain generalized capabilities will likely require training on additional data in order to perform well in their domain of focusWeights Weights are core to AI training, as they determine how much importanceis given to different featuresin the data used for training the system — thereby shaping the AI model’s output.  Put another way, weights are numerical parameters that define what’s most salient in a dataset for the given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with weights that are randomly assigned, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target. For example, an AI model for predicting housing prices that’s trained on historical real estate data for a target location could include weights for features such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, whether it has parking, a garage, and so on.  Ultimately, the weights the model attaches to each of these inputs reflect how much they influence the value of a property, based on the given dataset. Topics #llms #hallucinations #heres #simple #guide
    From LLMs to hallucinations, here’s a simple guide to common AI terms
    techcrunch.com
    Artificial intelligence is a deep and convoluted world. The scientists who work in this field often rely on jargon and lingo to explain what they’re working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That’s why we thought it would be helpful to put together a glossary with definitions of some of the most important words and phrases that we use in our articles. We will regularly update this glossary to add new entries as researchers continually uncover novel methods to push the frontier of artificial intelligence while identifying emerging safety risks. AGI Artificial general intelligence, or AGI, is a nebulous term. But it generally refers to AI that’s more capable than the average human at many, if not most, tasks. OpenAI CEO Sam Altman recently described AGI as the “equivalent of a median human that you could hire as a co-worker.” Meanwhile, OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Google DeepMind’s understanding differs slightly from these two definitions; the lab views AGI as “AI that’s at least as capable as humans at most cognitive tasks.” Confused? Not to worry — so are experts at the forefront of AI research. AI agent An AI agent refers to a tool that uses AI technologies to perform a series of tasks on your behalf — beyond what a more basic AI chatbot could do — such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, as we’ve explained before, there are lots of moving pieces in this emergent space, so “AI agent” might mean different things to different people. Infrastructure is also still being built out to deliver on its envisaged capabilities. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multistep tasks. Chain of thought Given a simple question, a human brain can answer without even thinking too much about it — things like “which animal is taller, a giraffe or a cat?” But in many cases, you often need a pen and paper to come up with the right answer because there are intermediary steps. For instance, if a farmer has chickens and cows, and together they have 40 heads and 120 legs, you might need to write down a simple equation to come up with the answer (20 chickens and 20 cows). In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. Reasoning models are developed from traditional large language models and optimized for chain-of-thought thinking thanks to reinforcement learning. (See: Large language model) Techcrunch event Join us at TechCrunch Sessions: AI Secure your spot for our leading AI industry event with speakers from OpenAI, Anthropic, and Cohere. For a limited time, tickets are just $292 for an entire day of expert talks, workshops, and potent networking. Exhibit at TechCrunch Sessions: AI Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you’ve built — without the big spend. Available through May 9 or while tables last. Berkeley, CA | June 5 REGISTER NOW Deep learning A subset of self-improving machine learning in which AI algorithms are designed with a multi-layered, artificial neural network (ANN) structure. This allows them to make more complex correlations compared to simpler machine learning-based systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain. Deep learning AI models are able to identify important characteristics in data themselves, rather than requiring human engineers to define these features. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to yield good results (millions or more). They also typically take longer to train compared to simpler machine learning algorithms — so development costs tend to be higher. (See: Neural network) Diffusion Diffusion is the tech at the heart of many art-, music-, and text-generating AI models. Inspired by physics, diffusion systems slowly “destroy” the structure of data — e.g. photos, songs, and so on — by adding noise until there’s nothing left. In physics, diffusion is spontaneous and irreversible — sugar diffused in coffee can’t be restored to cube form. But diffusion systems in AI aim to learn a sort of “reverse diffusion” process to restore the destroyed data, gaining the ability to recover the data from noise. Distillation Distillation is a technique used to extract knowledge from a large AI model with a ‘teacher-student’ model. Developers send requests to a teacher model and record the outputs. Answers are sometimes compared with a dataset to see how accurate they are. These outputs are then used to train the student model, which is trained to approximate the teacher’s behavior. Distillation can be used to create a smaller, more efficient model based on a larger model with a minimal distillation loss. This is likely how OpenAI developed GPT-4 Turbo, a faster version of GPT-4. While all AI companies use distillation internally, it may have also been used by some AI companies to catch up with frontier models. Distillation from a competitor usually violates the terms of service of AI API and chat assistants. Fine-tuning This refers to the further training of an AI model to optimize performance for a more specific task or area than was previously a focal point of its training — typically by feeding in new, specialized (i.e., task-oriented) data.  Many AI startups are taking large language models as a starting point to build a commercial product but are vying to amp up utility for a target sector or task by supplementing earlier training cycles with fine-tuning based on their own domain-specific knowledge and expertise. (See: Large language model [LLM]) GAN A GAN, or Generative Adversarial Network, is a type of machine learning framework that underpins some important developments in generative AI when it comes to producing realistic data – including (but not only) deepfake tools. GANs involve the use of a pair of neural networks, one of which draws on its training data to generate an output that is passed to the other model to evaluate. This second, discriminator model thus plays the role of a classifier on the generator’s output – enabling it to improve over time.  The GAN structure is set up as a competition (hence “adversarial”) – with the two models essentially programmed to try to outdo each other: the generator is trying to get its output past the discriminator, while the discriminator is working to spot artificially generated data. This structured contest can optimize AI outputs to be more realistic without the need for additional human intervention. Though GANs work best for narrower applications (such as producing realistic photos or videos), rather than general purpose AI. Hallucination Hallucination is the AI industry’s preferred term for AI models making stuff up – literally generating information that is incorrect. Obviously, it’s a huge problem for AI quality.  Hallucinations produce GenAI outputs that can be misleading and could even lead to real-life risks — with potentially dangerous consequences (think of a health query that returns harmful medical advice). This is why most GenAI tools’ small print now warns users to verify AI-generated answers, even though such disclaimers are usually far less prominent than the information the tools dispense at the touch of a button. The problem of AIs fabricating information is thought to arise as a consequence of gaps in training data. For general purpose GenAI especially — also sometimes known as foundation models — this looks difficult to resolve. There is simply not enough data in existence to train AI models to comprehensively resolve all the questions we could possibly ask. TL;DR: we haven’t invented God (yet).  Hallucinations are contributing to a push towards increasingly specialized and/or vertical AI models — i.e. domain-specific AIs that require narrower expertise – as a way to reduce the likelihood of knowledge gaps and shrink disinformation risks. Inference Inference is the process of running an AI model. It’s setting a model loose to make predictions or draw conclusions from previously-seen data. To be clear, inference can’t happen without training; a model must learn patterns in a set of data before it can effectively extrapolate from this training data. Many types of hardware can perform inference, ranging from smartphone processors to beefy GPUs to custom-designed AI accelerators. But not all of them can run models equally well. Very large models would take ages to make predictions on, say, a laptop versus a cloud server with high-end AI chips. [See: Training] Large language model (LLM) Large language models, or LLMs, are the AI models used by popular AI assistants, such as ChatGPT, Claude, Google’s Gemini, Meta’s AI Llama, Microsoft Copilot, or Mistral’s Le Chat. When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different available tools, such as web browsing or code interpreters. AI assistants and LLMs can have different names. For instance, GPT is OpenAI’s large language model and ChatGPT is the AI assistant product. LLMs are deep neural networks made of billions of numerical parameters (or weights, see below) that learn the relationships between words and phrases and create a representation of language, a sort of multidimensional map of words. These models are created from encoding the patterns they find in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt. It then evaluates the most probable next word after the last one based on what was said before. Repeat, repeat, and repeat. (See: Neural network) Neural network A neural network refers to the multi-layered algorithmic structure that underpins deep learning — and, more broadly, the whole boom in generative AI tools following the emergence of large language models.  Although the idea of taking inspiration from the densely interconnected pathways of the human brain as a design structure for data processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphical processing hardware (GPUs) — via the video game industry — that really unlocked the power of this theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier epochs — enabling neural network-based AI systems to achieve far better performance across many domains, including voice recognition, autonomous navigation, and drug discovery. (See: Large language model [LLM]) Training Developing machine learning AIs involves a process known as training. In simple terms, this refers to data being fed in in order that the model can learn from patterns and generate useful outputs. Things can get a bit philosophical at this point in the AI stack — since, pre-training, the mathematical structure that’s used as the starting point for developing a learning system is just a bunch of layers and random numbers. It’s only through training that the AI model really takes shape. Essentially, it’s the process of the system responding to characteristics in the data that enables it to adapt outputs towards a sought-for goal — whether that’s identifying images of cats or producing a haiku on demand. It’s important to note that not all AI requires training. Rules-based AIs that are programmed to follow manually predefined instructions — for example, such as linear chatbots — don’t need to undergo training. However, such AI systems are likely to be more constrained than (well-trained) self-learning systems. Still, training can be expensive because it requires lots of inputs — and, typically, the volumes of inputs required for such models have been trending upwards. Hybrid approaches can sometimes be used to shortcut model development and help manage costs. Such as doing data-driven fine-tuning of a rules-based AI — meaning development requires less data, compute, energy, and algorithmic complexity than if the developer had started building from scratch. [See: Inference] Transfer learning A technique where a previously trained AI model is used as the starting point for developing a new model for a different but typically related task – allowing knowledge gained in previous training cycles to be reapplied.  Transfer learning can drive efficiency savings by shortcutting model development. It can also be useful when data for the task that the model is being developed for is somewhat limited. But it’s important to note that the approach has limitations. Models that rely on transfer learning to gain generalized capabilities will likely require training on additional data in order to perform well in their domain of focus (See: Fine tuning) Weights Weights are core to AI training, as they determine how much importance (or weight) is given to different features (or input variables) in the data used for training the system — thereby shaping the AI model’s output.  Put another way, weights are numerical parameters that define what’s most salient in a dataset for the given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with weights that are randomly assigned, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target. For example, an AI model for predicting housing prices that’s trained on historical real estate data for a target location could include weights for features such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, whether it has parking, a garage, and so on.  Ultimately, the weights the model attaches to each of these inputs reflect how much they influence the value of a property, based on the given dataset. Topics
    0 Комментарии ·0 Поделились ·0 предпросмотр
  • Dividing Cell Grid – Houdini Version

    Dividing Cell Grid – Houdini Version

    In the first episodes of our Blueprints201 course over on our Patreon, we quickly showed a Houdini version of the Cellgrid project we built in this course.
    Today this Houdini Setup finally gets it’s own tutorial.

    Liked it? Take a second to support Christopher Kopic on Patreon!
    #dividing #cell #grid #houdini #version
    Dividing Cell Grid – Houdini Version
    Dividing Cell Grid – Houdini Version In the first episodes of our Blueprints201 course over on our Patreon, we quickly showed a Houdini version of the Cellgrid project we built in this course. Today this Houdini Setup finally gets it’s own tutorial. Liked it? Take a second to support Christopher Kopic on Patreon! #dividing #cell #grid #houdini #version
    Dividing Cell Grid – Houdini Version
    entagma.com
    Dividing Cell Grid – Houdini Version In the first episodes of our Blueprints201 course over on our Patreon, we quickly showed a Houdini version of the Cellgrid project we built in this course. Today this Houdini Setup finally gets it’s own tutorial. Liked it? Take a second to support Christopher Kopic on Patreon!
    0 Комментарии ·0 Поделились ·0 предпросмотр
  • Stylized VFX Packs on Unity Assetstore

    Hello! here are some Stylized VFX Asset Packs for Unity Assetstore:)
    if you like my style, have a look!
    Hun0FX - Asset StoreStylizedVFX Slash1
    StylizedVFX Blast vol1
    StylizedVFX AOE vol1
    StylizedVFX Hit vol 1
    StylizedVFX Tornado1

    1 post - 1 participant
    Read full topic
    #stylized #vfx #packs #unity #assetstore
    Stylized VFX Packs on Unity Assetstore
    Hello! here are some Stylized VFX Asset Packs for Unity Assetstore:) if you like my style, have a look! Hun0FX - Asset StoreStylizedVFX Slash1 StylizedVFX Blast vol1 StylizedVFX AOE vol1 StylizedVFX Hit vol 1 StylizedVFX Tornado1 1 post - 1 participant Read full topic #stylized #vfx #packs #unity #assetstore
    Stylized VFX Packs on Unity Assetstore
    realtimevfx.com
    Hello! here are some Stylized VFX Asset Packs for Unity Assetstore:) if you like my style, have a look! Hun0FX - Asset StoreStylizedVFX Slash1 StylizedVFX Blast vol1 StylizedVFX AOE vol1 StylizedVFX Hit vol 1 StylizedVFX Tornado1 1 post - 1 participant Read full topic
    0 Комментарии ·0 Поделились ·0 предпросмотр
  • OpenAI’s $6.5B new acquisition signals Apple’s biggest AI crisis yet

    Tech OpenAI’s B new acquisition signals Apple’s biggest AI crisis yet OpenAI, Jony Ive join forces to challenge Apple’s AI future
    Published
    May 26, 2025 8:00am EDT close OpenAI chief urges US to maintain 'lead' in AI developments: 'Critically important' OpenAI CEO Sam Altman sits down with Shannon Bream to discuss the positives and potential negatives of artificial intelligence and the importance of maintaining a lead in the AI industry over China. OpenAI has just made a move that's turning heads across the tech world. The company is acquiring io, the AI device startup founded by Jony Ive, for nearly billion. This isn't your typical business deal. It's a collaboration between Sam Altman, who leads OpenAI, and the designer responsible for some of Apple's most iconic products, including the iPhone and Apple Watch. Together, they want to create a new generation of AI-powered devices that could completely change how we use technology. OpenAI’s ChatGPT on a smartphoneWhy this deal mattersThis deal is significant for a few reasons. Jony Ive is stepping into a major creative and design role at OpenAI, bringing along his team of engineers and designers, many of whom also have Apple roots. Their mission is to build hardware that goes beyond the familiar territory of smartphones and laptops. The first product from this team is expected in 2026, and while details are still scarce, it's rumored to be a "screenless" AI companion. The idea is to develop something that's aware of its surroundings and designed to help users in ways that current devices simply can't.Apple faces a new kind of competitionApple, which has long been seen as the leader in design and innovation, suddenly finds itself in a tough spot. The company has struggled to keep up with the rapid advancements in AI, and now OpenAI is moving directly into its territory. Investors are clearly worried, as Apple's stock dropped after the news broke. Unlike previous competitors such as Google, which tried to beat Apple at its own game, OpenAI and Ive are taking a different approach. They're aiming to create a device that could make the iPhone feel outdated by focusing on AI-first experiences and moving away from traditional screens. Apple logoWhat will the new device be like?So what will this new device actually look like? While Altman and Ive are keeping most details secret, they have hinted at a family of AI devices that focus on seamless, intuitive interaction rather than screens. They want to create something that understands your context, adapts to your needs and helps you connect and create in new ways, all without requiring you to stare at a display. The device won't be a phone or a pair of glasses but something entirely new that fits into your life as naturally as a MacBook or iPhone once did. OpenAI's ambition is huge. In fact, they want to ship 100 million units faster than any company has ever done with a new product, which shows just how big their vision is.What's next for OpenAI and Apple?For OpenAI, this is the largest acquisition it has ever made and marks a serious push into consumer hardware. With Jony Ive leading design, OpenAI is betting that it can outpace Apple and define the next era of personal technology. Meanwhile, Apple is under more pressure than ever to deliver on its own AI promises and to innovate beyond the incremental updates we've seen in recent years. The competition is no longer just about who makes the best phone. Now, it's about who can redefine the relationship between people and technology in the age of AI. Artificial intelligenceKurt's key takeawaysIt's impressive to see two visionaries like Sam Altman and Jony Ive working together on something this ambitious. If their AI devices live up to expectations, we could be on the verge of a major shift in how we use and think about technology. Apple finally has a real challenger, and the next few years are sure to be interesting for anyone following the future of tech.Do you believe Apple can regain its edge in innovation, or is the future of personal tech now in the hands of new players like OpenAI? Let us know by writing us atCyberguy.com/Contact.For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.Follow Kurt on his social channels:Answers to the most-asked CyberGuy questions:New from Kurt:Copyright 2025 CyberGuy.com. All rights reserved. Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.
    #openais #65b #new #acquisition #signals
    OpenAI’s $6.5B new acquisition signals Apple’s biggest AI crisis yet
    Tech OpenAI’s B new acquisition signals Apple’s biggest AI crisis yet OpenAI, Jony Ive join forces to challenge Apple’s AI future Published May 26, 2025 8:00am EDT close OpenAI chief urges US to maintain 'lead' in AI developments: 'Critically important' OpenAI CEO Sam Altman sits down with Shannon Bream to discuss the positives and potential negatives of artificial intelligence and the importance of maintaining a lead in the AI industry over China. OpenAI has just made a move that's turning heads across the tech world. The company is acquiring io, the AI device startup founded by Jony Ive, for nearly billion. This isn't your typical business deal. It's a collaboration between Sam Altman, who leads OpenAI, and the designer responsible for some of Apple's most iconic products, including the iPhone and Apple Watch. Together, they want to create a new generation of AI-powered devices that could completely change how we use technology. OpenAI’s ChatGPT on a smartphoneWhy this deal mattersThis deal is significant for a few reasons. Jony Ive is stepping into a major creative and design role at OpenAI, bringing along his team of engineers and designers, many of whom also have Apple roots. Their mission is to build hardware that goes beyond the familiar territory of smartphones and laptops. The first product from this team is expected in 2026, and while details are still scarce, it's rumored to be a "screenless" AI companion. The idea is to develop something that's aware of its surroundings and designed to help users in ways that current devices simply can't.Apple faces a new kind of competitionApple, which has long been seen as the leader in design and innovation, suddenly finds itself in a tough spot. The company has struggled to keep up with the rapid advancements in AI, and now OpenAI is moving directly into its territory. Investors are clearly worried, as Apple's stock dropped after the news broke. Unlike previous competitors such as Google, which tried to beat Apple at its own game, OpenAI and Ive are taking a different approach. They're aiming to create a device that could make the iPhone feel outdated by focusing on AI-first experiences and moving away from traditional screens. Apple logoWhat will the new device be like?So what will this new device actually look like? While Altman and Ive are keeping most details secret, they have hinted at a family of AI devices that focus on seamless, intuitive interaction rather than screens. They want to create something that understands your context, adapts to your needs and helps you connect and create in new ways, all without requiring you to stare at a display. The device won't be a phone or a pair of glasses but something entirely new that fits into your life as naturally as a MacBook or iPhone once did. OpenAI's ambition is huge. In fact, they want to ship 100 million units faster than any company has ever done with a new product, which shows just how big their vision is.What's next for OpenAI and Apple?For OpenAI, this is the largest acquisition it has ever made and marks a serious push into consumer hardware. With Jony Ive leading design, OpenAI is betting that it can outpace Apple and define the next era of personal technology. Meanwhile, Apple is under more pressure than ever to deliver on its own AI promises and to innovate beyond the incremental updates we've seen in recent years. The competition is no longer just about who makes the best phone. Now, it's about who can redefine the relationship between people and technology in the age of AI. Artificial intelligenceKurt's key takeawaysIt's impressive to see two visionaries like Sam Altman and Jony Ive working together on something this ambitious. If their AI devices live up to expectations, we could be on the verge of a major shift in how we use and think about technology. Apple finally has a real challenger, and the next few years are sure to be interesting for anyone following the future of tech.Do you believe Apple can regain its edge in innovation, or is the future of personal tech now in the hands of new players like OpenAI? Let us know by writing us atCyberguy.com/Contact.For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.Follow Kurt on his social channels:Answers to the most-asked CyberGuy questions:New from Kurt:Copyright 2025 CyberGuy.com. All rights reserved. Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com. #openais #65b #new #acquisition #signals
    OpenAI’s $6.5B new acquisition signals Apple’s biggest AI crisis yet
    www.foxnews.com
    Tech OpenAI’s $6.5B new acquisition signals Apple’s biggest AI crisis yet OpenAI, Jony Ive join forces to challenge Apple’s AI future Published May 26, 2025 8:00am EDT close OpenAI chief urges US to maintain 'lead' in AI developments: 'Critically important' OpenAI CEO Sam Altman sits down with Shannon Bream to discuss the positives and potential negatives of artificial intelligence and the importance of maintaining a lead in the AI industry over China. OpenAI has just made a move that's turning heads across the tech world. The company is acquiring io, the AI device startup founded by Jony Ive, for nearly $6.5 billion. This isn't your typical business deal. It's a collaboration between Sam Altman, who leads OpenAI, and the designer responsible for some of Apple's most iconic products, including the iPhone and Apple Watch. Together, they want to create a new generation of AI-powered devices that could completely change how we use technology. OpenAI’s ChatGPT on a smartphone (Kurt "CyberGuy" Knutsson)Why this deal mattersThis deal is significant for a few reasons. Jony Ive is stepping into a major creative and design role at OpenAI, bringing along his team of engineers and designers, many of whom also have Apple roots. Their mission is to build hardware that goes beyond the familiar territory of smartphones and laptops. The first product from this team is expected in 2026, and while details are still scarce, it's rumored to be a "screenless" AI companion. The idea is to develop something that's aware of its surroundings and designed to help users in ways that current devices simply can't.Apple faces a new kind of competitionApple, which has long been seen as the leader in design and innovation, suddenly finds itself in a tough spot. The company has struggled to keep up with the rapid advancements in AI, and now OpenAI is moving directly into its territory. Investors are clearly worried, as Apple's stock dropped after the news broke. Unlike previous competitors such as Google, which tried to beat Apple at its own game, OpenAI and Ive are taking a different approach. They're aiming to create a device that could make the iPhone feel outdated by focusing on AI-first experiences and moving away from traditional screens. Apple logo (Kurt "CyberGuy" Knutsson)What will the new device be like?So what will this new device actually look like? While Altman and Ive are keeping most details secret, they have hinted at a family of AI devices that focus on seamless, intuitive interaction rather than screens. They want to create something that understands your context, adapts to your needs and helps you connect and create in new ways, all without requiring you to stare at a display. The device won't be a phone or a pair of glasses but something entirely new that fits into your life as naturally as a MacBook or iPhone once did. OpenAI's ambition is huge. In fact, they want to ship 100 million units faster than any company has ever done with a new product, which shows just how big their vision is.What's next for OpenAI and Apple?For OpenAI, this is the largest acquisition it has ever made and marks a serious push into consumer hardware. With Jony Ive leading design, OpenAI is betting that it can outpace Apple and define the next era of personal technology. Meanwhile, Apple is under more pressure than ever to deliver on its own AI promises and to innovate beyond the incremental updates we've seen in recent years. The competition is no longer just about who makes the best phone. Now, it's about who can redefine the relationship between people and technology in the age of AI. Artificial intelligence (Kurt "CyberGuy" Knutsson)Kurt's key takeawaysIt's impressive to see two visionaries like Sam Altman and Jony Ive working together on something this ambitious. If their AI devices live up to expectations, we could be on the verge of a major shift in how we use and think about technology. Apple finally has a real challenger, and the next few years are sure to be interesting for anyone following the future of tech.Do you believe Apple can regain its edge in innovation, or is the future of personal tech now in the hands of new players like OpenAI? Let us know by writing us atCyberguy.com/Contact.For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.Follow Kurt on his social channels:Answers to the most-asked CyberGuy questions:New from Kurt:Copyright 2025 CyberGuy.com. All rights reserved. Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.
    0 Комментарии ·0 Поделились ·0 предпросмотр
  • AI for network admins

    There are few industries these days that are not touched by artificial intelligence. Networking is very much one that is touched. It is barely conceivable that any network of any reasonable size – from an office local area network or home router to a global telecoms infrastructure – could not “just” be improved by AI.
    Just take the words of Swisscom’s chief technical officer, Mark Düsener, about his company’s partnership with Cisco-owned Outshift to deploy agentic AI – of which more later – through his organisation. “The goal of getting into an agentic AI world, operating networks and connectivity is all about reducing the impact of service changes, reducing the risk of downtime and costs – therefore levelling up our customer experience.” 
    In other words, the implementation of AI results in operational efficiencies, increased reliability and user benefits. Seems simple, yes? But as we know, nothing in life is simple, and to guarantee such gains, AI can’t be “just” switched on. And perhaps most importantly, the benefits of AI in networking can’t be realised fully without considering networking for AI.

    It seems logical that any investigation of AI and networking – or indeed, AI and anything – should start with Nvidia, a company that has played a pivotal role in developing the AI tech ecosystem, and is set to do so further.
    Speaking in 2024 at a tech conference about how AI has established itself as an intrinsic part of business, Nvidia founder and CEO Jensen Huang observed that the era of generative AIis here and that enterprises must engage with “the single most consequential technology in history”. He told the audience that what was happening was the greatest fundamental computing platform transformation in 60 years, encompassing general-purpose computing to accelerated computing. 
    “We’re sitting on a mountain of data. All of us. We’ve been collecting it in our businesses for a long time. But until now, we haven’t had the ability to refine that, then discover insight and codify it automatically into our company’s natural experience, our digital intelligence. Every company is going to be an intelligence manufacturer. Every company is built on domain-specific intelligence. For the very first time, we can now digitise that intelligence and turn it into our AI – the corporate AI,” he said.
    “AI is a lifecycle that lives forever. What we are looking to do is turn our corporate intelligence into digital intelligence. Once we do that, we connect our data and our AI flywheel so that we collect more data, harvest more insight and create better intelligence. This allows us to provide better services or to be more productive, run faster, be more efficient and do things at a larger scale.” 
    Concluding his keynote, Huang stressed that enterprises must now engage with the “single most consequential technology in history” to translate and condense a company’s intelligence into digital intelligence.
    This is precisely what Swisscom is aiming to achieve. The company is Switzerland’s largest telecoms provider with more than six million mobile customers and 10,000 mobile antenna sites that have to be managed effectively. When its network engineers make changes to the infrastructure, they face a common challenge: how to update systems that serve millions of customers without disrupting the service.
    The solution was partnering with Outshift to develop practical applications of AI agents in network operations to “redefine” customer experiences. That is, using Outshift’s Internet of Agents to deliver meaningful results for the telco, while also meeting customer needs through AI innovation.
    But these advantages are not the preserve of large enterprises such as telcos. Indeed, from a networking perspective, AI can enable small- and medium-sized businesses to gain access to enterprise-level technology that can allow them to focus on growth and eliminate the costs and infrastructure challenges that arise when managing complex IT infrastructures. 

    From a broader perspective, Swisscom and Outshift have also shown that making AI work effectively requires something new: an infrastructure that lets businesses communicate and work together securely. And this is where the two sides of AI and networking come into play.
    At the event where Nvidia’s Huang outlined his vision, David Hughes, chief product officer of HPE Aruba Networking, said there were pressing issues about the use of AI in enterprise networks, in particular around harnessing the benefits that GenAI can offer. Regarding “AI for networking” and “networking for AI”, Hughes suggested there are subtle but fundamental differences between the two. 
    “AI for networking is where we spend time from an engineering and data science point of view. It’s really abouthow we use AI technology to turn IT admins into super-admins so that they can handle their escalating workloads independent of GenAI, which is kind of a load on top of everything else, such as escalating cyber threats and concerns about privacy. The business is asking IT to do new things, deploy new apps all the time, but they’rethe same number of people,” he observed. 

    What we are starting to see, and expect more of, is AI computing increasingly taking place at the edge to eliminate the distance between the prompt and the process

    Bastien Aerni, GTT

    “Networking for AI is about building out, first and foremost, the kind of switching infrastructure that’s needed to interconnect GPUclusters. And then a little bit beyond that, thinking about the impact of collecting telemetry on a network and the changes in the way people might want to build out their network.” 
    And impact there is. A lot of firms currently investigating AI within their businesses find themselves asking how to manage the mass adoption of AI in relation to networking and data flows, such as the kind of bandwidth and capacity required to facilitate AI-generated output such as text, image and video content.
    This, says Bastien Aerni, vice-president of strategy and technology adoption at global networking and security-as-a-service firm GTT, is causing companies to rethink the speed and scale of their networking needs. 
    “To achieve the return on investment of AI initiatives, they have to be able to secure and process large amounts of data quickly, and to this end, their network architecture must be configured to support this kind of workload. Utilising a platform embedded in a Tier 1 IPbackbone here ensures low latency, high bandwidth and direct internet access globally,” he remarks.  
    “What we are starting to see, and expect more of, is AI computing increasingly taking place at the edge to eliminate the distance between the prompt and the process. Leveraging software-defined wide area networkservices built in the right platform to efficiently route AI data traffic can reduce latency and security risk, and provide more control over data.” 

    At the end of 2023, BT revealed that its networks had come under huge strain after the simultaneous online broadcast of six Premier League football matches and downloads of popular games, with the update of Call of Duty Modern Warfare particularly cited. AI promises to add to this headache. 
    Speaking at Mobile World Congress 2025, BT Business chief technology officerColin Bannon said that in the new, reshaped world of work, a robust and reliable network is a fundamental prerequisite for AI to work, and that it requires effort to stay relevant to meet ongoing challenges faced by the customers BT serves, mainly international business, governments and multinationals. The bottom line is that network performance to support the AI-enabled world is crucial in a world where “slow is the new down”. 
    Bannon added that Global Fabric, BT’s network-as-a-service product, was constructed before AI “blew up” and that BT was thinking of how to deal with a hyper-distributed set of workloads on a network and to be able to make it fully programmable.
    Looking at the challenges ahead and how the new network will resolve them, he said: “just makes distributed and more complex workflows even bigger, which makes the need for a fabric-type network even more important. You need a network that canburst, and that is programmable, and that you canbandwidth on demand as well. All of this programmabilityhave never had before. I would argue that the network is the computer, and the network is a prerequisite for AI to work.” 
    The result would be constructing enterprise networks that can cope with the massive strain placed on utilisation from AI, especially in terms of what is needed for training models. Bannon said there were three key network challenges and conditions to deal with AI: training requirements, inference requirements and general requirements.  
    He stated that the dynamic nature of AI workloads means networks need to be scalable and agile, with visibility tools that offer real-time monitoring, issue detection and troubleshooting. As regards specific training requirements, dealing with AI necessitates the movement of large datasets across the network, thus demanding high-bandwidth networks.
    He also described “elephant” flows of data – that is, continuous transmission over time and training over days. He warned that network inconsistencies could affect the accuracy and training time of AI models, and that tail latency could impact job completion time significantly. This means robust congestion management is needed to detect potential congestion and redistribute network traffic. 
    But AI training models generally spell network trouble. And now the conversation is turning from the use of generic large language modelsto application/industry-dedicated small language models.

    articles about AI for networking

    How network engineers can prepare for the future with AI: The rapid rise of AI has left some professionals feeling unprepared. GenAI is beneficial to networks, but engineers must have the proper tools to adapt to this new change.
    Cisco Live EMEA – network supplier tightens AI embrace: At its annual EMEA show, Cisco tech leaders unveiled a raft of new products, services and features designed to help customers do more with artificial intelligence.

    NTT Data has created and deployed a small language model called Tsuzumi, described as an ultra-lightweight model designed to reduce learning and inference costs. According to NTT’s UK and Ireland CTO, Tom Winstanley, the reason for developing this model has principally been to support edge use cases.
    “literally deployment at the edge of the network to avoid flooding of the network, also addressing privacy concerns, also addressing sustainability concerns around some of these very large language models being very specific in creating domain context,” he says.  
    “Examples of that can be used in video analytics, media analytics, and in capturing conversations in real time, but locally, and not deploying it out to flood the network. That said, the flip side of this was there was immense power sitting in some of these central hyper-scale models and capacities, and you also therefore need to find out morewhat’s the right network background, and what’s the right balance of your network infrastructure. For example, if you want to do real-time media streaming from aand do all of the edits on-site, or remotely so not to have to deployto every single location, then you need a different backbone, too.” 
    Winstanley notes that his company is part of a wider group that in media use cases could offer hyper-directional sound systems supported by AI. “This is looking like a really interesting area of technology that is relevant for supporter experience in a stadium – dampening, sound targeting. And then we’re back to the connection to the edge of the AI story. And that’s exciting for us. That is the frontier.” 
    But coming back from the frontier of technology to bread-and-butter business operations, even if the IT and comms community is confident that it can address any technological issues that arise regarding AI and networking, businesses themselves may not be so sure. 

    Research published by managed network-as-a-service provider Expereo in April 2025 revealed that despite 88% of UK business leaders regarding AI as becoming important to fulfilling business priorities in the next 12 months, there are a number of major roadblocks to AI plans by UK businesses. These include from employees and unreasonable demands, as well as poor existing infrastructure.  
    Worryingly, among the key findings of Expereo’s Enterprise horizons 2025 study was the general feeling from a lot of UK technology leaders that expectations within their organisation of what AI can do are growing faster than their ability to meet them. While 47% of UK organisations noted that their network/connectivity infrastructure was not ready to support new technology initiatives, such as AI, in general, a further 49% reported that their network performance was preventing or limiting their ability to support large data and AI projects. 
    Assessing the key trends revealed in the study, Expereo CEO Ben Elms says that as global businesses embrace AI to transform employee and customer experience, setting realistic goals and aligning expectations will be critical to ensuring that AI delivers long-term value, rather than being viewed as a quick fix.
    “While the potential of AI is immense, its successful integration requires careful planning. Technology leaders must recognise the need for robust networks and connectivity infrastructure to support AI at scale, while also ensuring consistent performance across these networks,” he says. 
    Summing up the state of the industry, Elms states that business is currently at a pivotal moment where strategic investments in technology and IT infrastructure are necessary to meet both current and future demands. In short, reflecting Düsener’s point about Swisscom’s aim to reduce the impact of service changes, reduce the risk of downtime and costs, and improve customer services.
    Just switching on any AI system and believing that any answer is “out there” just won’t do. Your network could very well tell you otherwise. 

    Through its core Catia platform and its SolidWorks subsidiary, engineering software company Dassault Systèmes sees artificial intelligenceas now fundamental to its design and manufacturing work in virtually all production industries.
    Speaking to Computer Weekly in February 2025, the company’s senior vice-president, Gian Paolo Bassi, said the conversation of its sector has evolved from Industry 4.0, which was focused on automation, productivity and innovation without taking into account the effect of technological changes in society.  
    “The industry has decided that it’s time for an evolution,” he said. “It’s called Industry 5.0. At the intersection of the experience economy, there is a new, compelling necessity to be sustainable, to create a circular economy. So then, at the intersection,the generativeeconomy.”
    Yet in aiming to generate gains in sustainability through Industry 5.0, there is a danger that the increased use of AI could potentially see increased power usage, as well as the need to invest in much more robust and responsive connected network infrastructure to support the rise in AI-based workloads. 
    Dassault first revealed it was working with generative AI design principles in 2024. As the practice has evolved, Bassi said it now captures two fundamental concepts. The first is the ability of AI to create new and original content based on language models that comprise details of processes, business models, designs of parts assemblies, specifications and manufacturing practices. These models, he stressed, would not be traditional, generic, compute-intensive models such as ChatGPT. Instead, they would be vertical, industry-specific, and trained on engineering content and technical documentation. 
    “We can now build large models of everything, which is a virtual twin, and we can get to a level of sophistication where new ideas can come in, be tested, and much more knowledge can be put into the innovation process. This is a tipping point,” he remarked. “It’s not a technological change. It’s a technological expansion – a very important one – because we are going to improve, to increase our portfolio with AI agents, with virtual companions and also content, because generative AI can generate content, and can generate, more importantly, know-how and knowledge that can be put to use by our customers immediately.”
    This tipping point means the software provider can bring knowledge and know-how to a new level because, in Bassi’s belief, this is what AI is best at: exploiting the large models of industrial practices. And with the most important benefit of addressing customer needs as the capabilities of AI are translated into the industrial world, offering a pathway for engineers to save precious time in research and spend more time on being creative in design, without massive, network-intensive models.
    “Right now, there is this rush to create larger and more comprehensive models. However, it maybe a temporary limitation of the technology,” Bassi suggested. “In fact, it is indeed possible that you don’t need the huge models to do specific tasks.” 
    #network #admins
    AI for network admins
    There are few industries these days that are not touched by artificial intelligence. Networking is very much one that is touched. It is barely conceivable that any network of any reasonable size – from an office local area network or home router to a global telecoms infrastructure – could not “just” be improved by AI. Just take the words of Swisscom’s chief technical officer, Mark Düsener, about his company’s partnership with Cisco-owned Outshift to deploy agentic AI – of which more later – through his organisation. “The goal of getting into an agentic AI world, operating networks and connectivity is all about reducing the impact of service changes, reducing the risk of downtime and costs – therefore levelling up our customer experience.”  In other words, the implementation of AI results in operational efficiencies, increased reliability and user benefits. Seems simple, yes? But as we know, nothing in life is simple, and to guarantee such gains, AI can’t be “just” switched on. And perhaps most importantly, the benefits of AI in networking can’t be realised fully without considering networking for AI. It seems logical that any investigation of AI and networking – or indeed, AI and anything – should start with Nvidia, a company that has played a pivotal role in developing the AI tech ecosystem, and is set to do so further. Speaking in 2024 at a tech conference about how AI has established itself as an intrinsic part of business, Nvidia founder and CEO Jensen Huang observed that the era of generative AIis here and that enterprises must engage with “the single most consequential technology in history”. He told the audience that what was happening was the greatest fundamental computing platform transformation in 60 years, encompassing general-purpose computing to accelerated computing.  “We’re sitting on a mountain of data. All of us. We’ve been collecting it in our businesses for a long time. But until now, we haven’t had the ability to refine that, then discover insight and codify it automatically into our company’s natural experience, our digital intelligence. Every company is going to be an intelligence manufacturer. Every company is built on domain-specific intelligence. For the very first time, we can now digitise that intelligence and turn it into our AI – the corporate AI,” he said. “AI is a lifecycle that lives forever. What we are looking to do is turn our corporate intelligence into digital intelligence. Once we do that, we connect our data and our AI flywheel so that we collect more data, harvest more insight and create better intelligence. This allows us to provide better services or to be more productive, run faster, be more efficient and do things at a larger scale.”  Concluding his keynote, Huang stressed that enterprises must now engage with the “single most consequential technology in history” to translate and condense a company’s intelligence into digital intelligence. This is precisely what Swisscom is aiming to achieve. The company is Switzerland’s largest telecoms provider with more than six million mobile customers and 10,000 mobile antenna sites that have to be managed effectively. When its network engineers make changes to the infrastructure, they face a common challenge: how to update systems that serve millions of customers without disrupting the service. The solution was partnering with Outshift to develop practical applications of AI agents in network operations to “redefine” customer experiences. That is, using Outshift’s Internet of Agents to deliver meaningful results for the telco, while also meeting customer needs through AI innovation. But these advantages are not the preserve of large enterprises such as telcos. Indeed, from a networking perspective, AI can enable small- and medium-sized businesses to gain access to enterprise-level technology that can allow them to focus on growth and eliminate the costs and infrastructure challenges that arise when managing complex IT infrastructures.  From a broader perspective, Swisscom and Outshift have also shown that making AI work effectively requires something new: an infrastructure that lets businesses communicate and work together securely. And this is where the two sides of AI and networking come into play. At the event where Nvidia’s Huang outlined his vision, David Hughes, chief product officer of HPE Aruba Networking, said there were pressing issues about the use of AI in enterprise networks, in particular around harnessing the benefits that GenAI can offer. Regarding “AI for networking” and “networking for AI”, Hughes suggested there are subtle but fundamental differences between the two.  “AI for networking is where we spend time from an engineering and data science point of view. It’s really abouthow we use AI technology to turn IT admins into super-admins so that they can handle their escalating workloads independent of GenAI, which is kind of a load on top of everything else, such as escalating cyber threats and concerns about privacy. The business is asking IT to do new things, deploy new apps all the time, but they’rethe same number of people,” he observed.  What we are starting to see, and expect more of, is AI computing increasingly taking place at the edge to eliminate the distance between the prompt and the process Bastien Aerni, GTT “Networking for AI is about building out, first and foremost, the kind of switching infrastructure that’s needed to interconnect GPUclusters. And then a little bit beyond that, thinking about the impact of collecting telemetry on a network and the changes in the way people might want to build out their network.”  And impact there is. A lot of firms currently investigating AI within their businesses find themselves asking how to manage the mass adoption of AI in relation to networking and data flows, such as the kind of bandwidth and capacity required to facilitate AI-generated output such as text, image and video content. This, says Bastien Aerni, vice-president of strategy and technology adoption at global networking and security-as-a-service firm GTT, is causing companies to rethink the speed and scale of their networking needs.  “To achieve the return on investment of AI initiatives, they have to be able to secure and process large amounts of data quickly, and to this end, their network architecture must be configured to support this kind of workload. Utilising a platform embedded in a Tier 1 IPbackbone here ensures low latency, high bandwidth and direct internet access globally,” he remarks.   “What we are starting to see, and expect more of, is AI computing increasingly taking place at the edge to eliminate the distance between the prompt and the process. Leveraging software-defined wide area networkservices built in the right platform to efficiently route AI data traffic can reduce latency and security risk, and provide more control over data.”  At the end of 2023, BT revealed that its networks had come under huge strain after the simultaneous online broadcast of six Premier League football matches and downloads of popular games, with the update of Call of Duty Modern Warfare particularly cited. AI promises to add to this headache.  Speaking at Mobile World Congress 2025, BT Business chief technology officerColin Bannon said that in the new, reshaped world of work, a robust and reliable network is a fundamental prerequisite for AI to work, and that it requires effort to stay relevant to meet ongoing challenges faced by the customers BT serves, mainly international business, governments and multinationals. The bottom line is that network performance to support the AI-enabled world is crucial in a world where “slow is the new down”.  Bannon added that Global Fabric, BT’s network-as-a-service product, was constructed before AI “blew up” and that BT was thinking of how to deal with a hyper-distributed set of workloads on a network and to be able to make it fully programmable. Looking at the challenges ahead and how the new network will resolve them, he said: “just makes distributed and more complex workflows even bigger, which makes the need for a fabric-type network even more important. You need a network that canburst, and that is programmable, and that you canbandwidth on demand as well. All of this programmabilityhave never had before. I would argue that the network is the computer, and the network is a prerequisite for AI to work.”  The result would be constructing enterprise networks that can cope with the massive strain placed on utilisation from AI, especially in terms of what is needed for training models. Bannon said there were three key network challenges and conditions to deal with AI: training requirements, inference requirements and general requirements.   He stated that the dynamic nature of AI workloads means networks need to be scalable and agile, with visibility tools that offer real-time monitoring, issue detection and troubleshooting. As regards specific training requirements, dealing with AI necessitates the movement of large datasets across the network, thus demanding high-bandwidth networks. He also described “elephant” flows of data – that is, continuous transmission over time and training over days. He warned that network inconsistencies could affect the accuracy and training time of AI models, and that tail latency could impact job completion time significantly. This means robust congestion management is needed to detect potential congestion and redistribute network traffic.  But AI training models generally spell network trouble. And now the conversation is turning from the use of generic large language modelsto application/industry-dedicated small language models. articles about AI for networking How network engineers can prepare for the future with AI: The rapid rise of AI has left some professionals feeling unprepared. GenAI is beneficial to networks, but engineers must have the proper tools to adapt to this new change. Cisco Live EMEA – network supplier tightens AI embrace: At its annual EMEA show, Cisco tech leaders unveiled a raft of new products, services and features designed to help customers do more with artificial intelligence. NTT Data has created and deployed a small language model called Tsuzumi, described as an ultra-lightweight model designed to reduce learning and inference costs. According to NTT’s UK and Ireland CTO, Tom Winstanley, the reason for developing this model has principally been to support edge use cases. “literally deployment at the edge of the network to avoid flooding of the network, also addressing privacy concerns, also addressing sustainability concerns around some of these very large language models being very specific in creating domain context,” he says.   “Examples of that can be used in video analytics, media analytics, and in capturing conversations in real time, but locally, and not deploying it out to flood the network. That said, the flip side of this was there was immense power sitting in some of these central hyper-scale models and capacities, and you also therefore need to find out morewhat’s the right network background, and what’s the right balance of your network infrastructure. For example, if you want to do real-time media streaming from aand do all of the edits on-site, or remotely so not to have to deployto every single location, then you need a different backbone, too.”  Winstanley notes that his company is part of a wider group that in media use cases could offer hyper-directional sound systems supported by AI. “This is looking like a really interesting area of technology that is relevant for supporter experience in a stadium – dampening, sound targeting. And then we’re back to the connection to the edge of the AI story. And that’s exciting for us. That is the frontier.”  But coming back from the frontier of technology to bread-and-butter business operations, even if the IT and comms community is confident that it can address any technological issues that arise regarding AI and networking, businesses themselves may not be so sure.  Research published by managed network-as-a-service provider Expereo in April 2025 revealed that despite 88% of UK business leaders regarding AI as becoming important to fulfilling business priorities in the next 12 months, there are a number of major roadblocks to AI plans by UK businesses. These include from employees and unreasonable demands, as well as poor existing infrastructure.   Worryingly, among the key findings of Expereo’s Enterprise horizons 2025 study was the general feeling from a lot of UK technology leaders that expectations within their organisation of what AI can do are growing faster than their ability to meet them. While 47% of UK organisations noted that their network/connectivity infrastructure was not ready to support new technology initiatives, such as AI, in general, a further 49% reported that their network performance was preventing or limiting their ability to support large data and AI projects.  Assessing the key trends revealed in the study, Expereo CEO Ben Elms says that as global businesses embrace AI to transform employee and customer experience, setting realistic goals and aligning expectations will be critical to ensuring that AI delivers long-term value, rather than being viewed as a quick fix. “While the potential of AI is immense, its successful integration requires careful planning. Technology leaders must recognise the need for robust networks and connectivity infrastructure to support AI at scale, while also ensuring consistent performance across these networks,” he says.  Summing up the state of the industry, Elms states that business is currently at a pivotal moment where strategic investments in technology and IT infrastructure are necessary to meet both current and future demands. In short, reflecting Düsener’s point about Swisscom’s aim to reduce the impact of service changes, reduce the risk of downtime and costs, and improve customer services. Just switching on any AI system and believing that any answer is “out there” just won’t do. Your network could very well tell you otherwise.  Through its core Catia platform and its SolidWorks subsidiary, engineering software company Dassault Systèmes sees artificial intelligenceas now fundamental to its design and manufacturing work in virtually all production industries. Speaking to Computer Weekly in February 2025, the company’s senior vice-president, Gian Paolo Bassi, said the conversation of its sector has evolved from Industry 4.0, which was focused on automation, productivity and innovation without taking into account the effect of technological changes in society.   “The industry has decided that it’s time for an evolution,” he said. “It’s called Industry 5.0. At the intersection of the experience economy, there is a new, compelling necessity to be sustainable, to create a circular economy. So then, at the intersection,the generativeeconomy.” Yet in aiming to generate gains in sustainability through Industry 5.0, there is a danger that the increased use of AI could potentially see increased power usage, as well as the need to invest in much more robust and responsive connected network infrastructure to support the rise in AI-based workloads.  Dassault first revealed it was working with generative AI design principles in 2024. As the practice has evolved, Bassi said it now captures two fundamental concepts. The first is the ability of AI to create new and original content based on language models that comprise details of processes, business models, designs of parts assemblies, specifications and manufacturing practices. These models, he stressed, would not be traditional, generic, compute-intensive models such as ChatGPT. Instead, they would be vertical, industry-specific, and trained on engineering content and technical documentation.  “We can now build large models of everything, which is a virtual twin, and we can get to a level of sophistication where new ideas can come in, be tested, and much more knowledge can be put into the innovation process. This is a tipping point,” he remarked. “It’s not a technological change. It’s a technological expansion – a very important one – because we are going to improve, to increase our portfolio with AI agents, with virtual companions and also content, because generative AI can generate content, and can generate, more importantly, know-how and knowledge that can be put to use by our customers immediately.” This tipping point means the software provider can bring knowledge and know-how to a new level because, in Bassi’s belief, this is what AI is best at: exploiting the large models of industrial practices. And with the most important benefit of addressing customer needs as the capabilities of AI are translated into the industrial world, offering a pathway for engineers to save precious time in research and spend more time on being creative in design, without massive, network-intensive models. “Right now, there is this rush to create larger and more comprehensive models. However, it maybe a temporary limitation of the technology,” Bassi suggested. “In fact, it is indeed possible that you don’t need the huge models to do specific tasks.”  #network #admins
    AI for network admins
    www.computerweekly.com
    There are few industries these days that are not touched by artificial intelligence (AI). Networking is very much one that is touched. It is barely conceivable that any network of any reasonable size – from an office local area network or home router to a global telecoms infrastructure – could not “just” be improved by AI. Just take the words of Swisscom’s chief technical officer, Mark Düsener, about his company’s partnership with Cisco-owned Outshift to deploy agentic AI – of which more later – through his organisation. “The goal of getting into an agentic AI world, operating networks and connectivity is all about reducing the impact of service changes, reducing the risk of downtime and costs – therefore levelling up our customer experience.”  In other words, the implementation of AI results in operational efficiencies, increased reliability and user benefits. Seems simple, yes? But as we know, nothing in life is simple, and to guarantee such gains, AI can’t be “just” switched on. And perhaps most importantly, the benefits of AI in networking can’t be realised fully without considering networking for AI. It seems logical that any investigation of AI and networking – or indeed, AI and anything – should start with Nvidia, a company that has played a pivotal role in developing the AI tech ecosystem, and is set to do so further. Speaking in 2024 at a tech conference about how AI has established itself as an intrinsic part of business, Nvidia founder and CEO Jensen Huang observed that the era of generative AI (GenAI) is here and that enterprises must engage with “the single most consequential technology in history”. He told the audience that what was happening was the greatest fundamental computing platform transformation in 60 years, encompassing general-purpose computing to accelerated computing.  “We’re sitting on a mountain of data. All of us. We’ve been collecting it in our businesses for a long time. But until now, we haven’t had the ability to refine that, then discover insight and codify it automatically into our company’s natural experience, our digital intelligence. Every company is going to be an intelligence manufacturer. Every company is built on domain-specific intelligence. For the very first time, we can now digitise that intelligence and turn it into our AI – the corporate AI,” he said. “AI is a lifecycle that lives forever. What we are looking to do is turn our corporate intelligence into digital intelligence. Once we do that, we connect our data and our AI flywheel so that we collect more data, harvest more insight and create better intelligence. This allows us to provide better services or to be more productive, run faster, be more efficient and do things at a larger scale.”  Concluding his keynote, Huang stressed that enterprises must now engage with the “single most consequential technology in history” to translate and condense a company’s intelligence into digital intelligence. This is precisely what Swisscom is aiming to achieve. The company is Switzerland’s largest telecoms provider with more than six million mobile customers and 10,000 mobile antenna sites that have to be managed effectively. When its network engineers make changes to the infrastructure, they face a common challenge: how to update systems that serve millions of customers without disrupting the service. The solution was partnering with Outshift to develop practical applications of AI agents in network operations to “redefine” customer experiences. That is, using Outshift’s Internet of Agents to deliver meaningful results for the telco, while also meeting customer needs through AI innovation. But these advantages are not the preserve of large enterprises such as telcos. Indeed, from a networking perspective, AI can enable small- and medium-sized businesses to gain access to enterprise-level technology that can allow them to focus on growth and eliminate the costs and infrastructure challenges that arise when managing complex IT infrastructures.  From a broader perspective, Swisscom and Outshift have also shown that making AI work effectively requires something new: an infrastructure that lets businesses communicate and work together securely. And this is where the two sides of AI and networking come into play. At the event where Nvidia’s Huang outlined his vision, David Hughes, chief product officer of HPE Aruba Networking, said there were pressing issues about the use of AI in enterprise networks, in particular around harnessing the benefits that GenAI can offer. Regarding “AI for networking” and “networking for AI”, Hughes suggested there are subtle but fundamental differences between the two.  “AI for networking is where we spend time from an engineering and data science point of view. It’s really about [questioning] how we use AI technology to turn IT admins into super-admins so that they can handle their escalating workloads independent of GenAI, which is kind of a load on top of everything else, such as escalating cyber threats and concerns about privacy. The business is asking IT to do new things, deploy new apps all the time, but they’re [asking this of] the same number of people,” he observed.  What we are starting to see, and expect more of, is AI computing increasingly taking place at the edge to eliminate the distance between the prompt and the process Bastien Aerni, GTT “Networking for AI is about building out, first and foremost, the kind of switching infrastructure that’s needed to interconnect GPU [graphics processing unit] clusters. And then a little bit beyond that, thinking about the impact of collecting telemetry on a network and the changes in the way people might want to build out their network.”  And impact there is. A lot of firms currently investigating AI within their businesses find themselves asking how to manage the mass adoption of AI in relation to networking and data flows, such as the kind of bandwidth and capacity required to facilitate AI-generated output such as text, image and video content. This, says Bastien Aerni, vice-president of strategy and technology adoption at global networking and security-as-a-service firm GTT, is causing companies to rethink the speed and scale of their networking needs.  “To achieve the return on investment of AI initiatives, they have to be able to secure and process large amounts of data quickly, and to this end, their network architecture must be configured to support this kind of workload. Utilising a platform embedded in a Tier 1 IP [internet protocol] backbone here ensures low latency, high bandwidth and direct internet access globally,” he remarks.   “What we are starting to see, and expect more of, is AI computing increasingly taking place at the edge to eliminate the distance between the prompt and the process. Leveraging software-defined wide area network [SD-WAN] services built in the right platform to efficiently route AI data traffic can reduce latency and security risk, and provide more control over data.”  At the end of 2023, BT revealed that its networks had come under huge strain after the simultaneous online broadcast of six Premier League football matches and downloads of popular games, with the update of Call of Duty Modern Warfare particularly cited. AI promises to add to this headache.  Speaking at Mobile World Congress 2025, BT Business chief technology officer (CTO) Colin Bannon said that in the new, reshaped world of work, a robust and reliable network is a fundamental prerequisite for AI to work, and that it requires effort to stay relevant to meet ongoing challenges faced by the customers BT serves, mainly international business, governments and multinationals. The bottom line is that network performance to support the AI-enabled world is crucial in a world where “slow is the new down”.  Bannon added that Global Fabric, BT’s network-as-a-service product, was constructed before AI “blew up” and that BT was thinking of how to deal with a hyper-distributed set of workloads on a network and to be able to make it fully programmable. Looking at the challenges ahead and how the new network will resolve them, he said: “[AI] just makes distributed and more complex workflows even bigger, which makes the need for a fabric-type network even more important. You need a network that can [handle data] burst, and that is programmable, and that you can [control] bandwidth on demand as well. All of this programmability [is something businesses] have never had before. I would argue that the network is the computer, and the network is a prerequisite for AI to work.”  The result would be constructing enterprise networks that can cope with the massive strain placed on utilisation from AI, especially in terms of what is needed for training models. Bannon said there were three key network challenges and conditions to deal with AI: training requirements, inference requirements and general requirements.   He stated that the dynamic nature of AI workloads means networks need to be scalable and agile, with visibility tools that offer real-time monitoring, issue detection and troubleshooting. As regards specific training requirements, dealing with AI necessitates the movement of large datasets across the network, thus demanding high-bandwidth networks. He also described “elephant” flows of data – that is, continuous transmission over time and training over days. He warned that network inconsistencies could affect the accuracy and training time of AI models, and that tail latency could impact job completion time significantly. This means robust congestion management is needed to detect potential congestion and redistribute network traffic.  But AI training models generally spell network trouble. And now the conversation is turning from the use of generic large language models (see Preparing networks for Industry 5.0 box) to application/industry-dedicated small language models. Read more articles about AI for networking How network engineers can prepare for the future with AI: The rapid rise of AI has left some professionals feeling unprepared. GenAI is beneficial to networks, but engineers must have the proper tools to adapt to this new change. Cisco Live EMEA – network supplier tightens AI embrace: At its annual EMEA show, Cisco tech leaders unveiled a raft of new products, services and features designed to help customers do more with artificial intelligence. NTT Data has created and deployed a small language model called Tsuzumi, described as an ultra-lightweight model designed to reduce learning and inference costs. According to NTT’s UK and Ireland CTO, Tom Winstanley, the reason for developing this model has principally been to support edge use cases. “[That is] literally deployment at the edge of the network to avoid flooding of the network, also addressing privacy concerns, also addressing sustainability concerns around some of these very large language models being very specific in creating domain context,” he says.   “Examples of that can be used in video analytics, media analytics, and in capturing conversations in real time, but locally, and not deploying it out to flood the network. That said, the flip side of this was there was immense power sitting in some of these central hyper-scale models and capacities, and you also therefore need to find out more [about] what’s the right network background, and what’s the right balance of your network infrastructure. For example, if you want to do real-time media streaming from a [sports stadium] and do all of the edits on-site, or remotely so not to have to deploy [facilities] to every single location, then you need a different backbone, too.”  Winstanley notes that his company is part of a wider group that in media use cases could offer hyper-directional sound systems supported by AI. “This is looking like a really interesting area of technology that is relevant for supporter experience in a stadium – dampening, sound targeting. And then we’re back to the connection to the edge of the AI story. And that’s exciting for us. That is the frontier.”  But coming back from the frontier of technology to bread-and-butter business operations, even if the IT and comms community is confident that it can address any technological issues that arise regarding AI and networking, businesses themselves may not be so sure.  Research published by managed network-as-a-service provider Expereo in April 2025 revealed that despite 88% of UK business leaders regarding AI as becoming important to fulfilling business priorities in the next 12 months, there are a number of major roadblocks to AI plans by UK businesses. These include from employees and unreasonable demands, as well as poor existing infrastructure.   Worryingly, among the key findings of Expereo’s Enterprise horizons 2025 study was the general feeling from a lot of UK technology leaders that expectations within their organisation of what AI can do are growing faster than their ability to meet them. While 47% of UK organisations noted that their network/connectivity infrastructure was not ready to support new technology initiatives, such as AI, in general, a further 49% reported that their network performance was preventing or limiting their ability to support large data and AI projects.  Assessing the key trends revealed in the study, Expereo CEO Ben Elms says that as global businesses embrace AI to transform employee and customer experience, setting realistic goals and aligning expectations will be critical to ensuring that AI delivers long-term value, rather than being viewed as a quick fix. “While the potential of AI is immense, its successful integration requires careful planning. Technology leaders must recognise the need for robust networks and connectivity infrastructure to support AI at scale, while also ensuring consistent performance across these networks,” he says.  Summing up the state of the industry, Elms states that business is currently at a pivotal moment where strategic investments in technology and IT infrastructure are necessary to meet both current and future demands. In short, reflecting Düsener’s point about Swisscom’s aim to reduce the impact of service changes, reduce the risk of downtime and costs, and improve customer services. Just switching on any AI system and believing that any answer is “out there” just won’t do. Your network could very well tell you otherwise.  Through its core Catia platform and its SolidWorks subsidiary, engineering software company Dassault Systèmes sees artificial intelligence (AI) as now fundamental to its design and manufacturing work in virtually all production industries. Speaking to Computer Weekly in February 2025, the company’s senior vice-president, Gian Paolo Bassi, said the conversation of its sector has evolved from Industry 4.0, which was focused on automation, productivity and innovation without taking into account the effect of technological changes in society.   “The industry has decided that it’s time for an evolution,” he said. “It’s called Industry 5.0. At the intersection of the experience economy, there is a new, compelling necessity to be sustainable, to create a circular economy. So then, at the intersection, [we have] the generative [AI] economy.” Yet in aiming to generate gains in sustainability through Industry 5.0, there is a danger that the increased use of AI could potentially see increased power usage, as well as the need to invest in much more robust and responsive connected network infrastructure to support the rise in AI-based workloads.  Dassault first revealed it was working with generative AI design principles in 2024. As the practice has evolved, Bassi said it now captures two fundamental concepts. The first is the ability of AI to create new and original content based on language models that comprise details of processes, business models, designs of parts assemblies, specifications and manufacturing practices. These models, he stressed, would not be traditional, generic, compute-intensive models such as ChatGPT. Instead, they would be vertical, industry-specific, and trained on engineering content and technical documentation.  “We can now build large models of everything, which is a virtual twin, and we can get to a level of sophistication where new ideas can come in, be tested, and much more knowledge can be put into the innovation process. This is a tipping point,” he remarked. “It’s not a technological change. It’s a technological expansion – a very important one – because we are going to improve, to increase our portfolio with AI agents, with virtual companions and also content, because generative AI can generate content, and can generate, more importantly, know-how and knowledge that can be put to use by our customers immediately.” This tipping point means the software provider can bring knowledge and know-how to a new level because, in Bassi’s belief, this is what AI is best at: exploiting the large models of industrial practices. And with the most important benefit of addressing customer needs as the capabilities of AI are translated into the industrial world, offering a pathway for engineers to save precious time in research and spend more time on being creative in design, without massive, network-intensive models. “Right now, there is this rush to create larger and more comprehensive models. However, it may [just] be a temporary limitation of the technology,” Bassi suggested. “In fact, it is indeed possible that you don’t need the huge models to do specific tasks.” 
    0 Комментарии ·0 Поделились ·0 предпросмотр
  • I tested the viral Roborock vacuum with a mechanical arm for a month - here's my verdict

    ZDNET's key takeaways The Roborock Saros Z70 is now available for purchase for The Saros Z70 is the first robot vacuum with a mechanical arm to lift lightweight objects and clean those missed areasThis robot vacuum performs impressively well, but you can expect some bugs with the OmniGrip mechanical arm function. View now at Roborock The Roborock Saros Z70 is currently on sale at Roborock for for Memorial Day, down from the usual price of I've spent the past few years of my life turning my home into the closest version of the Jetsons' house that I can get, bypassing the midcentury decor and flying cars. While I'm pleased to report that many of the predictions made by the 1960s sitcom have materialized over the decades, many remain unrealized. The biggest one? Rosie the Robot. Also: This midrange robot vacuum cleans as well as some flagship models - and it's 50% offThankfully, many companies are rallying behind the effort to create a household assistant robot. However, after being lucky enough to test the Roborock Saros Z70 with a mechanical arm, I believe Roborock has a definite edge on the competition. While other companies have created different kinds of household robots, the Saros Z70 is a multifunctional robot that could be a stepping stone to the future of smart homes. 
    details
    View at Us.roborock The Roborock Saros Z70 is a premium robot vacuum and mop with all the bells and whistles you'd expect from a flagship, plus a mechanical arm to pick up objects. And I can't dive into a review of this product without immediately focusing on this robotic grip. When the robot vacuum is cleaning, it detects small obstacles it can handle and picks them up. The robot then navigates to a predetermined area to drop off the item. Then, the device returns to the spot the object occupied and resumes cleaning the area.  Maria Diaz/ZDNETThe Saros Z70 comes with a Roborock bin that you can place in your home for your robot to drop soft items into. It's a rigid cardboard bin that looks like a small trash bin you'd see under a desk or in a bathroom. After your robot creates a virtual map of your home, you place the bin and you add it to the map in the Roborock mobile app. You can also add a larger area for your robot to drop off other items, like slippers and light shoes. The biggest question, of course, is: does the mechanical arm work as intended? After testing it in my home, I'm pleased to report that it does -- at least the vast majority of the time.Also: I invested in this 3-in-1 robot vacuum, and it's paying off for my homeTo test the OmniGrip mechanical arm, I set out ten obstacles around the house several times and ran full cleanings. I also did smaller area cleanings with fewer objects. The robot vacuum sees the object and gives a voice prompt to announce it's going to sort an item. It deploys the mechanical arm and lines itself up to pick up the item.  The Roborock Saros Z70's OmniGrip mechanical arm can be remotely controlled to pick up and drop off items at will. Maria Diaz/ZDNETOnce the arm grips the item, the robot travels to drop it off. It lines itself up with the bin or designated sorting area and releases the object, then retracts the arm. Also: My picks for the best robot vacuums for pet hair of 2025: Roomba, Eufy, Ecovacs, and moreIn my tests, the Roborock mechanical arm picked up the intended objects 83% of the time. This is a great number for a robot that is effectively introducing this type of technology to the market. It's also a great number when you consider that the robot's initial rollout has a very limited number of items it can recognize and pick up.  Maria Diaz/ZDNETRoborock says the Saros Z70 currently recognizes socks, sandals, crumpled tissues, and towels under 300g, and that new sortable objects will be added continuously via firmware updates. When I only used the recognizable objects, the robot gripped and relocated 90% of the items. When I added other small obstacles, like shoes, small cups, and plastic film, it gripped 75% of the objects.Also: This Ecovacs robot vacuum and mop is a sleeper hit, and it handles carpeting like a champAs a robot vacuum and mop, the Roborock Saros Z70's performance is outstanding -- I have zero qualms with it. It is one of the best robot vacuum and mop combos I've ever tested. It has the best obstacle avoidance feature I've seen thus far, so it doesn't get stuck on random objects, and it has an extendable mop pad to clean near edges. The robot also cleans quite thoroughly, much like the Saros 10 and Saros 10R, so you can count on it reaching pretty much every foot of your home.  Maria Diaz/ZDNETI did encounter some bugs with the robot's OmniGrip performance, but I can't fault Roborock for them. Aside from the fact that no robot vacuum is perfect, these bugs can be attributed to the fact that this is really new technology. Some bugs included the robot only vacuuming and "forgetting" to resume mopping after dropping off an object, and dropping objects that were hard to grip, like kids' water shoes.ZDNET's buying advice Maria Diaz/ZDNETThe Roborock Saros Z70 isn't the right robot vacuum for most shoppers. Instead, this robot vacuum and mop is perfect for early adopters who enjoy testing the newest cutting-edge technologies. As the first robot vacuum with a mechanical arm to be widely available on the market, you can expect to encounter bugs with the Saros Z70 -- it's only natural.Also: This robot vacuum might be better at cleaning than me - and I'm a neat freakEven so, I was thoroughly impressed with the robot's cleaning performance and the OmniGrip technology. I was also impressed with Roborock's fast and widespread launch of this robot after announcing it late last year. The Roborock Saros Z70 is the next level in robot vacuum technology, and it's pioneering the idea of a functional, multipurpose household robot that you can truly rely on.However, it is quite expensive. The Saros Z70 will vacuum and mop like the best robot vacuums on the market. But you must be aware that you're not paying for a robot vacuum alone; you're paying for the innovation of having a future-forward robot in your home.  When will this deal expire? Deals are subject to sell out or expire at any time, though ZDNET remains committed to finding, sharing, and updating the best product deals for you to score the best savings. Our team of experts regularly checks in on the deals we share to ensure they are still live and obtainable. We're sorry if you've missed out on this deal, but don't fret -- we're constantly finding new chances to score savings and sharing them with you at ZDNET.com. 
    Show more
    Featured reviews
    #tested #viral #roborock #vacuum #with
    I tested the viral Roborock vacuum with a mechanical arm for a month - here's my verdict
    ZDNET's key takeaways The Roborock Saros Z70 is now available for purchase for The Saros Z70 is the first robot vacuum with a mechanical arm to lift lightweight objects and clean those missed areasThis robot vacuum performs impressively well, but you can expect some bugs with the OmniGrip mechanical arm function. View now at Roborock The Roborock Saros Z70 is currently on sale at Roborock for for Memorial Day, down from the usual price of I've spent the past few years of my life turning my home into the closest version of the Jetsons' house that I can get, bypassing the midcentury decor and flying cars. While I'm pleased to report that many of the predictions made by the 1960s sitcom have materialized over the decades, many remain unrealized. The biggest one? Rosie the Robot. Also: This midrange robot vacuum cleans as well as some flagship models - and it's 50% offThankfully, many companies are rallying behind the effort to create a household assistant robot. However, after being lucky enough to test the Roborock Saros Z70 with a mechanical arm, I believe Roborock has a definite edge on the competition. While other companies have created different kinds of household robots, the Saros Z70 is a multifunctional robot that could be a stepping stone to the future of smart homes.  details View at Us.roborock The Roborock Saros Z70 is a premium robot vacuum and mop with all the bells and whistles you'd expect from a flagship, plus a mechanical arm to pick up objects. And I can't dive into a review of this product without immediately focusing on this robotic grip. When the robot vacuum is cleaning, it detects small obstacles it can handle and picks them up. The robot then navigates to a predetermined area to drop off the item. Then, the device returns to the spot the object occupied and resumes cleaning the area.  Maria Diaz/ZDNETThe Saros Z70 comes with a Roborock bin that you can place in your home for your robot to drop soft items into. It's a rigid cardboard bin that looks like a small trash bin you'd see under a desk or in a bathroom. After your robot creates a virtual map of your home, you place the bin and you add it to the map in the Roborock mobile app. You can also add a larger area for your robot to drop off other items, like slippers and light shoes. The biggest question, of course, is: does the mechanical arm work as intended? After testing it in my home, I'm pleased to report that it does -- at least the vast majority of the time.Also: I invested in this 3-in-1 robot vacuum, and it's paying off for my homeTo test the OmniGrip mechanical arm, I set out ten obstacles around the house several times and ran full cleanings. I also did smaller area cleanings with fewer objects. The robot vacuum sees the object and gives a voice prompt to announce it's going to sort an item. It deploys the mechanical arm and lines itself up to pick up the item.  The Roborock Saros Z70's OmniGrip mechanical arm can be remotely controlled to pick up and drop off items at will. Maria Diaz/ZDNETOnce the arm grips the item, the robot travels to drop it off. It lines itself up with the bin or designated sorting area and releases the object, then retracts the arm. Also: My picks for the best robot vacuums for pet hair of 2025: Roomba, Eufy, Ecovacs, and moreIn my tests, the Roborock mechanical arm picked up the intended objects 83% of the time. This is a great number for a robot that is effectively introducing this type of technology to the market. It's also a great number when you consider that the robot's initial rollout has a very limited number of items it can recognize and pick up.  Maria Diaz/ZDNETRoborock says the Saros Z70 currently recognizes socks, sandals, crumpled tissues, and towels under 300g, and that new sortable objects will be added continuously via firmware updates. When I only used the recognizable objects, the robot gripped and relocated 90% of the items. When I added other small obstacles, like shoes, small cups, and plastic film, it gripped 75% of the objects.Also: This Ecovacs robot vacuum and mop is a sleeper hit, and it handles carpeting like a champAs a robot vacuum and mop, the Roborock Saros Z70's performance is outstanding -- I have zero qualms with it. It is one of the best robot vacuum and mop combos I've ever tested. It has the best obstacle avoidance feature I've seen thus far, so it doesn't get stuck on random objects, and it has an extendable mop pad to clean near edges. The robot also cleans quite thoroughly, much like the Saros 10 and Saros 10R, so you can count on it reaching pretty much every foot of your home.  Maria Diaz/ZDNETI did encounter some bugs with the robot's OmniGrip performance, but I can't fault Roborock for them. Aside from the fact that no robot vacuum is perfect, these bugs can be attributed to the fact that this is really new technology. Some bugs included the robot only vacuuming and "forgetting" to resume mopping after dropping off an object, and dropping objects that were hard to grip, like kids' water shoes.ZDNET's buying advice Maria Diaz/ZDNETThe Roborock Saros Z70 isn't the right robot vacuum for most shoppers. Instead, this robot vacuum and mop is perfect for early adopters who enjoy testing the newest cutting-edge technologies. As the first robot vacuum with a mechanical arm to be widely available on the market, you can expect to encounter bugs with the Saros Z70 -- it's only natural.Also: This robot vacuum might be better at cleaning than me - and I'm a neat freakEven so, I was thoroughly impressed with the robot's cleaning performance and the OmniGrip technology. I was also impressed with Roborock's fast and widespread launch of this robot after announcing it late last year. The Roborock Saros Z70 is the next level in robot vacuum technology, and it's pioneering the idea of a functional, multipurpose household robot that you can truly rely on.However, it is quite expensive. The Saros Z70 will vacuum and mop like the best robot vacuums on the market. But you must be aware that you're not paying for a robot vacuum alone; you're paying for the innovation of having a future-forward robot in your home.  When will this deal expire? Deals are subject to sell out or expire at any time, though ZDNET remains committed to finding, sharing, and updating the best product deals for you to score the best savings. Our team of experts regularly checks in on the deals we share to ensure they are still live and obtainable. We're sorry if you've missed out on this deal, but don't fret -- we're constantly finding new chances to score savings and sharing them with you at ZDNET.com.  Show more Featured reviews #tested #viral #roborock #vacuum #with
    I tested the viral Roborock vacuum with a mechanical arm for a month - here's my verdict
    www.zdnet.com
    ZDNET's key takeaways The Roborock Saros Z70 is now available for purchase for $2,599.The Saros Z70 is the first robot vacuum with a mechanical arm to lift lightweight objects and clean those missed areasThis robot vacuum performs impressively well, but you can expect some bugs with the OmniGrip mechanical arm function. View now at Roborock The Roborock Saros Z70 is currently on sale at Roborock for $1,999 for Memorial Day, down $600 from the usual price of $2,599.I've spent the past few years of my life turning my home into the closest version of the Jetsons' house that I can get, bypassing the midcentury decor and flying cars. While I'm pleased to report that many of the predictions made by the 1960s sitcom have materialized over the decades, many remain unrealized. The biggest one? Rosie the Robot. Also: This midrange robot vacuum cleans as well as some flagship models - and it's 50% offThankfully, many companies are rallying behind the effort to create a household assistant robot. However, after being lucky enough to test the Roborock Saros Z70 with a mechanical arm, I believe Roborock has a definite edge on the competition. While other companies have created different kinds of household robots, the Saros Z70 is a multifunctional robot that could be a stepping stone to the future of smart homes.  details View at Us.roborock The Roborock Saros Z70 is a premium robot vacuum and mop with all the bells and whistles you'd expect from a flagship, plus a mechanical arm to pick up objects. And I can't dive into a review of this product without immediately focusing on this robotic grip. When the robot vacuum is cleaning, it detects small obstacles it can handle and picks them up. The robot then navigates to a predetermined area to drop off the item. Then, the device returns to the spot the object occupied and resumes cleaning the area.  Maria Diaz/ZDNETThe Saros Z70 comes with a Roborock bin that you can place in your home for your robot to drop soft items into. It's a rigid cardboard bin that looks like a small trash bin you'd see under a desk or in a bathroom. After your robot creates a virtual map of your home, you place the bin and you add it to the map in the Roborock mobile app. You can also add a larger area for your robot to drop off other items, like slippers and light shoes. The biggest question, of course, is: does the mechanical arm work as intended? After testing it in my home, I'm pleased to report that it does -- at least the vast majority of the time.Also: I invested in this 3-in-1 robot vacuum, and it's paying off for my homeTo test the OmniGrip mechanical arm, I set out ten obstacles around the house several times and ran full cleanings. I also did smaller area cleanings with fewer objects. The robot vacuum sees the object and gives a voice prompt to announce it's going to sort an item. It deploys the mechanical arm and lines itself up to pick up the item.  The Roborock Saros Z70's OmniGrip mechanical arm can be remotely controlled to pick up and drop off items at will. Maria Diaz/ZDNETOnce the arm grips the item, the robot travels to drop it off. It lines itself up with the bin or designated sorting area and releases the object, then retracts the arm. Also: My picks for the best robot vacuums for pet hair of 2025: Roomba, Eufy, Ecovacs, and moreIn my tests, the Roborock mechanical arm picked up the intended objects 83% of the time. This is a great number for a robot that is effectively introducing this type of technology to the market. It's also a great number when you consider that the robot's initial rollout has a very limited number of items it can recognize and pick up.  Maria Diaz/ZDNETRoborock says the Saros Z70 currently recognizes socks, sandals, crumpled tissues, and towels under 300g (about eight ounces), and that new sortable objects will be added continuously via firmware updates. When I only used the recognizable objects, the robot gripped and relocated 90% of the items. When I added other small obstacles, like shoes, small cups, and plastic film, it gripped 75% of the objects.Also: This Ecovacs robot vacuum and mop is a sleeper hit, and it handles carpeting like a champAs a robot vacuum and mop, the Roborock Saros Z70's performance is outstanding -- I have zero qualms with it. It is one of the best robot vacuum and mop combos I've ever tested. It has the best obstacle avoidance feature I've seen thus far, so it doesn't get stuck on random objects, and it has an extendable mop pad to clean near edges. The robot also cleans quite thoroughly, much like the Saros 10 and Saros 10R, so you can count on it reaching pretty much every foot of your home.  Maria Diaz/ZDNETI did encounter some bugs with the robot's OmniGrip performance, but I can't fault Roborock for them. Aside from the fact that no robot vacuum is perfect (and this one nearly is), these bugs can be attributed to the fact that this is really new technology. Some bugs included the robot only vacuuming and "forgetting" to resume mopping after dropping off an object, and dropping objects that were hard to grip, like kids' water shoes.ZDNET's buying advice Maria Diaz/ZDNETThe Roborock Saros Z70 isn't the right robot vacuum for most shoppers. Instead, this robot vacuum and mop is perfect for early adopters who enjoy testing the newest cutting-edge technologies. As the first robot vacuum with a mechanical arm to be widely available on the market, you can expect to encounter bugs with the Saros Z70 -- it's only natural.Also: This robot vacuum might be better at cleaning than me - and I'm a neat freakEven so, I was thoroughly impressed with the robot's cleaning performance and the OmniGrip technology. I was also impressed with Roborock's fast and widespread launch of this robot after announcing it late last year. The Roborock Saros Z70 is the next level in robot vacuum technology, and it's pioneering the idea of a functional, multipurpose household robot that you can truly rely on.However, it is quite expensive. The Saros Z70 will vacuum and mop like the best robot vacuums on the market. But you must be aware that you're not paying for a robot vacuum alone; you're paying for the innovation of having a future-forward robot in your home.  When will this deal expire? Deals are subject to sell out or expire at any time, though ZDNET remains committed to finding, sharing, and updating the best product deals for you to score the best savings. Our team of experts regularly checks in on the deals we share to ensure they are still live and obtainable. We're sorry if you've missed out on this deal, but don't fret -- we're constantly finding new chances to score savings and sharing them with you at ZDNET.com.  Show more Featured reviews
    0 Комментарии ·0 Поделились ·0 предпросмотр
  • Is Science Slowing Down?

    Basic scientific research is a key contributor to economic productivity.getty
    Is science running out of steam? A growing body of research suggests that disruptive breakthroughs—the kind that fundamentally redefine entire fields—may be occurring less frequently. A 2023 article in Nature reported that scientific papers and patents are, on average, less “disruptive” than they were in the mid-20th century. The study sparked intense interest and considerable controversy, covered in a recent news feature provocatively titled “Are Groundbreaking Science Discoveries Becoming Harder To Find?”

    Before weighing in, however, it is worth interrogating a more fundamental question: What do we mean when we call science “disruptive”? And is that, in fact, the appropriate benchmark for progress?

    The study in question, led by entrepreneurship scholar Russell Funk, employs a citation-based metric known as the Consolidation–Disruptionindex. The tool attempts to quantify whether new research displaces prior work—a signal of disruption—or builds directly upon it, thereby reinforcing existing paradigms. It represents a noteworthy contribution to our understanding of scientific change. Their conclusion, that disruption has declined across disciplines even as the volume of scientific output has expanded, has ignited debate among scientists, scholars and policymakers.

    Innovation May Be Getting Harder—But Also Deeper
    At a structural level, science becomes more complex as it matures. In some sense it has to slow down. The simplest questions are often the first to be answered, and what remains are challenges that are more subtle, more interdependent, and more difficult to resolve. The law of diminishing marginal returns, long familiar in economics, finds a natural corollary in research: at some point the intellectual “low-hanging fruit” has largely been harvested.

    Yet this does not necessarily imply stagnation. In fact, science itself is evolving. I think that apparent declines in disruption reflect not an impoverishment of ideas, but a transformation in the conduct and culture of research itself. Citation practices have shifted. Publication incentives have changed. The sheer availability of data and digital resources has exploded. Comparing contemporary citation behavior to that of earlier decades is not simply apples to oranges; it’s more like comparing ecosystems separated by tectonic time.
    More profoundly, we might ask whether paradigm shifts—particularly those in the Kuhnian sense—are truly the milestones we should prize above all others. Much of the innovation that drives societal progress and economic productivity does not emerge from revolutions in thought, but from the subtle extension and application of existing knowledge. In fields as varied as biomedicine, agriculture, and climate science, incremental refinement has yielded results of transformative impact.Brighter green hybrid rice plantshelp increase yields at this Filipino farm.Getty Images

    Science Today Is More Sophisticated—And More Efficient
    Scientists are publishing more today than ever. Critics of contemporary science attribute this to metric-driven culture of “salami slicing,” in which ideas are fragmented into the “minimum publishable unit” so that scientists can accrue an ever-growing publication count to secure career viability in a publish-or-perish environment. But such critiques overlook the extraordinary gains in research efficiency that have occurred in the past few decades, which I think are a far more compelling explanation for the massive output of scientific research today.
    Since the 1980s, personal computing has transformed nearly every dimension of the scientific process. Manuscript preparation, once the province of typewriters and retyped drafts, has become seamless. Data acquisition now involves automated sensors and real-time monitoring. Analytical tools like Python and R allow researchers to conduct sophisticated modeling and statistics with unprecedented speed. Communication is instantaneous. Knowledge-sharing platforms and open-access journals have dismantled many of the old barriers to entry.Advances in microcomputer technology in the 1980s and 1990s dramatically accelerated scientific ... More research.Denver Post via Getty Images
    Indeed, one wonders whether critics have recently read a research paper from the 1930s or 1970s. The methodological rigor, analytical depth, and interdisciplinary scope of modern research are, by nearly any standard, vastly more advanced.
    The Horizon Has Expanded
    In biology alone, high-throughput technologies—part of the broader “omics” revolution catalyzed by innovations like the polymerase chain reaction, which enabled rapid DNA amplification and supported the eventual success of the Human Genome Project—continue to propel discovery at an astonishing pace.Nobel Prize laureate James D. Watson speaks at a press conference to announce that a six-country ... More consortium has successfully drawn up a complete map of the human genome, completing one of the most ambitious scientific projects ever and offering a major opportunity for medical advances, 14 April 2003 at the National Institute of Health in Bethesda, Maryland. The announcement coincides with the 50th anniversary of the publication of the landmark paper describing DNA's double helix by Watson and Francis Crick. AFP PHOTO / Robyn BECKAFP via Getty Images
    When critics lament the apparent decline of Nobel-caliber “blockbusters” they overlook that the frontier of science has expanded—not narrowed. If we consider scientific knowledge as a volume, then it is bounded by an outer edge where discovery occurs. In Euclidean geometry, as the radius of a sphere increases, the surface areagrows more slowly than the volume. While the volume of knowledge grows more rapidly—encompassing established theories and tools that continue to yield applications—the surface area also expands, and it is along this widening frontier, where the known meets the unknown, that innovation arises.
    Rethinking Returns on Investment
    The modern belief that science must deliver measurable economic returns is, historically speaking, a relatively recent development. Before the Second World War, scientific research was not broadly viewed as a driver of productivity. Economist Daniel Susskind has argued that even the concept of economic growth as a central policy goal is a mid-20th century invention.
    After the war, that changed dramatically. Governments began to see research as critical to national development, security, and public health. Yet even as expectations have grown, relative public investment in science has, paradoxically, diminished, despite the fact that basic scientific research is a massive accelerant of economic productivity and effectively self-financing. While absolute funding has increased, government spending on science as a share of GDP has declined in the US and many other countries. Given the scale and complexity of the challenges we now face, we may be underinvesting in the very enterprise that could deliver solutions. Recent proposals to cut funding for NIH and NSF could, by some estimates, cost the U.S. tens of billions in lost productivity.
    There is compelling evidence to suggest that significantly increasing R&D expenditures—doubling or even tripling them—would yield strong and sustained returns.
    AI and the Next Wave of Scientific Efficiency
    Looking to the future, artificial intelligence offers the potential to not only streamline research but also to augment the process of innovation itself. AI tools—from large language models like ChatGPT to specialized engines for data mining and synthesis—enable researchers to traverse disciplines, identify patterns, and generate new hypotheses with remarkable speed.
    The ability to navigate vast bodies of scientific literature—once reserved for those with access to elite research libraries and ample time for reading—has been radically democratized. Scientists today can access digitized repositories, annotate papers with precision tools, manage bibliographies with software, and instantly trace the intellectual lineage of ideas. AI-powered tools support researchers in sifting through and synthesizing material across disciplines, helping to identify patterns, highlight connections, and bring under-explored ideas into view. For researchers like myself—an ecologist who often draws inspiration from nonlinear dynamics, statistical physics, and cognitive psychology—these technologies function as accelerators of thought rather than substitutes for it. They support the process of discovering latent analogies and assembling novel constellations of insight, the kind of cognitive recombination that underlies true creativity. While deep understanding still demands sustained intellectual engagement—reading, interpretation, and critical analysis—these tools lower the barrier to discovery and expand the range of intellectual possibilities.
    By enhancing cross-disciplinary thinking and reducing the latency between idea and investigation, AI may well reignite the kind of scientific innovation that some believe is slipping from reach.
    Science as a Cultural Endeavor
    Finally, it bears emphasizing that the value of science is not solely, or even primarily, economic. Like the arts, literature, or philosophy, science is a cultural and intellectual enterprise. It is an expression of curiosity, a vehicle for collective self-understanding, and a means of situating ourselves within the universe.
    From my vantage point, and that of many colleagues, the current landscape of discovery feels more fertile than ever. The questions we pose are more ambitious, the tools at our disposal more refined, and the connections we are able to make more multidimensional.
    If the signal of disruption appears to be dimming, perhaps it is only because the spectrum of science has grown too broad for any single wavelength to dominate. Rather than lament an apparent slowdown, we might ask a more constructive question: Are we measuring the right things? And are we creating the conditions that allow the most vital forms of science—creative, integrative, and with the potential to transform human society for the better—to flourish?
    #science #slowing #down
    Is Science Slowing Down?
    Basic scientific research is a key contributor to economic productivity.getty Is science running out of steam? A growing body of research suggests that disruptive breakthroughs—the kind that fundamentally redefine entire fields—may be occurring less frequently. A 2023 article in Nature reported that scientific papers and patents are, on average, less “disruptive” than they were in the mid-20th century. The study sparked intense interest and considerable controversy, covered in a recent news feature provocatively titled “Are Groundbreaking Science Discoveries Becoming Harder To Find?” Before weighing in, however, it is worth interrogating a more fundamental question: What do we mean when we call science “disruptive”? And is that, in fact, the appropriate benchmark for progress? The study in question, led by entrepreneurship scholar Russell Funk, employs a citation-based metric known as the Consolidation–Disruptionindex. The tool attempts to quantify whether new research displaces prior work—a signal of disruption—or builds directly upon it, thereby reinforcing existing paradigms. It represents a noteworthy contribution to our understanding of scientific change. Their conclusion, that disruption has declined across disciplines even as the volume of scientific output has expanded, has ignited debate among scientists, scholars and policymakers. Innovation May Be Getting Harder—But Also Deeper At a structural level, science becomes more complex as it matures. In some sense it has to slow down. The simplest questions are often the first to be answered, and what remains are challenges that are more subtle, more interdependent, and more difficult to resolve. The law of diminishing marginal returns, long familiar in economics, finds a natural corollary in research: at some point the intellectual “low-hanging fruit” has largely been harvested. Yet this does not necessarily imply stagnation. In fact, science itself is evolving. I think that apparent declines in disruption reflect not an impoverishment of ideas, but a transformation in the conduct and culture of research itself. Citation practices have shifted. Publication incentives have changed. The sheer availability of data and digital resources has exploded. Comparing contemporary citation behavior to that of earlier decades is not simply apples to oranges; it’s more like comparing ecosystems separated by tectonic time. More profoundly, we might ask whether paradigm shifts—particularly those in the Kuhnian sense—are truly the milestones we should prize above all others. Much of the innovation that drives societal progress and economic productivity does not emerge from revolutions in thought, but from the subtle extension and application of existing knowledge. In fields as varied as biomedicine, agriculture, and climate science, incremental refinement has yielded results of transformative impact.Brighter green hybrid rice plantshelp increase yields at this Filipino farm.Getty Images Science Today Is More Sophisticated—And More Efficient Scientists are publishing more today than ever. Critics of contemporary science attribute this to metric-driven culture of “salami slicing,” in which ideas are fragmented into the “minimum publishable unit” so that scientists can accrue an ever-growing publication count to secure career viability in a publish-or-perish environment. But such critiques overlook the extraordinary gains in research efficiency that have occurred in the past few decades, which I think are a far more compelling explanation for the massive output of scientific research today. Since the 1980s, personal computing has transformed nearly every dimension of the scientific process. Manuscript preparation, once the province of typewriters and retyped drafts, has become seamless. Data acquisition now involves automated sensors and real-time monitoring. Analytical tools like Python and R allow researchers to conduct sophisticated modeling and statistics with unprecedented speed. Communication is instantaneous. Knowledge-sharing platforms and open-access journals have dismantled many of the old barriers to entry.Advances in microcomputer technology in the 1980s and 1990s dramatically accelerated scientific ... More research.Denver Post via Getty Images Indeed, one wonders whether critics have recently read a research paper from the 1930s or 1970s. The methodological rigor, analytical depth, and interdisciplinary scope of modern research are, by nearly any standard, vastly more advanced. The Horizon Has Expanded In biology alone, high-throughput technologies—part of the broader “omics” revolution catalyzed by innovations like the polymerase chain reaction, which enabled rapid DNA amplification and supported the eventual success of the Human Genome Project—continue to propel discovery at an astonishing pace.Nobel Prize laureate James D. Watson speaks at a press conference to announce that a six-country ... More consortium has successfully drawn up a complete map of the human genome, completing one of the most ambitious scientific projects ever and offering a major opportunity for medical advances, 14 April 2003 at the National Institute of Health in Bethesda, Maryland. The announcement coincides with the 50th anniversary of the publication of the landmark paper describing DNA's double helix by Watson and Francis Crick. AFP PHOTO / Robyn BECKAFP via Getty Images When critics lament the apparent decline of Nobel-caliber “blockbusters” they overlook that the frontier of science has expanded—not narrowed. If we consider scientific knowledge as a volume, then it is bounded by an outer edge where discovery occurs. In Euclidean geometry, as the radius of a sphere increases, the surface areagrows more slowly than the volume. While the volume of knowledge grows more rapidly—encompassing established theories and tools that continue to yield applications—the surface area also expands, and it is along this widening frontier, where the known meets the unknown, that innovation arises. Rethinking Returns on Investment The modern belief that science must deliver measurable economic returns is, historically speaking, a relatively recent development. Before the Second World War, scientific research was not broadly viewed as a driver of productivity. Economist Daniel Susskind has argued that even the concept of economic growth as a central policy goal is a mid-20th century invention. After the war, that changed dramatically. Governments began to see research as critical to national development, security, and public health. Yet even as expectations have grown, relative public investment in science has, paradoxically, diminished, despite the fact that basic scientific research is a massive accelerant of economic productivity and effectively self-financing. While absolute funding has increased, government spending on science as a share of GDP has declined in the US and many other countries. Given the scale and complexity of the challenges we now face, we may be underinvesting in the very enterprise that could deliver solutions. Recent proposals to cut funding for NIH and NSF could, by some estimates, cost the U.S. tens of billions in lost productivity. There is compelling evidence to suggest that significantly increasing R&D expenditures—doubling or even tripling them—would yield strong and sustained returns. AI and the Next Wave of Scientific Efficiency Looking to the future, artificial intelligence offers the potential to not only streamline research but also to augment the process of innovation itself. AI tools—from large language models like ChatGPT to specialized engines for data mining and synthesis—enable researchers to traverse disciplines, identify patterns, and generate new hypotheses with remarkable speed. The ability to navigate vast bodies of scientific literature—once reserved for those with access to elite research libraries and ample time for reading—has been radically democratized. Scientists today can access digitized repositories, annotate papers with precision tools, manage bibliographies with software, and instantly trace the intellectual lineage of ideas. AI-powered tools support researchers in sifting through and synthesizing material across disciplines, helping to identify patterns, highlight connections, and bring under-explored ideas into view. For researchers like myself—an ecologist who often draws inspiration from nonlinear dynamics, statistical physics, and cognitive psychology—these technologies function as accelerators of thought rather than substitutes for it. They support the process of discovering latent analogies and assembling novel constellations of insight, the kind of cognitive recombination that underlies true creativity. While deep understanding still demands sustained intellectual engagement—reading, interpretation, and critical analysis—these tools lower the barrier to discovery and expand the range of intellectual possibilities. By enhancing cross-disciplinary thinking and reducing the latency between idea and investigation, AI may well reignite the kind of scientific innovation that some believe is slipping from reach. Science as a Cultural Endeavor Finally, it bears emphasizing that the value of science is not solely, or even primarily, economic. Like the arts, literature, or philosophy, science is a cultural and intellectual enterprise. It is an expression of curiosity, a vehicle for collective self-understanding, and a means of situating ourselves within the universe. From my vantage point, and that of many colleagues, the current landscape of discovery feels more fertile than ever. The questions we pose are more ambitious, the tools at our disposal more refined, and the connections we are able to make more multidimensional. If the signal of disruption appears to be dimming, perhaps it is only because the spectrum of science has grown too broad for any single wavelength to dominate. Rather than lament an apparent slowdown, we might ask a more constructive question: Are we measuring the right things? And are we creating the conditions that allow the most vital forms of science—creative, integrative, and with the potential to transform human society for the better—to flourish? #science #slowing #down
    Is Science Slowing Down?
    www.forbes.com
    Basic scientific research is a key contributor to economic productivity.getty Is science running out of steam? A growing body of research suggests that disruptive breakthroughs—the kind that fundamentally redefine entire fields—may be occurring less frequently. A 2023 article in Nature reported that scientific papers and patents are, on average, less “disruptive” than they were in the mid-20th century. The study sparked intense interest and considerable controversy, covered in a recent news feature provocatively titled “Are Groundbreaking Science Discoveries Becoming Harder To Find?” Before weighing in, however, it is worth interrogating a more fundamental question: What do we mean when we call science “disruptive”? And is that, in fact, the appropriate benchmark for progress? The study in question, led by entrepreneurship scholar Russell Funk, employs a citation-based metric known as the Consolidation–Disruption (CD) index. The tool attempts to quantify whether new research displaces prior work—a signal of disruption—or builds directly upon it, thereby reinforcing existing paradigms. It represents a noteworthy contribution to our understanding of scientific change. Their conclusion, that disruption has declined across disciplines even as the volume of scientific output has expanded, has ignited debate among scientists, scholars and policymakers. Innovation May Be Getting Harder—But Also Deeper At a structural level, science becomes more complex as it matures. In some sense it has to slow down. The simplest questions are often the first to be answered, and what remains are challenges that are more subtle, more interdependent, and more difficult to resolve. The law of diminishing marginal returns, long familiar in economics, finds a natural corollary in research: at some point the intellectual “low-hanging fruit” has largely been harvested. Yet this does not necessarily imply stagnation. In fact, science itself is evolving. I think that apparent declines in disruption reflect not an impoverishment of ideas, but a transformation in the conduct and culture of research itself. Citation practices have shifted. Publication incentives have changed. The sheer availability of data and digital resources has exploded. Comparing contemporary citation behavior to that of earlier decades is not simply apples to oranges; it’s more like comparing ecosystems separated by tectonic time. More profoundly, we might ask whether paradigm shifts—particularly those in the Kuhnian sense—are truly the milestones we should prize above all others. Much of the innovation that drives societal progress and economic productivity does not emerge from revolutions in thought, but from the subtle extension and application of existing knowledge. In fields as varied as biomedicine, agriculture, and climate science, incremental refinement has yielded results of transformative impact.Brighter green hybrid rice plants (left) help increase yields at this Filipino farm. (Photo by ... More Dick Swanson/Getty Images)Getty Images Science Today Is More Sophisticated—And More Efficient Scientists are publishing more today than ever. Critics of contemporary science attribute this to metric-driven culture of “salami slicing,” in which ideas are fragmented into the “minimum publishable unit” so that scientists can accrue an ever-growing publication count to secure career viability in a publish-or-perish environment. But such critiques overlook the extraordinary gains in research efficiency that have occurred in the past few decades, which I think are a far more compelling explanation for the massive output of scientific research today. Since the 1980s, personal computing has transformed nearly every dimension of the scientific process. Manuscript preparation, once the province of typewriters and retyped drafts, has become seamless. Data acquisition now involves automated sensors and real-time monitoring. Analytical tools like Python and R allow researchers to conduct sophisticated modeling and statistics with unprecedented speed. Communication is instantaneous. Knowledge-sharing platforms and open-access journals have dismantled many of the old barriers to entry.Advances in microcomputer technology in the 1980s and 1990s dramatically accelerated scientific ... More research.Denver Post via Getty Images Indeed, one wonders whether critics have recently read a research paper from the 1930s or 1970s. The methodological rigor, analytical depth, and interdisciplinary scope of modern research are, by nearly any standard, vastly more advanced. The Horizon Has Expanded In biology alone, high-throughput technologies—part of the broader “omics” revolution catalyzed by innovations like the polymerase chain reaction (PCR), which enabled rapid DNA amplification and supported the eventual success of the Human Genome Project—continue to propel discovery at an astonishing pace.Nobel Prize laureate James D. Watson speaks at a press conference to announce that a six-country ... More consortium has successfully drawn up a complete map of the human genome, completing one of the most ambitious scientific projects ever and offering a major opportunity for medical advances, 14 April 2003 at the National Institute of Health in Bethesda, Maryland. The announcement coincides with the 50th anniversary of the publication of the landmark paper describing DNA's double helix by Watson and Francis Crick. AFP PHOTO / Robyn BECK (Photo credit should read ROBYN BECK/AFP via Getty Images)AFP via Getty Images When critics lament the apparent decline of Nobel-caliber “blockbusters” they overlook that the frontier of science has expanded—not narrowed. If we consider scientific knowledge as a volume, then it is bounded by an outer edge where discovery occurs. In Euclidean geometry, as the radius of a sphere increases, the surface area (scaling with the square of the radius) grows more slowly than the volume (which scales with the cube). While the volume of knowledge grows more rapidly—encompassing established theories and tools that continue to yield applications—the surface area also expands, and it is along this widening frontier, where the known meets the unknown, that innovation arises. Rethinking Returns on Investment The modern belief that science must deliver measurable economic returns is, historically speaking, a relatively recent development. Before the Second World War, scientific research was not broadly viewed as a driver of productivity. Economist Daniel Susskind has argued that even the concept of economic growth as a central policy goal is a mid-20th century invention. After the war, that changed dramatically. Governments began to see research as critical to national development, security, and public health. Yet even as expectations have grown, relative public investment in science has, paradoxically, diminished, despite the fact that basic scientific research is a massive accelerant of economic productivity and effectively self-financing. While absolute funding has increased, government spending on science as a share of GDP has declined in the US and many other countries. Given the scale and complexity of the challenges we now face, we may be underinvesting in the very enterprise that could deliver solutions. Recent proposals to cut funding for NIH and NSF could, by some estimates, cost the U.S. tens of billions in lost productivity. There is compelling evidence to suggest that significantly increasing R&D expenditures—doubling or even tripling them—would yield strong and sustained returns. AI and the Next Wave of Scientific Efficiency Looking to the future, artificial intelligence offers the potential to not only streamline research but also to augment the process of innovation itself. AI tools—from large language models like ChatGPT to specialized engines for data mining and synthesis—enable researchers to traverse disciplines, identify patterns, and generate new hypotheses with remarkable speed. The ability to navigate vast bodies of scientific literature—once reserved for those with access to elite research libraries and ample time for reading—has been radically democratized. Scientists today can access digitized repositories, annotate papers with precision tools, manage bibliographies with software, and instantly trace the intellectual lineage of ideas. AI-powered tools support researchers in sifting through and synthesizing material across disciplines, helping to identify patterns, highlight connections, and bring under-explored ideas into view. For researchers like myself—an ecologist who often draws inspiration from nonlinear dynamics, statistical physics, and cognitive psychology—these technologies function as accelerators of thought rather than substitutes for it. They support the process of discovering latent analogies and assembling novel constellations of insight, the kind of cognitive recombination that underlies true creativity. While deep understanding still demands sustained intellectual engagement—reading, interpretation, and critical analysis—these tools lower the barrier to discovery and expand the range of intellectual possibilities. By enhancing cross-disciplinary thinking and reducing the latency between idea and investigation, AI may well reignite the kind of scientific innovation that some believe is slipping from reach. Science as a Cultural Endeavor Finally, it bears emphasizing that the value of science is not solely, or even primarily, economic. Like the arts, literature, or philosophy, science is a cultural and intellectual enterprise. It is an expression of curiosity, a vehicle for collective self-understanding, and a means of situating ourselves within the universe. From my vantage point, and that of many colleagues, the current landscape of discovery feels more fertile than ever. The questions we pose are more ambitious, the tools at our disposal more refined, and the connections we are able to make more multidimensional. If the signal of disruption appears to be dimming, perhaps it is only because the spectrum of science has grown too broad for any single wavelength to dominate. Rather than lament an apparent slowdown, we might ask a more constructive question: Are we measuring the right things? And are we creating the conditions that allow the most vital forms of science—creative, integrative, and with the potential to transform human society for the better—to flourish?
    0 Комментарии ·0 Поделились ·0 предпросмотр
  • What to Know About the Kids Online Safety Act and Where It Currently Stands

    Congress could potentially pass the first major legislation related to children’s online safety since 1998, as the Kids Online Safety Act, sometimes referred to as KOSA, was reintroduced earlier this month after stalling last year.The bill has proven to be a major talking point, garnering bipartisan support and the attention of tech giants, but it has also sparked concern re: targeted censorship from First Amendment rights groups and others advocating for LGBTQ+ communities.Now, it will have another shot, and the bill’s Congressional supporters will have a chance to state why they believe the legislation is needed in this ever-evolving digital age.The revival of the Kids Online Safety Act comes amid U.S. and global discussions over how to best protect children online. In late 2024, Australia approved a social media ban for under-16s. It’s set to come into effect later this year. In March, Utah became the first state to pass legislation requiring app stores to verify a user's age. And Texas is currently moving forward with efforts regarding an expansive social media ban for minors. The Kids Off Social Media Act—which would ban social media platforms from allowing children under 13 to create or maintain accounts—was also introduced earlier this year, but has seen little movement since.In an interview that aired on NBC’s Meet the Press on Sunday, May 25, during a special mental health-focused episode, former Rep. Patrick J. Kennedy, a Democrat who served Rhode Island, expressed a dire need for more protections surrounding children online.When asked about the Kids Online Safety Act, and if it’s the type of legislation America needs, Kennedy said: “Our country is falling down on its own responsibility as stewards to our children's future.” He went on to explain why he believes passing bills is just one factor of what needs to be addressed, citing online sports betting as another major concern.“We can't just pass these bills. We've got to stop all of these intrusive addiction-for-profit companies from taking our kids hostage. That's what they're doing. This is a fight,” he said. “And we are losing the fight because we're not out there fighting for our kids to protect them from these businesseswhole profit motive is, ‘How am I going to capture that consumer and lock them in as a consumer?’”Calling out giant social media platforms, in particular, Kennedy went on to say: “We, as a country, have seen these companies and industries take advantage of the addiction-for-profit. Purdue, tobacco. Social media's the next big one. And unfortunately, it's going to have to be litigated. We have to go after the devastating impact that these companies are having on our kids.”Amid these ongoing discussions, here’s what you need to know about the Kids Online Safety Act in light of its reintroduction.What is the Kids Online Safety Act?The Kids Online Safety Act aims to provide further protections for children online related to privacy and mental health concerns exacerbated by social media and excessive Internet use.The bill would create “duty of care,” meaning that tech companies and platform giants would be required to take steps to prevent potentially harmful encounters, such as posts about eating disorders and instances of online bullying, from impacting minors.“A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors... patterns of use that indicate or encourage addiction-like behaviors by minors…” the bill reads.Health organizations including The American Academy of Pediatrics and the American Psychological Association, have pushed Congress to pass KOSA to better protect young people online—and see the bill as a potential way to intervene with the detrimental impact social media and Internet usage in general can have on one’s mental health. Newer versions of the bill have narrowed regulations to apply to limiting “design features” such as notifications, “infinite scrolling or autoplay,” and in-game purchases.It would also allow for more parental tools to manage the privacy settings of a minor, and ideally enable a parent to limit the ability for adults to communicate with their children via online platforms.What is the history of the bill? In 2024, KOSA seemingly had all the right ingredients to pass into law. It had bipartisan support, passed the Senate, and could have been put in front of President Joe Biden, who had indicated he would sign the bill.“There is undeniable evidence that social media and other online platforms contribute to our youth mental health crisis,” President Biden wrote in a statement on July 30, 2024, after KOSA passed the Senate. “Today our children are subjected to a wild west online and our current laws and regulations are insufficient to prevent this. It is past time to act.”Yet, the bill was stalled. House Speaker Mike Johnson cautioned Republicans against rushing to pass the bill.“We’ve got to get it right,” Johnson said in December. “Look, I’m a lifelong advocate of protection of children…and online safety is critically important…but we also have to make sure that we don't open the door for violations of free speech.”The bill received support across both aisles, and has now been endorsed by some of the “Big Tech giants” it aims to regulate, including Elon Musk and X, Microsoft, and Apple.“Apple is pleased to offer our support for the Kids Online Safety Act. Everyone has a part to play in keeping kids safe online, and we believelegislation will have a meaningful impact on children’s online safety,” Timothy Powderly, Apple’s senior director of government affairs, said in a statement earlier in May after the bill was reintroduced.But other tech giants, including Facebook and Instagram’s parent Meta, opposed the bill last year. Politico reported that 14 lobbyists employed directly by Meta, as well as outside firms, worked the issue.The bill was reintroduced on May 14 by Republican Sen. Marsha Blackburn and Democratic Sen. Richard Blumenthal, who were joined by Senate Majority Leader John Thune and Senate Minority Leader Chuck Schumer.“Senator Blackburn and I made a promise to parents and young people when we started fighting together for the Kids Online Safety Act—we will make this bill law. There’s undeniable awareness of the destructive harms caused by Big Tech’s exploitative, addictive algorithms, and inescapable momentum for reform,” said Blumenthal in a statement announcing the bill’s reintroduction. “I am grateful to Senators Thune and Schumer for their leadership and to our Senate colleagues for their overwhelming bipartisan support. KOSA is an idea whose time has come—in fact, it’s urgently overdue—and even tech companies like X and Apple are realizing that the status quo is unsustainable.What is the controversy around KOSA?Since KOSA’s first introduction, it’s been the site of controversy over free speech and censorship concerns. In 2024, the American Civil Liberties Uniondiscouraged the passage of KOSA at the Senate level, arguing that the bill violated First Amendment-protected speech.“KOSA compounds nationwide attacks on young peoples’ right to learn and access information, on and offline,” said Jenna Leventoff, senior policy counsel at the ACLU. “As state legislatures and school boards across the country impose book bans and classroom censorship laws, the last thing students and parents need is another act of government censorship deciding which educational resources are appropriate for their families. The House must block this dangerous bill before it’s too late.”Some LGBTQ+ rights groups also opposed KOSA in 2024—arguing that the broadly worded bill could empower state attorneys general to determine what kind of content harms kids. One of the bill’s co-sponsors, Blackburn, has previously said that one of the top issues conservatives need to be aware of is “protecting minor children from the transgender in this culture and that influence.” Calling out social media, Blackburn said “this is where children are being indoctrinated.”Other organizations including Center for Democracy & Technology, New America’s Open Technology Institute, and Fight for the Future joined the ACLU in writing a letter to the House Energy and Commerce Committee in 2024, arguing that the bill would not—as intended—protect children, but instead threaten young people’s privacy and lead to censorship.In response to these concerns, the newly-introduced version of the bill has been negotiated with “several changes to further make clear that KOSA would not censor, limit, or remove any content from the internet, and it does not give the FTCor state Attorneys General the power to bring lawsuits over content or speech,” Blumenthal’s statement on the bill reads.Where do things currently stand?Now, KOSA is back where it started—sitting in Congress waiting for support.With its new changes, lawmakers argue that they have heard the concerns of opposing advocates. KOSA still needs support and passage from Congress—and signing from President Donald Trump—in order to pass into law.Trump’s son, Donald Trump Jr., has previously voiced strong support of the bill. “We can protect free speech and our kids at the same time from Big Tech. It's time for House Republicans to pass the Kids Online Safety Act ASAP,” Trump Jr. said on X on Dec. 8, 2024.
    #what #know #about #kids #online
    What to Know About the Kids Online Safety Act and Where It Currently Stands
    Congress could potentially pass the first major legislation related to children’s online safety since 1998, as the Kids Online Safety Act, sometimes referred to as KOSA, was reintroduced earlier this month after stalling last year.The bill has proven to be a major talking point, garnering bipartisan support and the attention of tech giants, but it has also sparked concern re: targeted censorship from First Amendment rights groups and others advocating for LGBTQ+ communities.Now, it will have another shot, and the bill’s Congressional supporters will have a chance to state why they believe the legislation is needed in this ever-evolving digital age.The revival of the Kids Online Safety Act comes amid U.S. and global discussions over how to best protect children online. In late 2024, Australia approved a social media ban for under-16s. It’s set to come into effect later this year. In March, Utah became the first state to pass legislation requiring app stores to verify a user's age. And Texas is currently moving forward with efforts regarding an expansive social media ban for minors. The Kids Off Social Media Act—which would ban social media platforms from allowing children under 13 to create or maintain accounts—was also introduced earlier this year, but has seen little movement since.In an interview that aired on NBC’s Meet the Press on Sunday, May 25, during a special mental health-focused episode, former Rep. Patrick J. Kennedy, a Democrat who served Rhode Island, expressed a dire need for more protections surrounding children online.When asked about the Kids Online Safety Act, and if it’s the type of legislation America needs, Kennedy said: “Our country is falling down on its own responsibility as stewards to our children's future.” He went on to explain why he believes passing bills is just one factor of what needs to be addressed, citing online sports betting as another major concern.“We can't just pass these bills. We've got to stop all of these intrusive addiction-for-profit companies from taking our kids hostage. That's what they're doing. This is a fight,” he said. “And we are losing the fight because we're not out there fighting for our kids to protect them from these businesseswhole profit motive is, ‘How am I going to capture that consumer and lock them in as a consumer?’”Calling out giant social media platforms, in particular, Kennedy went on to say: “We, as a country, have seen these companies and industries take advantage of the addiction-for-profit. Purdue, tobacco. Social media's the next big one. And unfortunately, it's going to have to be litigated. We have to go after the devastating impact that these companies are having on our kids.”Amid these ongoing discussions, here’s what you need to know about the Kids Online Safety Act in light of its reintroduction.What is the Kids Online Safety Act?The Kids Online Safety Act aims to provide further protections for children online related to privacy and mental health concerns exacerbated by social media and excessive Internet use.The bill would create “duty of care,” meaning that tech companies and platform giants would be required to take steps to prevent potentially harmful encounters, such as posts about eating disorders and instances of online bullying, from impacting minors.“A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors... patterns of use that indicate or encourage addiction-like behaviors by minors…” the bill reads.Health organizations including The American Academy of Pediatrics and the American Psychological Association, have pushed Congress to pass KOSA to better protect young people online—and see the bill as a potential way to intervene with the detrimental impact social media and Internet usage in general can have on one’s mental health. Newer versions of the bill have narrowed regulations to apply to limiting “design features” such as notifications, “infinite scrolling or autoplay,” and in-game purchases.It would also allow for more parental tools to manage the privacy settings of a minor, and ideally enable a parent to limit the ability for adults to communicate with their children via online platforms.What is the history of the bill? In 2024, KOSA seemingly had all the right ingredients to pass into law. It had bipartisan support, passed the Senate, and could have been put in front of President Joe Biden, who had indicated he would sign the bill.“There is undeniable evidence that social media and other online platforms contribute to our youth mental health crisis,” President Biden wrote in a statement on July 30, 2024, after KOSA passed the Senate. “Today our children are subjected to a wild west online and our current laws and regulations are insufficient to prevent this. It is past time to act.”Yet, the bill was stalled. House Speaker Mike Johnson cautioned Republicans against rushing to pass the bill.“We’ve got to get it right,” Johnson said in December. “Look, I’m a lifelong advocate of protection of children…and online safety is critically important…but we also have to make sure that we don't open the door for violations of free speech.”The bill received support across both aisles, and has now been endorsed by some of the “Big Tech giants” it aims to regulate, including Elon Musk and X, Microsoft, and Apple.“Apple is pleased to offer our support for the Kids Online Safety Act. Everyone has a part to play in keeping kids safe online, and we believelegislation will have a meaningful impact on children’s online safety,” Timothy Powderly, Apple’s senior director of government affairs, said in a statement earlier in May after the bill was reintroduced.But other tech giants, including Facebook and Instagram’s parent Meta, opposed the bill last year. Politico reported that 14 lobbyists employed directly by Meta, as well as outside firms, worked the issue.The bill was reintroduced on May 14 by Republican Sen. Marsha Blackburn and Democratic Sen. Richard Blumenthal, who were joined by Senate Majority Leader John Thune and Senate Minority Leader Chuck Schumer.“Senator Blackburn and I made a promise to parents and young people when we started fighting together for the Kids Online Safety Act—we will make this bill law. There’s undeniable awareness of the destructive harms caused by Big Tech’s exploitative, addictive algorithms, and inescapable momentum for reform,” said Blumenthal in a statement announcing the bill’s reintroduction. “I am grateful to Senators Thune and Schumer for their leadership and to our Senate colleagues for their overwhelming bipartisan support. KOSA is an idea whose time has come—in fact, it’s urgently overdue—and even tech companies like X and Apple are realizing that the status quo is unsustainable.What is the controversy around KOSA?Since KOSA’s first introduction, it’s been the site of controversy over free speech and censorship concerns. In 2024, the American Civil Liberties Uniondiscouraged the passage of KOSA at the Senate level, arguing that the bill violated First Amendment-protected speech.“KOSA compounds nationwide attacks on young peoples’ right to learn and access information, on and offline,” said Jenna Leventoff, senior policy counsel at the ACLU. “As state legislatures and school boards across the country impose book bans and classroom censorship laws, the last thing students and parents need is another act of government censorship deciding which educational resources are appropriate for their families. The House must block this dangerous bill before it’s too late.”Some LGBTQ+ rights groups also opposed KOSA in 2024—arguing that the broadly worded bill could empower state attorneys general to determine what kind of content harms kids. One of the bill’s co-sponsors, Blackburn, has previously said that one of the top issues conservatives need to be aware of is “protecting minor children from the transgender in this culture and that influence.” Calling out social media, Blackburn said “this is where children are being indoctrinated.”Other organizations including Center for Democracy & Technology, New America’s Open Technology Institute, and Fight for the Future joined the ACLU in writing a letter to the House Energy and Commerce Committee in 2024, arguing that the bill would not—as intended—protect children, but instead threaten young people’s privacy and lead to censorship.In response to these concerns, the newly-introduced version of the bill has been negotiated with “several changes to further make clear that KOSA would not censor, limit, or remove any content from the internet, and it does not give the FTCor state Attorneys General the power to bring lawsuits over content or speech,” Blumenthal’s statement on the bill reads.Where do things currently stand?Now, KOSA is back where it started—sitting in Congress waiting for support.With its new changes, lawmakers argue that they have heard the concerns of opposing advocates. KOSA still needs support and passage from Congress—and signing from President Donald Trump—in order to pass into law.Trump’s son, Donald Trump Jr., has previously voiced strong support of the bill. “We can protect free speech and our kids at the same time from Big Tech. It's time for House Republicans to pass the Kids Online Safety Act ASAP,” Trump Jr. said on X on Dec. 8, 2024. #what #know #about #kids #online
    What to Know About the Kids Online Safety Act and Where It Currently Stands
    time.com
    Congress could potentially pass the first major legislation related to children’s online safety since 1998, as the Kids Online Safety Act, sometimes referred to as KOSA, was reintroduced earlier this month after stalling last year.The bill has proven to be a major talking point, garnering bipartisan support and the attention of tech giants, but it has also sparked concern re: targeted censorship from First Amendment rights groups and others advocating for LGBTQ+ communities.Now, it will have another shot, and the bill’s Congressional supporters will have a chance to state why they believe the legislation is needed in this ever-evolving digital age.The revival of the Kids Online Safety Act comes amid U.S. and global discussions over how to best protect children online. In late 2024, Australia approved a social media ban for under-16s. It’s set to come into effect later this year. In March, Utah became the first state to pass legislation requiring app stores to verify a user's age. And Texas is currently moving forward with efforts regarding an expansive social media ban for minors. The Kids Off Social Media Act (KOSMA)—which would ban social media platforms from allowing children under 13 to create or maintain accounts—was also introduced earlier this year, but has seen little movement since.In an interview that aired on NBC’s Meet the Press on Sunday, May 25, during a special mental health-focused episode, former Rep. Patrick J. Kennedy, a Democrat who served Rhode Island, expressed a dire need for more protections surrounding children online.When asked about the Kids Online Safety Act, and if it’s the type of legislation America needs, Kennedy said: “Our country is falling down on its own responsibility as stewards to our children's future.” He went on to explain why he believes passing bills is just one factor of what needs to be addressed, citing online sports betting as another major concern.“We can't just pass these bills. We've got to stop all of these intrusive addiction-for-profit companies from taking our kids hostage. That's what they're doing. This is a fight,” he said. “And we are losing the fight because we're not out there fighting for our kids to protect them from these businesses [whose] whole profit motive is, ‘How am I going to capture that consumer and lock them in as a consumer?’”Calling out giant social media platforms, in particular, Kennedy went on to say: “We, as a country, have seen these companies and industries take advantage of the addiction-for-profit. Purdue, tobacco. Social media's the next big one. And unfortunately, it's going to have to be litigated. We have to go after the devastating impact that these companies are having on our kids.”Amid these ongoing discussions, here’s what you need to know about the Kids Online Safety Act in light of its reintroduction.What is the Kids Online Safety Act?The Kids Online Safety Act aims to provide further protections for children online related to privacy and mental health concerns exacerbated by social media and excessive Internet use.The bill would create “duty of care,” meaning that tech companies and platform giants would be required to take steps to prevent potentially harmful encounters, such as posts about eating disorders and instances of online bullying, from impacting minors.“A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors... patterns of use that indicate or encourage addiction-like behaviors by minors…” the bill reads.Health organizations including The American Academy of Pediatrics and the American Psychological Association, have pushed Congress to pass KOSA to better protect young people online—and see the bill as a potential way to intervene with the detrimental impact social media and Internet usage in general can have on one’s mental health. Newer versions of the bill have narrowed regulations to apply to limiting “design features” such as notifications, “infinite scrolling or autoplay,” and in-game purchases.It would also allow for more parental tools to manage the privacy settings of a minor, and ideally enable a parent to limit the ability for adults to communicate with their children via online platforms.What is the history of the bill? In 2024, KOSA seemingly had all the right ingredients to pass into law. It had bipartisan support, passed the Senate, and could have been put in front of President Joe Biden, who had indicated he would sign the bill.“There is undeniable evidence that social media and other online platforms contribute to our youth mental health crisis,” President Biden wrote in a statement on July 30, 2024, after KOSA passed the Senate. “Today our children are subjected to a wild west online and our current laws and regulations are insufficient to prevent this. It is past time to act.”Yet, the bill was stalled. House Speaker Mike Johnson cautioned Republicans against rushing to pass the bill.“We’ve got to get it right,” Johnson said in December. “Look, I’m a lifelong advocate of protection of children…and online safety is critically important…but we also have to make sure that we don't open the door for violations of free speech.”The bill received support across both aisles, and has now been endorsed by some of the “Big Tech giants” it aims to regulate, including Elon Musk and X, Microsoft, and Apple.“Apple is pleased to offer our support for the Kids Online Safety Act (KOSA). Everyone has a part to play in keeping kids safe online, and we believe [this] legislation will have a meaningful impact on children’s online safety,” Timothy Powderly, Apple’s senior director of government affairs, said in a statement earlier in May after the bill was reintroduced.But other tech giants, including Facebook and Instagram’s parent Meta, opposed the bill last year. Politico reported that 14 lobbyists employed directly by Meta, as well as outside firms, worked the issue.The bill was reintroduced on May 14 by Republican Sen. Marsha Blackburn and Democratic Sen. Richard Blumenthal, who were joined by Senate Majority Leader John Thune and Senate Minority Leader Chuck Schumer.“Senator Blackburn and I made a promise to parents and young people when we started fighting together for the Kids Online Safety Act—we will make this bill law. There’s undeniable awareness of the destructive harms caused by Big Tech’s exploitative, addictive algorithms, and inescapable momentum for reform,” said Blumenthal in a statement announcing the bill’s reintroduction. “I am grateful to Senators Thune and Schumer for their leadership and to our Senate colleagues for their overwhelming bipartisan support. KOSA is an idea whose time has come—in fact, it’s urgently overdue—and even tech companies like X and Apple are realizing that the status quo is unsustainable.What is the controversy around KOSA?Since KOSA’s first introduction, it’s been the site of controversy over free speech and censorship concerns. In 2024, the American Civil Liberties Union (ACLU) discouraged the passage of KOSA at the Senate level, arguing that the bill violated First Amendment-protected speech.“KOSA compounds nationwide attacks on young peoples’ right to learn and access information, on and offline,” said Jenna Leventoff, senior policy counsel at the ACLU. “As state legislatures and school boards across the country impose book bans and classroom censorship laws, the last thing students and parents need is another act of government censorship deciding which educational resources are appropriate for their families. The House must block this dangerous bill before it’s too late.”Some LGBTQ+ rights groups also opposed KOSA in 2024—arguing that the broadly worded bill could empower state attorneys general to determine what kind of content harms kids. One of the bill’s co-sponsors, Blackburn, has previously said that one of the top issues conservatives need to be aware of is “protecting minor children from the transgender in this culture and that influence.” Calling out social media, Blackburn said “this is where children are being indoctrinated.”Other organizations including Center for Democracy & Technology, New America’s Open Technology Institute, and Fight for the Future joined the ACLU in writing a letter to the House Energy and Commerce Committee in 2024, arguing that the bill would not—as intended—protect children, but instead threaten young people’s privacy and lead to censorship.In response to these concerns, the newly-introduced version of the bill has been negotiated with “several changes to further make clear that KOSA would not censor, limit, or remove any content from the internet, and it does not give the FTC [Federal Trade Commission] or state Attorneys General the power to bring lawsuits over content or speech,” Blumenthal’s statement on the bill reads.Where do things currently stand?Now, KOSA is back where it started—sitting in Congress waiting for support.With its new changes, lawmakers argue that they have heard the concerns of opposing advocates. KOSA still needs support and passage from Congress—and signing from President Donald Trump—in order to pass into law.Trump’s son, Donald Trump Jr., has previously voiced strong support of the bill. “We can protect free speech and our kids at the same time from Big Tech. It's time for House Republicans to pass the Kids Online Safety Act ASAP,” Trump Jr. said on X on Dec. 8, 2024.
    0 Комментарии ·0 Поделились ·0 предпросмотр
  • Duolingo CEO backtracks on AI push after outcry, says human workers still needed

    What just happened? Another company has learned that going all-in on AI at the expense of human workers might save money, but the backlash from users can outweigh the financial benefits. Language-learning app Duolingo, whose CEO recently said AI would replace contract workers, has reversed course, stating that the company would "continue to hire" humans and support employees.
    At the end of April, Duolingo CEO Luis von Ahn announced plans for the firm to become yet another "AI-first" company, meaning more of the technology being integrated into the platform and the eventual elimination of contract workers.
    Von Ahn said Duolingo would "gradually stop using contractors to do work AI can handle." He added that proficiency with the technology would become part of workers' annual reviews, and new employees would only be hired "if a team cannot automate more of their work."
    The CEO doubled down on his AI praise on the No Priors podcast a week later. He said AI would transform schools as we know them, replacing teachers who would move from instructing students to supervising them as the AI took over traditional teaching duties.
    "I also don't think schools are going to go away because you still need childcare," he added.
    Last month wasn't the first time von Ahn had shown a willingness to replace humans with AI. In January 2024, 10% of Duolingo's contract workers were laid off due to the technology.
    // Related Stories

    Unsurprisingly, von Ahn's push hasn't been well received by both Duolingo's users and the majority of the public. The company tried to address the controversy in an Instagram post that manages to hugely miss the mark. The most liked comment reads, "Call us old-fashioned, but we prefer our lessons to be taught by humans."

     

     
     

     

    View this post on Instagram

     

     
     
     

     
     

     
     
     

     
     

    A post shared by DuolingoIt appears that von Ahn has decided that the bad publicity isn't worth the headache. In a LinkedIn post providing "more context to my vision," von Ahn wrote, "To be clear: I do not see AI as replacing what our employees do. I see it as a tool to accelerate what we do, at the same or better level of quality."
    After companies starting falling over each other in their rush to praise generative AI and use it to replace workers, some are now curbing their enthusiasm.
    Buy now, pay later app Klarna, another firm that went all-in on AI and has let go of thousands of employees as a result, is now hiring humans again after CEO Sebastian Siemiatkowski admitted that its customer service AI chatbots offered a "lower quality" than their fleshy equivalents. Moreover, many people refuse to use a company's services if they are forced to talk to a machine rather than a real person.
    Not every company is backing away from the AI-first pledge. Shopify's CEO told managers last month they must prove an AI can't do the job better than a human before hiring new workers.
    #duolingo #ceo #backtracks #push #after
    Duolingo CEO backtracks on AI push after outcry, says human workers still needed
    What just happened? Another company has learned that going all-in on AI at the expense of human workers might save money, but the backlash from users can outweigh the financial benefits. Language-learning app Duolingo, whose CEO recently said AI would replace contract workers, has reversed course, stating that the company would "continue to hire" humans and support employees. At the end of April, Duolingo CEO Luis von Ahn announced plans for the firm to become yet another "AI-first" company, meaning more of the technology being integrated into the platform and the eventual elimination of contract workers. Von Ahn said Duolingo would "gradually stop using contractors to do work AI can handle." He added that proficiency with the technology would become part of workers' annual reviews, and new employees would only be hired "if a team cannot automate more of their work." The CEO doubled down on his AI praise on the No Priors podcast a week later. He said AI would transform schools as we know them, replacing teachers who would move from instructing students to supervising them as the AI took over traditional teaching duties. "I also don't think schools are going to go away because you still need childcare," he added. Last month wasn't the first time von Ahn had shown a willingness to replace humans with AI. In January 2024, 10% of Duolingo's contract workers were laid off due to the technology. // Related Stories Unsurprisingly, von Ahn's push hasn't been well received by both Duolingo's users and the majority of the public. The company tried to address the controversy in an Instagram post that manages to hugely miss the mark. The most liked comment reads, "Call us old-fashioned, but we prefer our lessons to be taught by humans."         View this post on Instagram                       A post shared by DuolingoIt appears that von Ahn has decided that the bad publicity isn't worth the headache. In a LinkedIn post providing "more context to my vision," von Ahn wrote, "To be clear: I do not see AI as replacing what our employees do. I see it as a tool to accelerate what we do, at the same or better level of quality." After companies starting falling over each other in their rush to praise generative AI and use it to replace workers, some are now curbing their enthusiasm. Buy now, pay later app Klarna, another firm that went all-in on AI and has let go of thousands of employees as a result, is now hiring humans again after CEO Sebastian Siemiatkowski admitted that its customer service AI chatbots offered a "lower quality" than their fleshy equivalents. Moreover, many people refuse to use a company's services if they are forced to talk to a machine rather than a real person. Not every company is backing away from the AI-first pledge. Shopify's CEO told managers last month they must prove an AI can't do the job better than a human before hiring new workers. #duolingo #ceo #backtracks #push #after
    Duolingo CEO backtracks on AI push after outcry, says human workers still needed
    www.techspot.com
    What just happened? Another company has learned that going all-in on AI at the expense of human workers might save money, but the backlash from users can outweigh the financial benefits. Language-learning app Duolingo, whose CEO recently said AI would replace contract workers, has reversed course, stating that the company would "continue to hire" humans and support employees. At the end of April, Duolingo CEO Luis von Ahn announced plans for the firm to become yet another "AI-first" company, meaning more of the technology being integrated into the platform and the eventual elimination of contract workers. Von Ahn said Duolingo would "gradually stop using contractors to do work AI can handle." He added that proficiency with the technology would become part of workers' annual reviews, and new employees would only be hired "if a team cannot automate more of their work." The CEO doubled down on his AI praise on the No Priors podcast a week later. He said AI would transform schools as we know them, replacing teachers who would move from instructing students to supervising them as the AI took over traditional teaching duties. "I also don't think schools are going to go away because you still need childcare," he added. Last month wasn't the first time von Ahn had shown a willingness to replace humans with AI. In January 2024, 10% of Duolingo's contract workers were laid off due to the technology. // Related Stories Unsurprisingly, von Ahn's push hasn't been well received by both Duolingo's users and the majority of the public. The company tried to address the controversy in an Instagram post that manages to hugely miss the mark. The most liked comment reads, "Call us old-fashioned, but we prefer our lessons to be taught by humans."         View this post on Instagram                       A post shared by Duolingo (@duolingo) It appears that von Ahn has decided that the bad publicity isn't worth the headache. In a LinkedIn post providing "more context to my vision," von Ahn wrote, "To be clear: I do not see AI as replacing what our employees do (we are in fact continuing to hire at the same speed as before). I see it as a tool to accelerate what we do, at the same or better level of quality." After companies starting falling over each other in their rush to praise generative AI and use it to replace workers, some are now curbing their enthusiasm. Buy now, pay later app Klarna, another firm that went all-in on AI and has let go of thousands of employees as a result, is now hiring humans again after CEO Sebastian Siemiatkowski admitted that its customer service AI chatbots offered a "lower quality" than their fleshy equivalents. Moreover, many people refuse to use a company's services if they are forced to talk to a machine rather than a real person. Not every company is backing away from the AI-first pledge. Shopify's CEO told managers last month they must prove an AI can't do the job better than a human before hiring new workers.
    0 Комментарии ·0 Поделились ·0 предпросмотр
CGShares https://cgshares.com