• Manus has kick-started an AI agent boom in China

    Last year, China saw a boom in foundation models, the do-everything large language models that underpin the AI revolution. This year, the focus has shifted to AI agents—systems that are less about responding to users’ queries and more about autonomously accomplishing things for them. 

    There are now a host of Chinese startups building these general-purpose digital tools, which can answer emails, browse the internet to plan vacations, and even design an interactive website. Many of these have emerged in just the last two months, following in the footsteps of Manus—a general AI agent that sparked weeks of social media frenzy for invite codes after its limited-release launch in early March. 

    These emerging AI agents aren’t large language models themselves. Instead, they’re built on top of them, using a workflow-based structure designed to get things done. A lot of these systems also introduce a different way of interacting with AI. Rather than just chatting back and forth with users, they are optimized for managing and executing multistep tasks—booking flights, managing schedules, conducting research—by using external tools and remembering instructions. 
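
    To make the "built on top of an LLM" idea concrete, here is a minimal sketch of that workflow pattern in Python: the model is asked what to do next, requested tools are executed, and the results are fed back in as memory for the following step. It illustrates the general pattern only, not the architecture of Manus or any other product mentioned here; call_llm and search_web are hypothetical placeholders.

        import json

        def call_llm(messages):
            """Hypothetical placeholder for any chat-completion API call."""
            raise NotImplementedError

        def search_web(query: str) -> str:
            """Hypothetical external tool the agent is allowed to use."""
            return f"(search results for: {query})"

        TOOLS = {"search_web": search_web}

        def run_agent(task: str, max_steps: int = 10) -> str:
            # The running conversation doubles as the agent's memory of
            # instructions and earlier tool results.
            messages = [
                {"role": "system", "content": (
                    "Reply with JSON only. To use a tool: "
                    '{"tool": "search_web", "args": {"query": "..."}}. '
                    'To finish: {"final": "..."}.')},
                {"role": "user", "content": task},
            ]
            for _ in range(max_steps):
                reply = call_llm(messages)
                messages.append({"role": "assistant", "content": reply})
                decision = json.loads(reply)
                if "final" in decision:          # the model decides it is done
                    return decision["final"]
                tool = TOOLS[decision["tool"]]   # otherwise run the requested tool
                result = tool(**decision["args"])
                messages.append({"role": "user", "content": f"Tool result: {result}"})
            return "Stopped after reaching the step limit."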

    China could take the lead on building these kinds of agents. The country’s tightly integrated app ecosystems, rapid product cycles, and digitally fluent user base could provide a favorable environment for embedding AI into daily life. 

    For now, its leading AI agent startups are focusing their attention on the global market, because the best Western models don’t operate inside China’s firewalls. But that could change soon: Tech giants like ByteDance and Tencent are preparing their own AI agents that could bake automation directly into their native super-apps, pulling data from their vast ecosystem of programs that dominate many aspects of daily life in the country. 

    As the race to define what a useful AI agent looks like unfolds, a mix of ambitious startups and entrenched tech giants are now testing how these tools might actually work in practice—and for whom.

    Set the standard

    It’s been a whirlwind few months for Manus, which was developed by the Wuhan-based startup Butterfly Effect. The company raised $75 million in a funding round led by the US venture capital firm Benchmark, took the product on an ambitious global roadshow, and hired dozens of new employees.

    Even before registration opened to the public in May, Manus had become a reference point for what a broad, consumer‑oriented AI agent should accomplish. Rather than handling narrow chores for businesses, this “general” agent is designed to be able to help with everyday tasks like trip planning, stock comparison, or your kid’s school project. 

    Unlike previous AI agents, Manus uses a browser-based sandbox that lets users supervise the agent like an intern, watching in real time as it scrolls through web pages, reads articles, or codes actions. It also proactively asks clarifying questions and supports long-term memory that serves as context for future tasks.

    “Manus represents a promising product experience for AI agents,” says Ang Li, cofounder and CEO of Simular, a startup based in Palo Alto, California, that’s building computer use agents, AI agents that control a virtual computer. “I believe Chinese startups have a huge advantage when it comes to designing consumer products, thanks to cutthroat domestic competition that leads to fast execution and greater attention to product details.”

    In the case of Manus, the competition is moving fast. Two of the buzziest follow‑ups, Genspark and Flowith, already boast benchmark scores that match or edge past Manus’s.

    Genspark, led by former Baidu executives Eric Jing and Kay Zhu, links many small “super agents” through what it calls multi‑component prompting. The agent can switch among several large language models, accepts both images and text, and carries out tasks from making slide decks to placing phone calls. Whereas Manus relies heavily on Browser Use, a popular open-source product that lets agents operate a web browser in a virtual window like a human, Genspark directly integrates with a wide array of tools and APIs. Launched in April, the company says that it already has over 5 million users and over $36 million in yearly revenue.
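
    For a sense of what “letting an agent operate a web browser” looks like in code, the sketch below follows the shape of Browser Use’s published quick-start examples: you give the library a task and an LLM client, and it drives a real browser window. The exact import paths, class names, and constructor arguments here are assumptions and may differ between versions of the library.

        import asyncio

        from browser_use import Agent            # open-source browser-automation library
        from langchain_openai import ChatOpenAI  # any chat model client the library supports

        async def main():
            agent = Agent(
                task=("Find three round-trip flights from Shanghai to Tokyo next month "
                      "and summarize the prices"),
                llm=ChatOpenAI(model="gpt-4o"),
            )
            result = await agent.run()  # the agent scrolls, clicks, and types in a real browser window
            print(result)

        asyncio.run(main())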

    Flowith, the work of a young team that first grabbed public attention in April 2025 at a developer event hosted by the popular social media app Xiaohongshu, takes a different tack. Marketed as an “infinite agent,” it opens on a blank canvas where each question becomes a node on a branching map. Users can backtrack, take new branches, and store results in personal or sharable “knowledge gardens”—a design that feels more like project management software (think Notion) than a typical chat interface. Every inquiry or task builds its own mind-map-like graph, encouraging a more nonlinear and creative interaction with AI. Flowith’s core agent, NEO, runs in the cloud and can perform scheduled tasks like sending emails and compiling files. The founders want the app to be a “knowledge marketbase” and aim to tap into the social aspect of AI, with the aspiration of becoming “the OnlyFans of AI knowledge creators.”

    What they also share with Manus is global ambition. Both Genspark and Flowith have stated that their primary focus is the international market.

    A global address

    Startups like Manus, Genspark, and Flowith—though founded by Chinese entrepreneurs—could blend seamlessly into the global tech scene and compete effectively abroad. Founders, investors, and analysts that MIT Technology Review has spoken to believe Chinese companies are moving fast, executing well, and quickly coming up with new products. 

    Money reinforces the pull to launch overseas. Customers there pay more, and there are plenty to go around. “You can price in USD, and with the exchange rate that’s a sevenfold multiplier,” Manus cofounder Xiao Hong quipped on a podcast. “Even if we’re only operating at 10% power because of cultural differences overseas, we’ll still make more than in China.”

    But creating the same functionality in China is a challenge. Major US AI companies including OpenAI and Anthropic have opted out of mainland China because of geopolitical risks and challenges with regulatory compliance. Their absence initially created a black market as users resorted to VPNs and third-party mirrors to access tools like ChatGPT and Claude. That vacuum has since been filled by a new wave of Chinese chatbots—DeepSeek, Doubao, Kimi—but the appetite for foreign models hasn’t gone away. 

    Manus, for example, uses Anthropic’s Claude Sonnet—widely considered the top model for agentic tasks. Manus cofounder Zhang Tao has repeatedly praised Claude’s ability to juggle tools, remember contexts, and hold multi‑round conversations—all crucial for turning chatty software into an effective executive assistant.

    But the company’s use of Sonnet has made its agent functionally unusable inside China without a VPN. If you open Manus from a mainland IP address, you’ll see a notice explaining that the team is “working on integrating Qwen’s model,” a special local version that is built on top of Alibaba’s open-source model. 

    An engineer overseeing ByteDance’s work on developing an agent, who spoke to MIT Technology Review anonymously to avoid sanction, said that the absence of Claude Sonnet models “limits everything we do in China.” DeepSeek’s open models, he added, still hallucinate too often and lack training on real‑world workflows. Developers we spoke with rank Alibaba’s Qwen series as the best domestic alternative, yet most say that switching to Qwen knocks performance down a notch.

    Jiaxin Pei, a postdoctoral researcher at Stanford’s Institute for Human‑Centered AI, thinks that gap will close: “Building agentic capabilities in base LLMs has become a key focus for many LLM builders, and once people realize the value of this, it will only be a matter of time.”

    For now, Manus is doubling down on audiences it can already serve. In a written response, the company said its “primary focus is overseas expansion,” noting that new offices in San Francisco, Singapore, and Tokyo have opened in the past month.

    A super‑app approach

    Although the concept of AI agents is still relatively new, the consumer-facing AI app market in China is already crowded with major tech players. DeepSeek remains the most widely used, while ByteDance’s Doubao and Moonshot’s Kimi have also become household names. However, most of these apps are still optimized for chat and entertainment rather than task execution. This gap in the local market has pushed China’s big tech firms to roll out their own user-facing agents, though early versions remain uneven in quality and rough around the edges. 

    ByteDance is testing Coze Space, an AI agent based on its own Doubao model family that lets users toggle between “plan” and “execute” modes, so they can either directly guide the agent’s actions or step back and watch it work autonomously. It connects up to 14 popular apps, including GitHub, Notion, and the company’s own Lark office suite. Early reviews say the tool can feel clunky and has a high failure rate, but it clearly aims to match what Manus offers.

    Meanwhile, Zhipu AI has released a free agent called AutoGLM Rumination, built on its proprietary ChatGLM models. Shanghai‑based Minimax has launched Minimax Agent. Both products look almost identical to Manus and demo basic tasks such as building a simple website, planning a trip, making a small Flash game, or running quick data analysis.

    Despite the limited usability of most general AI agents launched within China, big companies have plans to change that. During a May 15 earnings call, Tencent president Liu Zhiping teased an agent that would weave automation directly into China’s most ubiquitous app, WeChat. 

    Considered the original super-app, WeChat already handles messaging, mobile payments, news, and millions of mini‑programs that act like embedded apps. These programs give Tencent, WeChat’s developer, access to data from millions of services that pervade everyday life in China, an advantage most competitors can only envy.

    Historically, China’s consumer internet has splintered into competing walled gardens—share a Taobao link in WeChat and it resolves as plaintext, not a preview card. Unlike the more interoperable Western internet, China’s tech giants have long resisted integration with one another, choosing to wage platform war at the expense of a seamless user experience.

    But the use of mini‑programs has given WeChat unprecedented reach across services that once resisted interoperability, from gym bookings to grocery orders. An agent able to roam that ecosystem could bypass the integration headaches dogging independent startups.

    Alibaba, the e-commerce giant behind the Qwen model series, has been a front-runner in China’s AI race but has been slower to release consumer-facing products. Even though Qwen was the most downloaded open-source model on Hugging Face in 2024, it didn’t power a dedicated chatbot app until early 2025. In March, Alibaba rebranded its cloud storage and search app Quark into an all-in-one AI search tool. By June, Quark had introduced DeepResearch—a new mode that marks its most agent-like effort to date. 

    ByteDance and Alibaba did not reply to MIT Technology Review’s request for comments.

    “Historically, Chinese tech products tend to pursue the all-in-one, super-app approach, and the latest Chinese AI agents reflect just that,” says Li of Simular, who previously worked at Google DeepMind on AI-enabled work automation. “In contrast, AI agents in the US are more focused on serving specific verticals.”

    Pei, the researcher at Stanford, says that existing tech giants could have a huge advantage in bringing the vision of general AI agents to life—especially those with built-in integration across services. “The customer-facing AI agent market is still very early, with tons of problems like authentication and liability,” he says. “But companies that already operate across a wide range of services have a natural advantage in deploying agents at scale.”
  • Inside The AI-Powered Modeling Agency Boom — And What Comes Next

    From lifelike avatars to automated fan interactions, AI is remaking digital modeling. But can tech scale intimacy — or will it erode the human spark behind the screen? (Getty)
    The AI boom has been defined by unprecedented innovation across nearly every sector. From improving flight punctuality through AI-powered scheduling to detecting early markers of Alzheimer’s disease, AI is changing how we live and work. And the advertising world isn’t left out.

    In March of this year, OpenAI’s GPT-4o sent the internet into a frenzy with its ability to generate Studio Ghibli-style images. The model produces realistic, emotionally nuanced visuals from a series of prompts — a feat that has led some to predict the demise of visual arts as we know them. While such conclusions may be premature, there’s growing belief among industry players that AI could transform how digital model agencies operate.

    That belief isn’t limited to one startup. A new class of AI-powered agencies — including FanPro, Lalaland.ai, Deep Agency and The Diigitals — is testing whether modeling can be automated without losing its creative edge. Some use AI to generate lifelike avatars. Others offer virtual photo studios, CRM — customer relationship management — integrations, or creator monetization tools. Together, they reflect a big shift in how digital modeling agencies think about labor, revenue and scale.

    FanPro — founded by Tyron Humphris in 2023 to help digital model agencies scale efficiently — offers a striking case study. The fully self-funded company, Humphris said in an interview, reached $1 million in revenue within its first 90 days and crossed eight figures by 2024, all while maintaining a lean team by automating nearly every process.

    As Humphris noted, “the companies that will lead this next decade won’t just be the ones with the best marketing or biggest budgets. They’ll be the ones who use AI, automation and systems thinking to scale with precision, all while staying lean and agile.”
    That explains the big bet that startups like FanPro are making — but how far can it really go? And why should digital model agencies care in the first place?
    Automation In Digital Model Agencies
    To understand how automation works in the digital modeling industry — a fast-rising corner of the creator economy — it helps to understand what it’s replacing. A typical digital model agency juggles five or more monetization platforms per creator — from OnlyFans and Fansly to TikTok and Instagram. But behind every viral post is a grind of scheduling, analytics, upselling, customer support and retention. The average agency may need 10 to 15 contractors to manage a roster of just a few high-performing creators.
    These agencies oversee a complex cycle: content creation, onboarding, audience engagement and sales funnel optimization, usually across several monetization platforms. According to Humphris, there’s often a misconception that running a digital model agency is just about posting pretty pictures. But in reality, he noted, it’s more. “It’s CRM, data science and psychology all wrapped in one. If AI can streamline even half of that, it’s a game-changer.”

    That claim reflects a growing pain point in the creator economy, where agencies swim in an ocean of tools in an attempt to monetize attention for creators while simultaneously managing marketing, sales and customer support. For context, a 2024 Clevertouch Consulting study revealed that 54% of marketers use more than 50 tools to manage operations — many stitched together with Zapier or manual workarounds.
    Tyron Humphris, founder of FanPro. (FanPro)
    But, according to Humphris, “no matter how strong your offer is, if you don’t have systems, processes and accountability built into the business, it’s going to collapse under pressure.”
    And that’s where AI steps in. Beyond handling routine tasks, large language models and automation stacks now allow agencies to scale operations while staying lean. With minimal human input, agencies can schedule posts, auto-respond to DMs, upsell subscriptions, track social analytics and manage retention flows. What once required a full team of marketers, virtual assistants and sales reps can now be executed by a few well-trained AI agents.
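
    As a hedged illustration of that kind of automation, the sketch below drafts a reply to an incoming fan message with an LLM but routes anything sensitive to a human instead of answering automatically. It shows the general pattern, not FanPro’s actual system; generate_reply and the keyword list are hypothetical placeholders.

        ESCALATE_KEYWORDS = ("refund", "chargeback", "unsubscribe", "legal")

        def generate_reply(message: str, persona: str) -> str:
            """Hypothetical stand-in for an LLM call that drafts a reply in the creator's voice."""
            raise NotImplementedError

        def handle_dm(message: str, persona: str) -> dict:
            # Anything touching money or consent is routed to a person, not a bot.
            if any(word in message.lower() for word in ESCALATE_KEYWORDS):
                return {"action": "escalate_to_human", "draft": None}
            draft = generate_reply(message, persona)
            # Even automated replies can sit in a queue for a quick human glance,
            # the human-in-the-loop balance the next section discusses.
            return {"action": "queue_for_review", "draft": draft}
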
    FanPro claims that over 90% of its operations — from dynamic pricing to fan interactions — are now handled by automation. Likewise, Deep Agency allows creators to generate professional-grade photo shoots without booking a studio or hiring staff, and Lalaland.ai helps fashion brands generate AI avatars to reduce production costs and increase diversity in representation.
    A Necessary Human Touch
    Still, not everyone is convinced that AI can capture the nuance of digital intimacy. Some experts have raised concerns that hyper automation in creator-driven industries could flatten human expression into predictable engagement patterns, risking long-term user loyalty.
    A 2024 ContentGrip study of 1,000 consumers found 80% of respondents would likely switch brands that rely heavily on AI-generated emails, citing a loss of authenticity. Nearly half said such messages made them feel “less connected” to the brand.
    Humphris doesn’t disagree.
    “AI can do a lot, but it needs to be paired with someone who understands psychology,” he said. “We didn’t scale because we had the best tech. We scaled because we understood human behavior and built systems that respected it.”
    Humphris’ sentiment isn’t a mere anecdote but one rooted in research. For example, a recent study by Northeastern University showed that AI influencers often reduce brand trust — especially when users aren’t aware the content is AI-generated. The implication is clear: over-automating the wrong parts of human interaction can backfire.
    Automation doesn’t — and shouldn’t — mean that human input becomes obsolete. Rather, as many industry experts have noted, it will enhance efficiency but not replace empathy. While AI can process data at speed and generate alluring visuals, it cannot replicate human creativity or emotional intelligence. Nor does AI understand the psychology of human behavior the way humans do, an understanding Humphris credits for FanPro’s almost-instant success.
    What’s Working — And What’s Not
    Lalaland.ai and The Diigitals have earned praise for enhancing inclusivity, enabling brands to feature underrepresented body types, skin tones and styles. Meanwhile, FanPro focuses on building AI “growth engines” for agencies — full-stack systems that combine monetization tools, CRM and content flows.
    But not all reactions have been positive.
    In November 2024, fashion brand Mango faced backlash for its use of AI-generated models, which critics called “false advertising” and “a threat to real jobs.” The New York Post covered the fallout in detail, highlighting how ethical lines are still being drawn.
    As brands look to balance cost savings with authenticity, some have begun labeling AI-generated content more clearly — or embedding human oversight into workflows, rather than removing it.
    Despite offering an automation stack, FanPro itself wasn’t an immediate adopter of automation in its processes. But, as Humphris noted, embracing AI made all the difference for the company. “If we had adopted AI and automation earlier, we would’ve hit 8 figures much faster and with far less stress,” he noted.
    Automation In The New Era
    FanPro is a great example of how AI integration, when done the right way, could be a profitable venture for digital model agencies.
    Whether or not the company’s model becomes the blueprint for AI-first digital agencies, it’s clear that there’s a big shift in the creator economy, where automation isn’t only viewed as a time-saver, but also as a foundational pillar for businesses.
    As digital model agencies lean further into an AI-centric future, the bigger task is remembering what not to automate — the spark of human connection that built the industry in the first place.
    “In this new era of automation,” Humphris said, “the smartest agencies won’t just ask what AI can do. They’ll ask what it shouldn’t.”
    #inside #aipowered #modeling #agency #boom
    Inside The AI-Powered Modeling Agency Boom — And What Comes Next
    From lifelike avatars to automated fan interactions, AI is remaking digital modeling. But can tech ... More scale intimacy — or will it erode the human spark behind the screen?getty The AI boom has been defined by unprecedented innovation across nearly every sector. From improving flight punctuality through AI-powered scheduling to detecting early markers of Alzheimer’s disease, AI is modifying how we live and work. And the advertising world isn’t left out. In March of this year, OpenAI’s GPT-4o sent the internet into a frenzy with its ability to generate Studio Ghibli-style images. The model produces realistic, emotionally nuanced visuals from a series of prompts — a feat that has led some to predict the demise of visual arts as we know them. While such conclusions may be premature, there’s growing belief among industry players that AI could transform how digital model agencies operate. That belief isn’t limited to one startup. A new class of AI-powered agencies — including FanPro, Lalaland.ai, Deep Agency andThe Diigitals — is testing whether modeling can be automated without losing its creative edge. Some use AI to generate lifelike avatars. Others offer virtual photo studios, CRM — customer relationship management — integrations, or creator monetization tools. Together, they reflect a big shift in how digital modeling agencies think about labor, revenue and scale. FanPro — founded by Tyron Humphris in 2023 to help digital model agencies scale efficiently — offers a striking case study. Fully self-funded, Humphris said in an interview that the company reached million in revenue within its first 90 days and crossed eight figures by 2024, all while maintaining a lean team by automating nearly every process. As Humphris noted, “the companies that will lead this next decade won’t just be the ones with the best marketing or biggest budgets. They’ll be the ones who use AI, automation and systems thinking to scale with precision, all while staying lean and agile.” That explains the big bet that startups like FanPro are making — but how far can it really go? And why should digital model agencies care in the first place? Automation In Digital Model Agencies To understand how automation works in the digital modeling industry — a fast-rising corner of the creator economy — it helps to understand what it’s replacing. A typical digital model agency juggles five or more monetization platforms per creator — from OnlyFans and Fansly to TikTok and Instagram. But behind every viral post is a grind of scheduling, analytics, upselling, customer support and retention. The average agency may need 10 to 15 contractors to manage a roster of just a few high-performing creators. These agencies oversee a complex cycle: content creation, onboarding, audience engagement and sales funnel optimization, usually across several monetization platforms. According to Humphris, there’s often a misconception that running a digital model agency is just about posting pretty pictures. But in reality, he noted, it’s more. “It’s CRM, data science and psychology all wrapped in one. If AI can streamline even half of that, it’s a game-changer.” That claim reflects a growing pain point in the creator economy, where agencies swim in an ocean of tools in an attempt to monetize attention for creators while simultaneously managing marketing, sales and customer support. 
For context, a 2024 Clevertouch Consulting study revealed that 54% of marketers use more than 50 tools to manage operations — many stitched together with Zapier or manual workarounds. Tyron Humphris, founder of FanProFanPro But, according to Humphris, “no matter how strong your offer is, if you don’t have systems, processes and accountability built into the business, it’s going to collapse under pressure.” And that’s where AI steps in. Beyond handling routine tasks, large language models and automation stacks now allow agencies to scale operations while staying lean. With minimal human input, agencies can schedule posts, auto-respond to DMs, upsell subscriptions, track social analytics and manage retention flows. What once required a full team of marketers, virtual assistants and sales reps can now be executed by a few well-trained AI agents. FanPro claims that over 90% of its operations — from dynamic pricing to fan interactions — are now handled by automation. Likewise, Deep Agency allows creators to generate professional-grade photo shoots without booking a studio or hiring staff and Lalaland.ai helps fashion brands generate AI avatars to reduce production costs and increase diversity in representation. A Necessary Human Touch Still, not everyone is convinced that AI can capture the nuance of digital intimacy. Some experts have raised concerns that hyper automation in creator-driven industries could flatten human expression into predictable engagement patterns, risking long-term user loyalty. A 2024 ContentGrip study of 1,000 consumers found 80% of respondents would likely switch brands that rely heavily on AI-generated emails, citing a loss of authenticity. Nearly half said such messages made them feel “less connected” to the brand. Humphris doesn’t disagree. “AI can do a lot, but it needs to be paired with someone who understands psychology,” he said. “We didn’t scale because we had the best tech. We scaled because we understood human behavior and built systems that respected it.” Humphris’ sentiment isn’t a mere anecdote but one rooted in research. For example, a recent study by Northeastern University showed that AI influencers often reduce brand trust — especially when users aren’t aware the content is AI-generated. The implication is clear: over-automating the wrong parts of human interaction can backfire. Automation doesn’t — and shouldn’t — mean that human input becomes obsolete. Rather, as many industry experts have noted, it will enhance efficiency but not replace empathy. While AI can process data at speed and generate alluring visuals, it cannot replicate human creativity or emotional intelligence. Neither does AI know the psychology of human behavior like humans do, a trait Humphris credits for their almost-instant success. What’s Working — And What’s Not Lalaland.ai and The Diigitals have earned praise for enhancing inclusivity, enabling brands to feature underrepresented body types, skin tones and styles. Meanwhile, FanPro focuses on building AI “growth engines” for agencies — full-stack systems that combine monetization tools, CRM and content flows. But not all reactions have been positive. In November 2024, fashion brand Mango faced backlash for its use of AI-generated models, which critics called “false advertising” and “a threat to real jobs.” The New York Post covered the fallout in detail, highlighting how ethical lines are still being drawn. 
As brands look to balance cost savings with authenticity, some have begun labeling AI-generated content more clearly — or embedding human oversight into workflows, rather than removing it. Despite offering an automation stack, FanPro itself wasn’t an immediate adopter of automation in its processes. But, as Humphris noted, embracing AI made all the difference for the company. “If we had adopted AI and automation earlier, we would’ve hit 8 figures much faster and with far less stress,” he noted. Automation In The New Era FanPro is a great example of how AI integration, when done the right way, could be a profitable venture for digital model agencies. Whether or not the company’s model becomes the blueprint for AI-first digital agencies, it’s clear that there’s a big shift in the creator economy, where automation isn’t only viewed as a time-saver, but also as a foundational pillar for businesses. As digital model agencies lean further into an AI-centric future, the bigger task is remembering what not to automate — the spark of human connection that built the industry in the first place. “In this new era of automation,” Humphris said, “the smartest agencies won’t just ask what AI can do. They’ll ask what it shouldn’t.” #inside #aipowered #modeling #agency #boom
    WWW.FORBES.COM
    Inside The AI-Powered Modeling Agency Boom — And What Comes Next
    From lifelike avatars to automated fan interactions, AI is remaking digital modeling. But can tech ... More scale intimacy — or will it erode the human spark behind the screen?getty The AI boom has been defined by unprecedented innovation across nearly every sector. From improving flight punctuality through AI-powered scheduling to detecting early markers of Alzheimer’s disease, AI is modifying how we live and work. And the advertising world isn’t left out. In March of this year, OpenAI’s GPT-4o sent the internet into a frenzy with its ability to generate Studio Ghibli-style images. The model produces realistic, emotionally nuanced visuals from a series of prompts — a feat that has led some to predict the demise of visual arts as we know them. While such conclusions may be premature, there’s growing belief among industry players that AI could transform how digital model agencies operate. That belief isn’t limited to one startup. A new class of AI-powered agencies — including FanPro, Lalaland.ai, Deep Agency andThe Diigitals — is testing whether modeling can be automated without losing its creative edge. Some use AI to generate lifelike avatars. Others offer virtual photo studios, CRM — customer relationship management — integrations, or creator monetization tools. Together, they reflect a big shift in how digital modeling agencies think about labor, revenue and scale. FanPro — founded by Tyron Humphris in 2023 to help digital model agencies scale efficiently — offers a striking case study. Fully self-funded, Humphris said in an interview that the company reached $1 million in revenue within its first 90 days and crossed eight figures by 2024, all while maintaining a lean team by automating nearly every process. As Humphris noted, “the companies that will lead this next decade won’t just be the ones with the best marketing or biggest budgets. They’ll be the ones who use AI, automation and systems thinking to scale with precision, all while staying lean and agile.” That explains the big bet that startups like FanPro are making — but how far can it really go? And why should digital model agencies care in the first place? Automation In Digital Model Agencies To understand how automation works in the digital modeling industry — a fast-rising corner of the creator economy — it helps to understand what it’s replacing. A typical digital model agency juggles five or more monetization platforms per creator — from OnlyFans and Fansly to TikTok and Instagram. But behind every viral post is a grind of scheduling, analytics, upselling, customer support and retention. The average agency may need 10 to 15 contractors to manage a roster of just a few high-performing creators. These agencies oversee a complex cycle: content creation, onboarding, audience engagement and sales funnel optimization, usually across several monetization platforms. According to Humphris, there’s often a misconception that running a digital model agency is just about posting pretty pictures. But in reality, he noted, it’s more. “It’s CRM, data science and psychology all wrapped in one. If AI can streamline even half of that, it’s a game-changer.” That claim reflects a growing pain point in the creator economy, where agencies swim in an ocean of tools in an attempt to monetize attention for creators while simultaneously managing marketing, sales and customer support. 
For context, a 2024 Clevertouch Consulting study revealed that 54% of marketers use more than 50 tools to manage operations — many stitched together with Zapier or manual workarounds. Tyron Humphris, founder of FanProFanPro But, according to Humphris, “no matter how strong your offer is, if you don’t have systems, processes and accountability built into the business, it’s going to collapse under pressure.” And that’s where AI steps in. Beyond handling routine tasks, large language models and automation stacks now allow agencies to scale operations while staying lean. With minimal human input, agencies can schedule posts, auto-respond to DMs, upsell subscriptions, track social analytics and manage retention flows. What once required a full team of marketers, virtual assistants and sales reps can now be executed by a few well-trained AI agents. FanPro claims that over 90% of its operations — from dynamic pricing to fan interactions — are now handled by automation. Likewise, Deep Agency allows creators to generate professional-grade photo shoots without booking a studio or hiring staff and Lalaland.ai helps fashion brands generate AI avatars to reduce production costs and increase diversity in representation. A Necessary Human Touch Still, not everyone is convinced that AI can capture the nuance of digital intimacy. Some experts have raised concerns that hyper automation in creator-driven industries could flatten human expression into predictable engagement patterns, risking long-term user loyalty. A 2024 ContentGrip study of 1,000 consumers found 80% of respondents would likely switch brands that rely heavily on AI-generated emails, citing a loss of authenticity. Nearly half said such messages made them feel “less connected” to the brand. Humphris doesn’t disagree. “AI can do a lot, but it needs to be paired with someone who understands psychology,” he said. “We didn’t scale because we had the best tech. We scaled because we understood human behavior and built systems that respected it.” Humphris’ sentiment isn’t a mere anecdote but one rooted in research. For example, a recent study by Northeastern University showed that AI influencers often reduce brand trust — especially when users aren’t aware the content is AI-generated. The implication is clear: over-automating the wrong parts of human interaction can backfire. Automation doesn’t — and shouldn’t — mean that human input becomes obsolete. Rather, as many industry experts have noted, it will enhance efficiency but not replace empathy. While AI can process data at speed and generate alluring visuals, it cannot replicate human creativity or emotional intelligence. Neither does AI know the psychology of human behavior like humans do, a trait Humphris credits for their almost-instant success. What’s Working — And What’s Not Lalaland.ai and The Diigitals have earned praise for enhancing inclusivity, enabling brands to feature underrepresented body types, skin tones and styles. Meanwhile, FanPro focuses on building AI “growth engines” for agencies — full-stack systems that combine monetization tools, CRM and content flows. But not all reactions have been positive. In November 2024, fashion brand Mango faced backlash for its use of AI-generated models, which critics called “false advertising” and “a threat to real jobs.” The New York Post covered the fallout in detail, highlighting how ethical lines are still being drawn. 
    As brands look to balance cost savings with authenticity, some have begun labeling AI-generated content more clearly — or embedding human oversight into workflows, rather than removing it.

    Despite offering an automation stack, FanPro itself wasn’t an immediate adopter of automation in its processes. But, as Humphris noted, embracing AI made all the difference for the company. “If we had adopted AI and automation earlier, we would’ve hit 8 figures much faster and with far less stress,” he noted.

    Automation In The New Era

    FanPro is a great example of how AI integration, when done the right way, could be a profitable venture for digital model agencies. Whether or not the company’s model becomes the blueprint for AI-first digital agencies, it’s clear that there’s a big shift in the creator economy, where automation isn’t only viewed as a time-saver, but also as a foundational pillar for businesses.

    As digital model agencies lean further into an AI-centric future, the bigger task is remembering what not to automate — the spark of human connection that built the industry in the first place. “In this new era of automation,” Humphris said, “the smartest agencies won’t just ask what AI can do. They’ll ask what it shouldn’t.”
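    To make the “automation stack” idea above concrete, here is a minimal, illustrative sketch of the kind of LLM-backed DM responder an agency might wire into its platform. This is not FanPro’s actual system — just one plausible shape such a hook could take. The helper functions (fetch_unread_dms, send_reply, queue_upsell) and the model name are assumptions standing in for whatever platform API and model an agency actually uses; only the OpenAI client call itself reflects a real library interface.

    # Minimal sketch of an LLM-backed DM auto-responder (illustrative only).
    # fetch_unread_dms(), send_reply(), and queue_upsell() are hypothetical
    # stand-ins for whatever platform/CRM API an agency actually uses.
    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

    SYSTEM_PROMPT = (
        "You reply to fan messages on behalf of a creator. "
        "Keep replies short, warm, and on-brand, and never promise anything off-platform."
    )

    def fetch_unread_dms() -> list[dict]:
        # Placeholder: a real stack would pull these from its messaging platform.
        return [{"fan_id": "fan_123", "text": "Loved the last post! Any more content coming?"}]

    def send_reply(fan_id: str, reply: str) -> None:
        print(f"[send] to {fan_id}: {reply}")  # placeholder for a real send call

    def queue_upsell(fan_id: str) -> None:
        print(f"[upsell] queued for {fan_id}")  # placeholder for a real retention flow

    def draft_reply(message_text: str) -> str:
        """Ask the model for a short, on-brand reply to one fan message."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name, not a recommendation
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": message_text},
            ],
            max_tokens=150,
        )
        return response.choices[0].message.content.strip()

    def run_once() -> None:
        """One pass over unread DMs: draft a reply, send it, and flag upsell interest."""
        for dm in fetch_unread_dms():
            reply = draft_reply(dm["text"])
            send_reply(dm["fan_id"], reply)
            if "more content" in dm["text"].lower() or "subscribe" in dm["text"].lower():
                queue_upsell(dm["fan_id"])

    if __name__ == "__main__":
        run_once()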
  • What Strava Buying 'The Breakaway' App Means for Its Users

    It looks like Strava is making moves to become more than just a social fitness tracker. The popular fitness app—arguably the best one of its kind—announced Thursday that it has acquired The Breakaway, an AI-powered cycling training app, marking its second major acquisition in just over a month. This follows Strava's purchase of Runna back in April. So, what do these acquisitions mean for users of The Breakaway and Strava alike? Will those apps' specific training plans become available as part of the Strava subscription? Will I have to pay for that whether I like it or not? Here's what you need to know.

    What The Breakaway brings to Strava

    The Breakaway uses AI to create customized training plans for cyclists pursuing specific performance goals. The app analyzes individual fitness data and objectives to generate workouts tailored to each user's needs and schedule. Similarly, Runna offers AI-generated training plans, but focuses on runners rather than cyclists. As people are speculating on Reddit, these apps could represent Strava's strategic push into more personalized training and coaching features.

    Zooming out, Strava has built its reputation on social fitness tracking. As a loyal Strava user myself, I believe no other running app can beat Strava's social and mapping features. This ability to tap into a community of fellow runners and cyclists has always differentiated Strava from pure tracking apps. Strava's core offering has remained relatively (and refreshingly) basic compared to specialized training apps. That said, these acquisitions sure do suggest the company wants to capture more and more of the fitness ecosystem by offering the kind of structured, goal-oriented training that serious athletes need.

    What this means for pricing

    Current subscribers don't need to worry about immediate price hikes. The Breakaway costs $9.99/month, or $69.99/year. (I guess runners are willing to shell out more, since Runna costs $19.99/month, or $119.99/year.) Strava's free tier lets you post your runs, interact with other users, and track some basic statistics about your performance. The premium tier, at $11.99/month or $79.99/year, gives you extra performance tracking and mapping tools. And according to statements from Strava, there are no plans to alter pricing structures or eliminate free access to the acquired apps' basic features. Whether this pricing structure will hold long-term remains to be seen, especially as Strava integrates these services into its broader platform.

    The bottom line

    Rather than users needing separate apps for social tracking and structured training, Strava appears to be building an all-in-one fitness ecosystem. Even for the most casual users, this could mean access to more training tools without leaving the Strava ecosystem. But as some disgruntled fans are voicing, it can be frustrating to see Strava scoop up AI-powered training features, rather than fix some of its most basic issues. (Seriously: I should be able to accurately search for past runs.) And we can only hope that pricing doesn't get too crazy. We'll see whether users are willing to pay more for what has traditionally been a social-first fitness app.

    Finally, as Strava continues to expand its feature set, it's worth remembering that the app defaults to public sharing. Regularly review your privacy settings to ensure you're not inadvertently sharing location data or personal information more broadly than intended.
  • Remembering the controversial iOS 7 introduction

    With just days to go before WWDC, the consensus is that Apple will unveil a big, visionOS-inspired redesign across its operating systems. And while some might be dreading a repeat of the iOS 7 announcement from a decade ago, it’s been long enough that many readers might not remember (or may have never even seen) what that overhaul actually looked like.
    So here’s a quick refresher on what happened, and why this year will likely (I mean, hopefully?) be different.

    The years between 2011 and 2013 were pretty busy at Apple. Following Steve Jobs’ passing, Apple fired Scott Forstall (then SVP of iOS Software) over the botched release of Apple Maps. That left a gap in software design leadership, which was filled by Jony Ive, who also led hardware design.
    Soon after, rumors began swirling that he was planning a major visual overhaul of the entire system.
    Flat
    In the run-up to WWDC 2013, the Wall Street Journal reported that Ive had been working on “a more ‘flat design’ that is starker and simpler,” a sharp departure from the great skeuomorphic visuals of the time (think linen textures, paper-like folders, glass effects, and yes, Corinthian leather).
    Some time after that, 9to5Mac exclusively shared mockups of the redesign, which had been leaked to Mark Gurman.

    It was chaos.
    I vividly remember thinking it was reckless to publish such unfairly primitive sketches of what would certainly be a more polished overhaul. After weeks of intense debate and fierce expectations that the rumors had been wrong, Apple introduced iOS 7.
    In the years that followed, Apple scaled back its over-flattening of the system, evolving toward what we have today. Now, that’s about to change once again.
    Why iOS 26 probably won’t be like iOS 7
    Currently, most reports tend to agree that the redesign will be deeply influenced by the visual language of visionOS, with its translucent layers, depth effects, and soft glassy textures. And even if you’re like me and you’ve never worn an Apple Vision Pro, chances are you’ve seen what visionOS looks like. Apple has already laid the groundwork, so the change won’t be such a jarring surprise, like with iOS 7.

    And from a design perspective, speaking as someone who’s worked in graphic design for over two decades, the best move Apple could make is exactly what’s been reported: updating all systems at once.
    If you’ve ever had to adapt interfaces and key visuals to multiple concepts, such as wide, narrow, square, rectangular, big, small, etc., you know that with every new aspect ratio, you become a little more familiar and more comfortable with each individual element.
    By starting out with the virtually boundless, unconstrained environment of visionOS, then increasingly moving to smaller interfaces across macOS, iPadOS, iOS, and watchOS, every decision informs past and future visual adaptations. In other words, a redesign this broad can be iterative in both directions.
    Will it be beautiful? That’s subjective. Even iOS 7 had a handful of defenders. But one thing is certain: Apple’s design team knows how much this moment matters.
    This is the biggest task they’ve been given since Ive left the company, and they are well aware of the contentious history of iOS design updates. The mere fact that the new design hasn’t leaked yet points to the absence of dissidents inside the team, and considering how close we are to the announcement, that’s already a victory in itself.

  • When did UX & content get so hard?

    Maybe it’s the state of the world, or just the state of my life, but it feels like everything in the world of digital content has gotten more fraught.

    It’s a weekday morning and I’m sipping coffee, scanning my calendar for my meetings today, preparing my work, swimming in a slog of newsletters, flipping between tabs open to current events in our very anxious, uncertain world, and trying to start my day with a deep breath. Yet I keep thinking: Why does this feel so hard? I know I don’t have to know it all right now. I’m taking another breath, remembering the words I wrote a couple years ago.

    It might be hard because things are tough right now

    I’ll acknowledge the obvious: The world is a scary place. The pandemic alone brought mental health issues to an all-time high — nearly 41% of U.S. adults experienced “psychological distress” during the pandemic, and since then, it’s been a rolling collection of additional anxieties. There are political upheavals, cultural shifts, and other changes happening every day, every hour, that feel uncertain. Thousands are losing their jobs in America, particularly dedicated civil servants. Diversity, equity, and inclusion practices are being chopped, impacting the future of higher education, government enterprise, and beyond. The identities of millions are being challenged politically.

    And in between, those of us working in the digital space — websites, development, digital marketing, etc. — are trying to keep up on how to do our jobs and do them well. At least well enough to cut through the noise. At least well enough to help the person on the other side of the screen, whoever that may be and whatever they may need.

    As I’ve been combing through the news, I find myself getting depressed, anxious, angry. I don’t have a lot of individual influence over what I can change, but I can join voices in my community, write letters to my representatives, and keep voting for the values that align with me and protect others. In your circle of influence, you can control your health, your mind, and how you show up for those around you. Focus on that and try to remember you’re Just Human.

    It might be hard because technology is changing how we do our jobs

    Artificial Intelligence (AI) has been around for a long time. In truth, we’ve used it in many forms over the years, from search engines to our phone voice assistants and more. But to my rattled brain, it feels like I woke up one morning and AI was everywhere and it was the only way forward to do our job, and gosh darnit if we don’t use it, we’re in trouble.

    One day I was just a content strategist, humming along, doing what I’ve done for 15 years. And then suddenly I need a robot to do it better. Yet some research tells us using AI makes us lonelier and makes work less enjoyable in some ways. Don’t get me wrong: AI has its place. I’ve found it incredibly useful for content editing, tightening, and formatting content for HTML. But it’s new and I’m learning. So that’s OK, right?

    Apparently not. The speed at which AI is adopted and expected to be used is, quite frankly, startling. Browse any job listing on LinkedIn and you’ll see AI and AI tools as part of the requirements for the job.

    My advice: Learn what you want, at the pace that’s comfortable. You can’t learn it all today, or tomorrow. You can only learn a little bit at a time. Remember learning to read? Me neither. But I can assure you it wasn’t in a day or two. It took years. Just like learning to write in cursive took practice.
    And riding my bike took some falls. It took time and patience. We have the right to exercise that now, as grown-ups. So take your time. Say to yourself, “Let me try,” and dismiss the voice over your shoulder or in your head telling you to go faster.

    It might be hard because we’re taking this…too seriously?

    Hear me out: In a world that’s so deadly serious (no pun intended), it seems we’re bringing that heaviness into how we do our work. One thing you’ll never hear me call my work as a content strategist: a vocation. While I love the work I do, it’s not all I’m meant to be.

    I like to tell people: I work to make websites better. The end. I do that by:

    - talking to real people
    - unearthing challenges and opportunities
    - employing useful, approachable strategies to make user experience better
    - building website navigation and architecture that connects pages and information in meaningful ways
    - teaching accessibility, inclusionary content, and the value of making information easy to read and understand for all people

    There’s more, of course, but you get the gist. And I’m one tiny fish in a sea of people who do this and do it well. But a quick scan of my inbox newsletters, LinkedIn posts, any other articles about user experience, and you’ll be bombarded with a five-alarm fire of what we all need to be doing better, pushing harder, hustling, self-publishing, and learning All The Things.

    A quick tip: You don’t have to run a four-minute mile. Take your time. Time a breath. Walk, don’t run. Focus on what you do well, and identify things you want to learn now, and make time to practice them. Don’t drown in the overload.

    It might be hard because we’re being too hard on ourselves

    Do something with me. Stop reading, stop thinking. Follow this instruction: Close your eyes. Take a deep breath in. Count to four. Hold for a count of three. Release for a count of four. In…1, 2, 3, 4…hold…Out…1, 2, 3, 4…

    We’re trying to keep up: At work, at home, everywhere in between. There are chores to be done, tasks to be completed, people to stay in touch with, events to attend, and somehow we still need to squeeze in a restful night’s sleep. We’re going too fast and too hard.

    As I recognize this in myself, I’ve been exercising the right to say ‘no’ to things I can’t prioritize. I’ve been putting my phone down and in another room so I can pick up my embroidery or crochet hook and do something analog. I make time at the end of the night to put the dishes away, tidy the living room, clear my office desk. And I’ve been making time for the people and things that make time for me, who reach out and say “Let’s get together and have a laugh.”

    And for goodness sake, please find a way to laugh. That alone may be a tall order in a world of chaos right now. But as the great Kurt Vonnegut once said, “I’d rather laugh than cry. There’s less cleaning up to do afterward.”

    Slow down. When you have a moment of free time, don’t ask what you should be doing. Ask what you want to do. And at the end of the day, pat yourself on the back. “You made it another day,” you can say quietly to your rattled brain as you wind down for the evening. “Good job, you.”
  • The Handmaid’s Tale Season 6 Finale Review: The Handmaid’s Tale

    Warning: contains spoilers for The Handmaid’s Tale series finale.
    Finale? More like DVD Extra. The cast of a once-unmissable show reunited one last time for a series of watery-eyed goodbyes and I love yous. 55 minutes of June trundling around a recently liberated Boston remembering things and having feelings? The Handmaid’s Tale hasn’t delivered a more inessential episode since the ‘What Luke Did’ flashback in season one.
    You know what’s to blame: therapy. It’s taught us concepts like ‘processing trauma’ and ‘closure’ – both useful in their context but ruinous when mistaken for storytelling. Real lives may benefit from being lived with wisdom, growth and acceptance, but fictional ones can afford more chaos. Characters don’t all need to bow out of their story with instructive understanding; some should be allowed to kick their way out pulling a grenade pin between their teeth. 

    The Handmaid’s Tale made its name as protest art with iconic imagery, a killer soundtrack and attitude to spare. It could have sent June thundering into the flames, but instead, she got this weepy valedictory tour. 

    A beautifully acted weepy valedictory tour, one should say. The cast of The Handmaid’s Tale never let you down, but on rare occasions like this one, they’re let down by writing that cares more about completing its characters’ emotions worksheets than about entertaining an audience. Don’t mistake me, I’m pleased that June had all of those repetitive reunions – with Serena, with Emily, with Luke, with baby Holly, with her mother, with Lydia, with Serena again… I just don’t feel like I needed to witness ‘em. How about some story instead? Why not let us see, say, Hannah in wartime?
    Why not is because that’s all being saved, along with Aunt Lydia’s next steps, for sequel The Testaments, a continuation that this episode dutifully set up without managing to raise much anticipation for.
    The series finale wasn’t about looking forward, it was all about looking back. Hence the surprise return of Alexis Bledel’s Emily, who showed up magically at June’s side with a callback to the start of their tentative friendship in season one. Emily was just one of a rollcall of faces from the past. Those also came in the form of cameos from departed friends Alma, Brianna, and Janine’s right eye, as June fantasised about the karaoke night that might have been. 
    The episode’s closing moments, in which June revisited the Waterford house burnt out by Serena in season three, were another callback. June took up the same window seat position as she had in episode one and delivered the same opening lines to the Margaret Atwood novel that started all this. Except, now those lines were the opening lines to June’s memoir, bringing the show metatextually full circle. 
    Nothing in the finale mattered so much as its heavily insisted-upon message, which was all about parents fighting to create a better world to keep their children safe. June readied herself to leave little Holly again, bolstered by Emily’s assurance that it didn’t mean she was abandoning her family. Luke planned to reach Hannah by liberating one state from Gilead at a time. Naomi Lawrence returned little Charlotte to her mother to keep her out of a warzone. Even Mark Tuello had an off-screen son conjured up to motivate his military moves.
    By the time Holly Sr had declaimed over not being able to keep June safe, and Serena had promised to dedicate herself solely to the raising of her precious baby Noah, it was hard not to feel a little Gilead propaganda going on in terms of children being the only reason that anybody does anything. I don’t recall that being the point Margaret Atwood was making back in 1985.

    Nor was the finale’s ultra-serious, highly emotive tone always the way of things in The Handmaid’s Tale. June’s irreverence, not to mention her excellent way with an expletive, is part of what’s made her an attractive lead character over the years. Next to Gilead’s mannered prayer-card conversational style, she’s been a breath of fresh air. In this finale though, June’s wryness was replaced with her telling Serena to “go in grace” like she was issuing a papal blessing, and telling little Holly all about how much mommies love their babies.


    There were flashes of beauty among the sap. The shot of June walking back along the bridge as Boston’s lights turned on was terrific both in idea and execution. Janine getting Charlotte back was a genuine – if unexplored – surprise. “The Wall” being co-opted by revolutionary graffiti and women reclaiming their own names was gorgeous.
    Overall though, this was a repetitive and surplus hour that used its screentime to remind us of things that didn’t really require a reminder. June misses Hannah. June once loved Nick. Serena feels bad. The children are our future. We know. You already told us. 

    The Handmaid’s Tale season six is streaming now on Hulu in the US, and airing weekly on Channel 4 in the UK. 
  • Tim Cook marks Memorial Day with remembrance, and a message of gratitude

    Apple CEO Tim Cook has marked Memorial Day with a social media post remembering and thanking fallen heroes. Cook habitually posts to X to mark major holidays and occasions. On May 26, he made a post to observe Memorial Day, an annual occasion in the U.S. to honor and mourn fallen U.S. military personnel. “We are grateful for all those who fought for our freedom. On Memorial Day, we reflect on the courage and selflessness of the heroes who made the ultimate sacrifice for our country.” — Tim Cook (@tim_cook), May 26, 2025