• AI isn’t coming for your job—it’s coming for your company

    Debate about whether artificial intelligence can replicate the intellectual labor of doctors, lawyers, or PhDs overlooks a deeper concern that’s looming: Entire companies—not just individual jobs—may be rendered obsolete by the accelerating pace of AI adoption.

    Reports suggesting OpenAI will charge $20,000 per month for agents trained at a PhD level spun up the ongoing debate about whose job is safe from AI and whose job is not.

    “I’ve not seen it be that impressive yet, but it’s likely not far off,” James Villarrubia, head of digital innovation and AI at NASA CAS, told me.

    Sean McGregor, the founder of Responsible AI Collaborative who earned a PhD in computer science, pointed out how many jobs are about more than just a set of skills: “Current AI technology is not sufficiently robust to allow unsupervised control of hazardous chemistry equipment, human experimentation, or other domains where human PhDs are currently required.”

    I polled the audience on this one because I wanted to broaden my perspective on which jobs would be eliminated. Instead, it changed my perspective.

    AI needs to outperform the system, not the role

    Suzanne Rabicoff, founder of the human agency think tank and fractional practice The Pie Grower, gave me reading assignments from her work instead of a quote.

    Her work showed me that these times are unprecedented. But something clicked when I read that she liked the angle of more efficient companies rising, rather than jobs simply being replaced at companies carrying heavy tech and human capital debt. Her response when I echoed that framing back? “Exactly my bet.”

    Sure, this is the first time that a robot is doing the homework for some college students. However, there is more precedent for robots moving market share than for replacing the same job function across a sector.

    Fortune 500 companies—especially those bloated with legacy processes and redundant labor—are always vulnerable to decline as newer, more nimble competitors rise. Not because any single job is replaced, but because the foundational economics of their business models no longer hold.

    AI doesn’t need to outperform every employee to render an enterprise obsolete. It only needs to outperform the system.

    Case study: The auto industry

    Take, for example, the decline of American car manufacturers in the late 20th century.

    In the 1950s, American automakers had a stranglehold on the car industry, not unlike today’s tech giants. In 1950, the U.S. produced about 75% of the world’s cars.

    But in the 1970s, Japanese automakers pioneered the use of robotics in auto manufacturing. These companies produced higher-quality vehicles at great value thanks to leaner, more precise operations.

    Firms like GM struggled to keep up, burdened by outdated factories and excessive human capital costs—including bloated pensions.

    The seismic shift in the decades to follow paints a picture of what could be in store for large companies now. In 1960, the U.S. produced about 48% of the world’s cars, while Japan accounted for just 5%. By 1980, Japan had captured around 29% of the market, while the U.S. had fallen to 23%.

    Today’s AI shakeup could look similar. Decades from now, we could look at Apple similarly to how we look at Ford now. AI startups with more agile structures are poised to eat market share. On top of that, startups can focus on solving specialized problems, sharpening their competitive edge.

    Will your company shrivel and die?

    The fallout has already begun. Gartner surveyed organizations in late 2023, finding that about half were developing their own AI tools. By the end of 2024, that share had dropped to 20%. As hype around generative AI cools, Gartner notes that many chief information officers are instead using outside vendors—either large language model providers or traditional software sellers with AI-enhanced offerings. In 2024, AI startups received nearly half of the $209 billion in global venture funding. If only 20% of legacy organizations currently feel confident competing with these upstarts, how many will feel that confidence as these startups mature?

    While headlines continue to fixate on whether AI can match PhD-level expertise, the deeper risk remains largely unspoken: Giant companies will shrivel and some may die. And when they do, your job is at risk whether you greet customers at the front desk or hold a PhD in an engineering discipline.

    But there are ways to stay afloat. One of the most impactful pieces of advice I ever received came from Jonathan Rosenberg, former SVP of products at Google and current advisor to Alphabet, when I visited the company’s campus in college. “You can’t just be great at what you do, you have to catch a great wave. Early people think it’s about the company, then the job, then the industry. It’s actually industry, company, job…”

    So, how do you catch the AI wave?

    Ankur Patel, CEO of Multimodal, advises workers to learn how to do their current jobs using AI tools that enhance productivity. He also notes that soft skills—mobilizing people, building relationships, leading teams—will become increasingly valuable as AI takes over more technical or routine tasks.

    “You can’t have AI be a group leader or team leader, right? I just don’t see that happening, even in the next generation forward,” Patel said. “So I think that’s a huge opportunity…to grow and learn from.”

    The bottom line is this: Even if the AI wave doesn’t replace you, it may replace the place you work. Will you get hit by the AI wave—or will you catch it?

    George Kailas is CEO of Prospero.ai.
    WWW.FASTCOMPANY.COM
  • Patel Taylor unveils images for 54-storey Canary Wharf tower

    How the 54-storey tower (centre) would look when built
    Architect Patel Taylor has unveiled images of what one of London’s tallest residential towers in Canary Wharf would look like.
    The 54-storey 77 Marsh Wall scheme is being developed by Areli Developments on behalf of British Airways Pension Trustees and would contain around 820 homes above a mixed-use podium, which would include retail, restaurant and café space.
    It would be Canary Wharf’s third tallest tower if built, behind the 235m One Canada Square and 233m Landmark Pinnacle.
    The scheme would require the demolition of the site’s existing building, a 17-storey office block built in the early 1990s known as Sierra Quebec Bravo.

    The 77 Marsh Wall scheme would include restaurants and retail at ground floor level
    Areli said the existing building offers “very little in the way of benefits to the community” and that it wanted to maximise the “unique and exciting” potential of the waterfront site with new public spaces, shops and restaurants.
    The podium would contain around 4,000sq m of retail, leisure and workspace along with a cinema and cycle parking under early plans aired in a public consultation. Green space is also included in the plans, which were aired at two public consultation events held last month.
    Homes in the tower above the podium would span a mix of tenures, including shared ownership, build to rent, social rent, apart-hotel and co-living.

    The site’s existing 17-storey office block would be demolished
    An environmental impact assessment scoping report has been drawn up by consultant Trium for Tower Hamlets council, with a planning application expected to be submitted later this summer.
    Other firms currently on the project team include planning consultant DP9 and communications firm Kanda Consulting.
    WWW.BDONLINE.CO.UK
  • The future of engineering belongs to those who build with AI, not without it


    When Salesforce CEO Marc Benioff recently announced that the company would not hire any more engineers in 2025, citing a “30% productivity increase on engineering” due to AI, it sent ripples through the tech industry. Headlines quickly framed this as the beginning of the end for human engineers — AI was coming for their jobs.
    But those headlines miss the mark entirely. What’s really happening is a transformation of engineering itself. Gartner named agentic AI as its top tech trend for this year. The firm also predicts that 33% of enterprise software applications will include agentic AI by 2028 — a significant portion, but far from universal adoption. The extended timeline suggests a gradual evolution rather than a wholesale replacement. The real risk isn’t AI taking jobs; it’s engineers who fail to adapt and are left behind as the nature of engineering work evolves.
    The reality across the tech industry reveals an explosion of demand for engineers with AI expertise. Professional services firms are aggressively recruiting engineers with generative AI experience, and technology companies are creating entirely new engineering positions focused on AI implementation. The market for professionals who can effectively leverage AI tools is extraordinarily competitive.
    While claims of AI-driven productivity gains may be grounded in real progress, such announcements often reflect investor pressure for profitability as much as technological advancement. Many companies are adept at shaping narratives to position themselves as leaders in enterprise AI — a strategy that aligns well with broader market expectations.
    How AI is transforming engineering work
    The relationship between AI and engineering is evolving in four key ways, each representing a distinct capability that augments human engineering talent but certainly doesn’t replace it. 
    First, AI excels at summarization, helping engineers distill massive codebases, documentation and technical specifications into actionable insights. Rather than spending hours poring over documentation, engineers can get AI-generated summaries and focus on implementation.
    Second, AI’s inferencing capabilities allow it to analyze patterns in code and systems and proactively suggest optimizations. This empowers engineers to identify potential bugs and make informed decisions more quickly and with greater confidence.
    Third, AI has proven remarkably adept at converting code between languages. This capability is proving invaluable as organizations modernize their tech stacks and attempt to preserve institutional knowledge embedded in legacy systems.
    Finally, the true power of gen AI lies in its expansion capabilities — creating novel content like code, documentation or even system architectures. Engineers are using AI to explore more possibilities than they could alone, and we’re seeing these capabilities transform engineering across industries. 
    In healthcare, AI helps create personalized medical instruction systems that adjust based on a patient’s specific conditions and medical history. In pharmaceutical manufacturing, AI-enhanced systems optimize production schedules to reduce waste and ensure an adequate supply of critical medications. Major banks have invested in gen AI for longer than most people realize, too; they are building systems that help manage complex compliance requirements while improving customer service. 
    The new engineering skills landscape
    As AI reshapes engineering work, it’s creating entirely new in-demand specializations and skill sets, like the ability to effectively communicate with AI systems. Engineers who excel at working with AI can extract significantly better results.
    Similar to how DevOps emerged as a discipline, large language model operations (LLMOps) focuses on deploying, monitoring and optimizing LLMs in production environments. Practitioners of LLMOps track model drift, evaluate alternative models and help to ensure consistent quality of AI-generated outputs.
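    To make the drift-tracking idea concrete, here is a minimal sketch of one way a team might flag drift in a stream of output-quality scores. The metric (standardized shift in the mean between a baseline window and a recent window) and the threshold are illustrative assumptions, not a standard LLMOps API; production systems typically use richer statistical tests and model-based evaluators.

```python
from statistics import mean, stdev

def drift_score(baseline, current):
    """Standardized shift of the current window's mean quality score
    relative to the baseline window (in baseline standard deviations)."""
    if len(baseline) < 2:
        raise ValueError("baseline window too small to estimate spread")
    spread = stdev(baseline)
    if spread == 0:
        return 0.0 if mean(current) == mean(baseline) else float("inf")
    return abs(mean(current) - mean(baseline)) / spread

def has_drifted(baseline, current, threshold=2.0):
    """Flag drift when the current mean moves more than `threshold`
    baseline standard deviations away (threshold is an assumption)."""
    return drift_score(baseline, current) > threshold
```

In use, a stable window of recent scores close to the baseline would pass, while a sharp quality drop would trip the flag and trigger a human review of the model or its prompts.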
    Creating standardized environments where AI tools can be safely and effectively deployed is becoming crucial. Platform engineering provides templates and guardrails that enable engineers to build AI-enhanced applications more efficiently. This standardization helps ensure consistency, security and maintainability across an organization’s AI implementations.
    Human-AI collaboration ranges from AI merely providing recommendations that humans may ignore, to fully autonomous systems that operate independently. The most effective engineers understand when and how to apply the appropriate level of AI autonomy based on the context and consequences of the task at hand. 
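    That spectrum of autonomy can be sketched as a simple policy gate. The level names and the two risk signals (reversibility, stakes) are hypothetical simplifications for illustration; real deployments weigh many more contextual factors.

```python
from enum import Enum

class Autonomy(Enum):
    SUGGEST = 1      # AI recommends; a human decides
    APPROVE = 2      # AI acts only after human sign-off
    AUTONOMOUS = 3   # AI acts independently

def required_autonomy(reversible: bool, high_stakes: bool) -> Autonomy:
    """Pick an autonomy level from task context: the riskier and less
    reversible the action, the more human oversight it gets."""
    if high_stakes:
        return Autonomy.SUGGEST
    if not reversible:
        return Autonomy.APPROVE
    return Autonomy.AUTONOMOUS
```

The point of encoding the decision is that the oversight level becomes an explicit, auditable part of the system rather than an ad hoc judgment made per task.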
    Keys to successful AI integration
    Effective AI governance frameworks — AI governance ranks No. 2 on Gartner’s top trends list — establish clear guidelines while leaving room for innovation. These frameworks address ethical considerations, regulatory compliance and risk management without stifling the creativity that makes AI valuable.
    Rather than treating security as an afterthought, successful organizations build it into their AI systems from the beginning. This includes robust testing for vulnerabilities like hallucinations, prompt injection and data leakage. By incorporating security considerations into the development process, organizations can move quickly without compromising safety.
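    As one small illustration of building in security from the start, a team might run every user input through a lexical screen before it reaches the model. The marker phrases below are hypothetical examples; a crude string match like this only catches the most obvious injections, and real systems layer it with model-based classifiers and output filtering.

```python
# Illustrative list of known injection phrasings (assumption, not exhaustive).
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard the system prompt",
    "reveal your hidden prompt",
)

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input contains a known injection phrasing.
    A first-line screen only; not a substitute for deeper defenses."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```

Flagged inputs can then be rejected, sanitized, or routed to a stricter review path before any model call is made.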
    Engineers who can design agentic AI systems create significant value. We’re seeing systems where one AI model handles natural language understanding, another performs reasoning and a third generates appropriate responses, all working in concert to deliver better results than any single model could provide.
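    The three-stage division of labor described above can be sketched as a pipeline of stubbed model calls. The stage functions here are stand-ins for real model invocations, and the refund scenario is invented for illustration; the point is the shape of the composition, not the stubs themselves.

```python
def understand(text: str) -> dict:
    """Stage 1 (stub): extract intent; a real system would call an NLU model."""
    intent = "refund" if "refund" in text.lower() else "other"
    return {"intent": intent, "text": text}

def reason(parsed: dict) -> dict:
    """Stage 2 (stub): decide an action; a real system would call a reasoning model."""
    action = "issue_refund" if parsed["intent"] == "refund" else "escalate"
    return {**parsed, "action": action}

def respond(decision: dict) -> str:
    """Stage 3 (stub): render a reply; a real system would call a generator model."""
    if decision["action"] == "issue_refund":
        return "Your refund has been initiated."
    return "Let me connect you with a specialist."

def pipeline(text: str) -> str:
    """Chain the three specialized stages into one agentic flow."""
    return respond(reason(understand(text)))
```

Because each stage has a narrow contract, any one model can be swapped or upgraded without rewriting the others, which is much of the appeal of this architecture.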
    As we look ahead, the relationship between engineers and AI systems will likely evolve from tool and user to something more symbiotic. Today’s AI systems are powerful but limited; they lack true understanding and rely heavily on human guidance. Tomorrow’s systems may become true collaborators, proposing novel solutions beyond what engineers might have considered and identifying potential risks humans might overlook.
    Yet the engineer’s essential role — understanding requirements, making ethical judgments and translating human needs into technological solutions — will remain irreplaceable. In this partnership between human creativity and AI, there lies the potential to solve problems we’ve never been able to tackle before — and that’s anything but a replacement.
    Rizwan Patel is head of information security and emerging technology at Altimetrik. 

    #future #engineering #belongs #those #who
    The future of engineering belongs to those who build with AI, not without it
    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More When Salesforce CEO Marc Benioff recently announced that the company would not hire any more engineers in 2025, citing a “30% productivity increase on engineering” due to AI, it sent ripples through the tech industry. Headlines quickly framed this as the beginning of the end for human engineers — AI was coming for their jobs. But those headlines miss the mark entirely. What’s really happening is a transformation of engineering itself. Gartner named agentic AI as its top tech trend for this year. The firm also predicts that 33% of enterprise software applications will include agentic AI by 2028 — a significant portion, but far from universal adoption. The extended timeline suggests a gradual evolution rather than a wholesale replacement. The real risk isn’t AI taking jobs; it’s engineers who fail to adapt and are left behind as the nature of engineering work evolves. The reality across the tech industry reveals an explosion of demand for engineers with AI expertise. Professional services firms are aggressively recruiting engineers with generative AI experience, and technology companies are creating entirely new engineering positions focused on AI implementation. The market for professionals who can effectively leverage AI tools is extraordinarily competitive. While claims of AI-driven productivity gains may be grounded in real progress, such announcements often reflect investor pressure for profitability as much as technological advancement. Many companies are adept at shaping narratives to position themselves as leaders in enterprise AI — a strategy that aligns well with broader market expectations. How AI is transforming engineering work The relationship between AI and engineering is evolving in four key ways, each representing a distinct capability that augments human engineering talent but certainly doesn’t replace it.  
First, AI excels at summarization, helping engineers distill massive codebases, documentation and technical specifications into actionable insights. Rather than spending hours poring over documentation, engineers can get AI-generated summaries and focus on implementation. Second, AI’s inferencing capabilities allow it to analyze patterns in code and systems and proactively suggest optimizations. This empowers engineers to identify potential bugs and make informed decisions more quickly and with greater confidence. Third, AI has proven remarkably adept at converting code between languages. This capability is proving invaluable as organizations modernize their tech stacks and attempt to preserve institutional knowledge embedded in legacy systems. Finally, the true power of gen AI lies in its expansion capabilities — creating novel content like code, documentation or even system architectures. Engineers are using AI to explore more possibilities than they could alone, and we’re seeing these capabilities transform engineering across industries.

In healthcare, AI helps create personalized medical instruction systems that adjust based on a patient’s specific conditions and medical history. In pharmaceutical manufacturing, AI-enhanced systems optimize production schedules to reduce waste and ensure an adequate supply of critical medications. Major banks have invested in gen AI for longer than most people realize, too; they are building systems that help manage complex compliance requirements while improving customer service.

The new engineering skills landscape

As AI reshapes engineering work, it’s creating entirely new in-demand specializations and skill sets, like the ability to communicate effectively with AI systems. Engineers who excel at working with AI can extract significantly better results. Similar to how DevOps emerged as a discipline, large language model operations (LLMOps) focuses on deploying, monitoring and optimizing LLMs in production environments.
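The monitoring side of LLMOps can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not a real toolchain: the scores and the `tolerance` threshold are invented, and a production system would track far richer signals than a single eval metric.

```python
from statistics import mean

def detect_drift(baseline_scores, recent_scores, tolerance=0.05):
    """Flag drift when the mean of recent eval scores falls more than
    `tolerance` below the baseline mean. A toy stand-in for real
    LLMOps monitoring of model quality over time."""
    gap = mean(baseline_scores) - mean(recent_scores)
    return gap > tolerance

# Hypothetical eval scores (0.0-1.0) from two monitoring windows.
baseline = [0.91, 0.88, 0.90, 0.92, 0.89]  # mean 0.90
recent = [0.82, 0.79, 0.84, 0.80, 0.81]    # mean 0.812

print(detect_drift(baseline, recent))  # prints True: quality has slipped
```

In practice, teams would run checks like this continuously against held-out eval sets, alerting engineers before degraded outputs reach users.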
Practitioners of LLMOps track model drift, evaluate alternative models and help ensure consistent quality of AI-generated outputs.

Creating standardized environments where AI tools can be safely and effectively deployed is becoming crucial. Platform engineering provides templates and guardrails that enable engineers to build AI-enhanced applications more efficiently. This standardization helps ensure consistency, security and maintainability across an organization’s AI implementations.

Human-AI collaboration ranges from AI merely providing recommendations that humans may ignore, to fully autonomous systems that operate independently. The most effective engineers understand when and how to apply the appropriate level of AI autonomy based on the context and consequences of the task at hand.

Keys to successful AI integration

Effective AI governance frameworks — which rank No. 2 on Gartner’s top trends list — establish clear guidelines while leaving room for innovation. These frameworks address ethical considerations, regulatory compliance and risk management without stifling the creativity that makes AI valuable.

Rather than treating security as an afterthought, successful organizations build it into their AI systems from the beginning. This includes robust testing for vulnerabilities like hallucinations, prompt injection and data leakage. By incorporating security considerations into the development process, organizations can move quickly without compromising safety.

Engineers who can design agentic AI systems create significant value. We’re seeing systems where one AI model handles natural language understanding, another performs reasoning and a third generates appropriate responses, all working in concert to deliver better results than any single model could provide. As we look ahead, the relationship between engineers and AI systems will likely evolve from tool and user to something more symbiotic.
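The multi-model pattern described above — one component for understanding, one for reasoning, one for generating responses — can be sketched as a simple pipeline. The functions below are illustrative placeholders, not a real framework API; in practice each stage would call a separate AI model rather than the keyword checks and lookup tables used here.

```python
# Each stage stands in for a call to a separate AI model.

def understand(text):
    # NLU stage: extract a crude intent (placeholder for a real model).
    intent = "refund" if "refund" in text.lower() else "other"
    return {"intent": intent, "text": text}

def reason(parsed):
    # Reasoning stage: decide an action from the extracted intent.
    return "approve_refund" if parsed["intent"] == "refund" else "escalate"

def respond(action):
    # Generation stage: produce the user-facing reply.
    replies = {"approve_refund": "Your refund has been approved.",
               "escalate": "Routing you to a human agent."}
    return replies[action]

def pipeline(text):
    # The three models work in concert, each handling one step.
    return respond(reason(understand(text)))

print(pipeline("I would like a refund for my order"))
# prints "Your refund has been approved."
```

The design point is separation of concerns: each stage can be evaluated, swapped, or guarded independently, which is exactly why the article argues engineers who can orchestrate such systems create significant value.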
Today’s AI systems are powerful but limited; they lack true understanding and rely heavily on human guidance. Tomorrow’s systems may become true collaborators, proposing novel solutions beyond what engineers might have considered and identifying potential risks humans might overlook. Yet the engineer’s essential role — understanding requirements, making ethical judgments and translating human needs into technological solutions — will remain irreplaceable. In this partnership between human creativity and AI, there lies the potential to solve problems we’ve never been able to tackle before — and that’s anything but a replacement. Rizwan Patel is head of information security and emerging technology at Altimetrik.
  • U.S. President Donald Trump’s Chief of Staff’s Personal Phone Was Hacked, With the Retrieved Information Used to Contact Influential Individuals and Officials

    White House Chief of Staff Susie Wiles uses a phone as she attends a National Day of Prayer event hosted by President Donald Trump in the Rose Garden at the White House, May 1, 2025 in Washington / Image credits - Andrew Harnik/Getty Images

    The personal phone of Susie Wiles, U.S. President Donald Trump’s chief of staff, was allegedly hacked, giving the individual responsible access to a bevy of contacts, including high-profile officials. According to the latest report, a federal probe has been launched, but there is no confirmation of how the phone was compromised in the first place.
    Contacts in Susie Wiles’ phone grew suspicious after the impersonator asked to move the conversation to Telegram, risking a leak of sensitive information
    Shortly after gaining access to the White House chief of staff’s personal phone, the hackers leveraged AI to impersonate Wiles and sent multiple contacts voice and text messages from a different number. It was only when the impersonator recommended continuing the conversation on a private platform like Telegram that the contacts realized something was off. FBI Director Kash Patel shared the following statement with CBS News regarding the incident.
    “The FBI takes all threats against the President, his staff, and our cybersecurity with the utmost seriousness; safeguarding our administration officials’ ability to securely communicate to accomplish the President's mission is a top priority.”
    As for how Wiles’ phone was compromised, TechCrunch asked White House spokesperson Anna Kelly whether a cloud account associated with the chief of staff’s device was breached, or whether her handset was part of a more sophisticated attack involving government-grade spyware. The outlet did not receive a meaningful response, suggesting that the investigation is still ongoing.
    This is the second incident in which Wiles has been targeted by hackers. The first occurred in 2024, when Iranian cyber-espionage operatives reportedly attempted to access her personal email account. A separate report claims they succeeded in bypassing its security, obtaining a dossier on Vice President JD Vance, who was Donald Trump’s running mate at the time.
    Images of the U.S. President’s chief of staff show that she currently uses an iPhone, which makes the breach all the more concerning given that Apple prides itself on robust security and privacy.
    News Source: The Wall Street Journal
  • AI Pace Layers: a framework for resilient product design

    Designing human-centered AI products can be arduous. Keeping up with the overall pace of change isn’t easy. But here’s a bigger challenge: The wildly different paces of change attached to the key elements of AI product strategy, design, and development can make managing those elements — and even thinking about them — overwhelming.

    Yesterday’s design processes and frameworks offer priceless guidance that still holds. But in many spots, they just don’t fit today’s environment. For instance, designers used to map out and user-test precise, predictable end-to-end screen flows. But flows are no longer precisely predictable. AI generates dynamic dialogues and custom-tailored flows on the fly, rendering much of the old practice unhelpful and infeasible.

    It’s easy for product teams to feel adrift nowadays — we can hoist the sails, but we’re missing a map and a rudder. We need frameworks tailored to the traits that fundamentally set AI apart from traditional software, including:

    • its capabilities for autonomy and collaboration,
    • its probabilistic nature,
    • its early need for quality data, and
    • its predictable unpredictability.
Humans tend to be perpetually surprised by its abilities — and its inabilities.

AI pace layers: design for resilience

Here’s a framework to address these challenges. Building on Stewart Brand’s “Shearing Layers” framework, AI Pace Layers helps teams grow thriving AI products by framing them as layered systems with components that function and evolve at different timescales. It helps anticipate points of friction and create resilient and humane products.

Each layer represents a specific domain of activity and responsibility, with a distinct pace of change.

* Unlike the other layers, Services cuts across multiple layers rather than sitting between them, and its pace of change fluctuates erratically.

Boundaries between layers call for special attention and care — friction at these points can produce destructive shearing and constructive turbulence.

I’ll dive deeper into this framework with some practical examples showing how it works. But first, a brief review of the precursors that inspired this framework will help you put it to good use.

The foundations

This model builds on the insights of several influential design frameworks from the professions of building architecture and traditional software design.

Shearing layers

In his 1994 book How Buildings Learn, Stewart Brand expanded on architect Frank Duffy’s concept of shearing layers. The core insight: buildings consist of components that change at different rates — Duffy’s Shell, Services, Scenery, and Sets.

“…there isn’t any such thing as a building. A building properly conceived is several layers of longevity of built components.” — Frank Duffy

Shearing Layers of Change, from How Buildings Learn: What Happens After They’re Built.

Expanding on Duffy’s work, Brand identified six layers, from the slow-changing “Site” to the rapidly evolving “Stuff.” As the layers move at different speeds, friction forms where they meet.
Buildings designed without mindful consideration of these different velocities tear themselves apart at these “shearing” points. Before long, they tend to be demolished and replaced. Buildings designed for resiliency allow for “slippage” between the moving layers — flexibility for the different rates of change to unfold with minimal conflict. Such buildings can thrive and remain useful for hundreds of years.

Pace layers

In 1999, Brand drew insights from ecologists to expand this concept beyond buildings and encompass human society. In The Clock of the Long Now: Time and Responsibility, he proposed “Pace Layers” — six levels ranging from rapid fashion to glacially slow nature.

Brand’s Pace Layers, as sketched by Jono Hey.

Brand again pointed out the boundaries, where the most intriguing and consequential changes emerge. Friction at the tension points can tear a building apart — or spur a civilization’s collapse — when we try to bind the layers too tightly together. But with mindful design and planning for slippage, activity along these boundary zones can also generate “constructive turbulence” that keeps systems balanced and resilient.

The most successful systems survive and thrive through times of change through resiliency, by absorbing and incorporating shocks.

“…a few scientists have been probing the same issue in ecological systems: how do they manage change, how do they absorb and incorporate shocks? The answer appears to lie in the relationship between components in a system that have different change-rates and different scales of size. Instead of breaking under stress like something brittle, these systems yield as if they were soft. Some parts respond quickly to the shock, allowing slower parts to ignore the shock and maintain their steady duties of system continuity.” — Stewart Brand

Roles and tendencies of the fast and slow layers.
Slower layers provide constraints and underpinnings for the faster layers, while faster layers induce adaptations in the slower layers that evolve the system.

Elements of UX

Jesse James Garrett’s classic The Elements of User Experience presents a five-layer model for digital design: Surface, Skeleton, Structure, Scope, and Strategy. Each layer answers a different set of questions, with the questions answered at each level setting constraints for the levels above. Lower layers set boundaries and underpinnings that help define the more concrete layers.

Jesse James Garrett’s five layers from The Elements of User Experience.

This framework doesn’t focus on time, or on tension points resulting from conflicting velocities. But it provides a comprehensive structure for shaping different aspects of digital product design, from abstract strategy to concrete surface elements.

AI Pace Layers: diving deeper

Building on these foundations, the AI Pace Layers framework adapts these concepts specifically for AI systems design. Let’s explore each layer and understand how design expertise contributes across the framework.

Sessions

Pace of change: Very fast
Focus: Performance of real-time interactions

This layer encompasses real-time dialogue, reasoning, and processing. These interplays happen between the user and AI, and between AI agents and other services and people, on behalf of the user. Sessions draw on lower-layer capabilities and components to deliver the “moments of truth” where product experiences succeed or fail. Feedback from the Sessions layer is crucial for improving and evolving the lower layers.

Key contributors: Users and AI agents — usually with zero direct human involvement backstage.

Example actions/decisions/artifacts: User/AI dialogue. Audio, video, text, images, and widgets rendered on the fly.
Real-time adaptations to context.

Skin

Pace of change: Moderately fast
Focus: Design patterns, guidelines, and assets

Skin encompasses visual, interaction, and content design.

Key contributors: Designers, content strategists, front-end developers, and user researchers.

Design’s role: This is where designers’ traditional expertise shines. They craft the interface elements, establish visual language, define interaction patterns, and create the design systems that represent the product’s capabilities to users.

Example actions/decisions/artifacts: UI component libraries, brand guidelines, prompt templates, tone of voice guidelines, navigation systems, visual design systems, patterns, content style guides.

Services

Pace of change: Wildly variable
Focus: AI computation capabilities, data systems orchestration, and operational intelligence

The Services layer provides probabilistic AI capabilities that sometimes feel like superpowers — and like superpowers, they can be difficult to manage. It encompasses foundation models, algorithms, data pipelines, evaluation frameworks, business logic, and computing resources. Services is an outlier that behaves differently from the other layers:

• It’s more prone to “shocks” and surprises that can ripple across the rest of the system.
• It varies wildly in pace of change.
• It cuts across multiple layers rather than sitting between two of them. That produces more cross-layer boundaries, more tension points, more risks of destructive friction, and more opportunities for constructive turbulence.

Key contributors: Data scientists, engineers, service designers, ethicists, product teams.

Design’s role: Designers partner with technical teams on evaluation frameworks, helping define what “good” looks like from a human experience perspective. They contribute to guardrails, monitoring systems, and multi-agent collaboration patterns, ensuring technical capabilities translate to meaningful human experiences.
Service design expertise helps orchestrate complex, multi-touchpoint AI capabilities.

Example actions/decisions/artifacts: Foundation model selection, changes, and fine-tuning. Evals, monitoring systems, guardrails, performance metrics. Business rules, workflow orchestration. Multi-agent collaboration and use of external tools. Continual appraisal and adoption of new tools, protocols, and capabilities.

Skeleton

Pace of change: Moderately slow
Focus: Fundamental structure and organization

This layer establishes the foundational architecture — the core interaction models, information architecture, and organizing principles.

Key contributors: Information architects, information designers, user researchers, system architects, engineers.

Design’s role: Designers with information architecture expertise are important in this layer. They design taxonomies, knowledge graphs, and classification systems that make complex AI capabilities comprehensible and usable. UX researchers help ensure these structures fit the audience’s mental models, contexts, and expectations.

Example actions/decisions/artifacts: Taxonomies, knowledge graphs, data models, system architecture, classification systems.

Scope

Pace of change: Slow
Focus: Product requirements

This layer defines core functional, content, and data requirements, accounting for the probabilistic nature of AI and defining acceptable levels of performance and variance.

Key contributors: Product managers, design strategists, design researchers, business stakeholders, data scientists, trust & safety specialists.

Design’s role: Design researchers and strategists contribute to requirements through generative and exploratory research. They help define error taxonomies and acceptable failure modes from a user perspective, informing metrics that capture technical performance and human experience quality.
Design strategists balance technical possibilities with human needs and ethical considerations.

Example actions/decisions/artifacts: Product requirements documents specifying reliability thresholds, data requirements, error taxonomies and acceptable failure modes, performance metrics frameworks, responsible AI requirements, risk assessment, core user stories and journeys, documentation of expected model variance and handling approaches.

Strategy

Pace of change: Very slow
Focus: Long-term vision and business goals

This foundation layer defines audience needs, core problems to solve, and business goals. In AI products, data strategy is central.

Key contributors: Executive leadership, design leaders, product leadership, business strategists, ethics boards.

Design’s role: Design leaders define problem spaces, identify opportunities, and plan roadmaps. They balance business needs with human values in strategy development. Designers with expertise in responsible AI help establish ethical frameworks and guiding principles that shape all other layers.

Example actions/decisions/artifacts: Problem space and opportunity assessments, market positioning documents, long-term product roadmaps, comprehensive data strategy planning, user research findings on core needs, ethical frameworks and guiding principles, business model documentation, competitive/cooperative AI ecosystem mapping.

Practical examples: tension points between layers

Tension point example 1: Bookmuse’s timeline troubles

Bookmuse is a promising new AI tool for novelists. Samantha, a writer, tries it out while hashing out the underpinnings of her latest time-travel historical fiction thriller. The Bookmuse team planned for plenty of Samantha’s needs. At first, she considers Bookmuse a handy assistant.
It supplements chats with tailored interactive visualizations that efficiently track character personalities, histories, relationships, and dramatic arcs.But Samantha is writing a story about time travelers interfering with World War I events, so she’s constantly juggling dates and timelines. Bookmuse falls short. It’s a tiny startup, and Luke, the harried cofounder who serves as a combination designer/researcher/product manager, hasn’t carved out any date-specific timeline tools or date calculators. He forgot to provide even a basic date picker in the design system.Problem: Bookmuse does its best to help Samantha with her story timeline. But it lacks effective tools for the job. Its date and time interactions feel confusing, clumsy, and out of step with the rest of its tone, look, and feel. Whenever Samantha consults the timeline, it breaks her out of her creative flow.Constructive turbulence opportunities:a) Present feedback mechanisms that ensure this sort of “missing piece” event results in the product team learning about the type of interaction pothole that appeared — without revealing details or content that compromise Samantha’ privacy and her work.b) Improve timeline/date UI and interaction patterns. Table stakes: Standard industry-best-practice date picker components that suit Bookmuse’s style, tone, and voice. Game changers: Widgets, visualizations, and patterns tailored to the special time-tracking/exploration challenges that fiction writers often wrestle with.c) Update the core usability heuristics and universal interaction design patterns baked into the evaluation frameworks, as part of regular eval reviews and updates. 
Result: When the team learns about a friction moment like this, they can prevent a host of future similar issues before they emerge.These improvements will make Bookmuse more resilient and useful.Tension point example 2: MedicalMind’s diagnostic dilemmaThousands of healthcare providers use MedicalMind, an AI-powered clinical decision support tool. Dr. Rina Patel, an internal medicine physician at a busy community hospital, relies on it to stay current with rapidly evolving medical research while managing her patient load.Thanks to a groundbreaking update, a MedicalMind AI modelis familiar with new medical research data and can recognize newly discovered connections between previously unrelated symptoms across different medical specialties. For example, it identified patterns linking certain dermatological symptoms to early indicators of cardiovascular issues — connections not yet widely recognized in standard medical taxonomies.But MedicalMind’s information architecturewas tailored to traditional medical classification systems, so it’s organized by body system, conditions by specialty, and treatments by mechanism of action. The MedicalMind team constructed this structure based on how doctors were traditionally trained to approach medical knowledge.Problem: When Dr. Patel enters a patient’s constellation of symptoms, MedicalMind’s AI can recognize potentially valuable cross-specialty patterns. But these insights can’t be optimally organized and presented because the underlying information architecturedoesn’t easily accommodate the new findings and relationships. The AI either forces the insights into ill-fitting categories or presents them as disconnected “additional notes” that tend to be overlooked. That reduces their clinical utility and Dr. 
Patel’s trust in the system.Constructive turbulence opportunities:a) Create an “emerging patterns” framework within the information architecturethat can accommodate new AI-identified patterns in ways that augment, rather than disrupt, the familiar classification systems that doctors rely on.b) Design flexible visualization components and interaction patterns and stylesspecifically for exploring, discussing, and documenting cross-category relationships. Let doctors toggle between traditional taxonomies and newer, AI-generated knowledge maps depending on their needs and comfort level.c) Implement a clinician feedback loop where specialists can validate and discuss new AI-surfaced relationships, gradually promoting validated patterns into the main classification system.These improvements will make MedicalMind more adaptive to emerging medical knowledge while maintaining the structural integrity that healthcare professionals rely on for critical decisions. This provides more efficient assistants for clinicians and better health for patients.Tension point example 3: ScienceSeeker’s hypothesis bottleneckScienceSeeker is an AI research assistant used by scientists worldwide. Dr. Elena Rodriguez, a molecular biologist, uses it to investigate protein interactions for targeted cancer drug delivery.The AI enginerecently gained the ability to generate sophisticated hypothesis trees with multiple competing explanations, track confidence levels for each branch, and identify which experiments would most efficiently disambiguate between theories. It can reason across scientific domains, connecting molecular biology with physics, chemistry, and computational modeling.But the interfaceremains locked in a traditional chatbot paradigm — a single-threaded exchange with responses appearing sequentially in a scrolling window.Problem: The AI engine and the problem space are natively multithreaded and multimodal, but the UI is limited to single-threaded conversation. When Dr. 
Rodriguez inputs her experimental results, the AI generates a rich, multidimensional analysis, but must flatten this complex reasoning into linear text. Critical relationships between hypotheses become buried in paragraphs, probability comparisons are difficult, and the holistic picture of how variables influence multiple hypotheses is lost. Dr. Rodriguez resorts to taking screenshots and manually drawing diagrams to reconstruct the reasoning that the AI possesses but cannot visually express.Constructive turbulence opportunities:a) Develop an expandable, interactive, infinite-canvas “hypothesis tree” visualizationthat helps the AI dynamically represent multiple competing explanations and their relationships. Scientists can interact with this to explore different branches spatially rather than sequentially.b) Create a dual-pane interface that maintains the chat for simple queries but provides the infinite canvas for complex reasoning, transitioning seamlessly based on response complexity.c) Implement collaborative, interactive node-based diagrams for multi-contributor experiment planning, where potential experiments appear as nodes showing how they would affect confidence in different hypothesis branches.This would transform ScienceSeeker’s limited text assistant into a scientific reasoning partner. 
It would help researchers visualize and interact with complex possibilities in ways that better fit how they tackle multidimensional problems.Navigating the future with AI Pace LayersAI Pace Layers offers product teams a new framework for seeing and shaping the bewildering structures and dynamics that power AI products.By recognizing the evolving layers and heeding and designing for their boundaries, AI design teams can:Transform tension points into constructive innovationAnticipate friction before it damages the product experienceGrow resilient and humane AI systems that absorb and integrate rapid technological change without losing sight of human needs.The framework’s value isn’t in rigid categorization, but in recognizing how components interact across timescales. For AI product teams, this awareness enables more thoughtful design choices that prevent destructive shearing that can tear apart an AI system.This framework is a work in progress, evolving alongside the AI landscape it describes.I’d love to hear from you, especially if you’ve built successful AI products and have insights on how this model could better reflect your experience. Please drop me a line or add a comment. Let’s develop more effective approaches to creating AI systems that enhance human potential while respecting human agency.Part of the Mindful AI Design series. Also see:The effort paradox in AI design: Why making things too easy can backfireBlack Mirror: “Override”. Dystopian storytelling for humane AI designStay updatedSubscribe to be notified when new articles in the series are published. Join our community of designers, product managers, founders and ethicists as we shape the future of mindful AI design.AI Pace Layers: a framework for resilient product design was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
    #pace #layers #framework #resilient #product
    AI Pace Layers: a framework for resilient product design
Designing human-centered AI products can be arduous. Keeping up with the overall pace of change isn’t easy. But here’s a bigger challenge: the wildly different paces of change attached to the key elements of AI product strategy, design, and development can make managing those elements — and even thinking about them — overwhelming.

Yesterday’s design processes and frameworks offer priceless guidance that still holds. But in many spots, they just don’t fit today’s environment. For instance, designers used to map out and user-test precise, predictable end-to-end screen flows. But flows are no longer precisely predictable. AI generates dynamic dialogues and custom-tailored flows on the fly, rendering much of the old practice unhelpful and infeasible.

It’s easy for product teams to feel adrift nowadays — we can hoist the sails, but we’re missing a map and a rudder. We need frameworks tailored to the traits that fundamentally set AI apart from traditional software, including its capabilities for autonomy and collaboration, its probabilistic nature, its early need for quality data, and its predictable unpredictability.
Humans tend to be perpetually surprised by its abilities — and its inabilities.

AI pace layers: design for resilience

Here’s a framework to address these challenges. Building on Stewart Brand’s “Shearing Layers” framework, AI Pace Layers helps teams grow thriving AI products by framing them as layered systems with components that function and evolve at different timescales. It helps anticipate points of friction and create resilient and humane products.

Each layer represents a specific domain of activity and responsibility, with a distinct pace of change.*

* Unlike the other layers, Services cuts across multiple layers rather than sitting between them, and its pace of change fluctuates erratically.

Boundaries between layers call for special attention and care — friction at these points can produce destructive shearing and constructive turbulence.

I’ll dive deeper into this framework with some practical examples showing how it works. But first, a brief review of the precursors that inspired this framework will help you put it to good use.

The foundations

This model builds on the insights of several influential design frameworks from the professions of building architecture and traditional software design.

Shearing layers (Duffy and Brand)

In his 1994 book How Buildings Learn, Stewart Brand expanded on architect Frank Duffy’s concept of shearing layers. The core insight: buildings consist of components that change at different rates.

Shell, Services, Scenery, and Sets (Frank Duffy, 1992).

“…there isn’t any such thing as a building. A building properly conceived is several layers of longevity of built components.” — Frank Duffy

Shearing Layers of Change, from How Buildings Learn: What Happens After They’re Built (Stewart Brand, 1994).

Expanding on Duffy’s work, Brand identified six layers, from the slow-changing “Site” to the rapidly evolving “Stuff.” As the layers move at different speeds, friction forms where they meet.
Buildings designed without mindful consideration of these different velocities tear themselves apart at these “shearing” points. Before long, they tend to be demolished and replaced. Buildings designed for resiliency allow for “slippage” between the moving layers — flexibility for the different rates of change to unfold with minimal conflict. Such buildings can thrive and remain useful for hundreds of years.

Pace layers (Brand)

In 1999, Brand drew insights from ecologists to expand this concept beyond buildings to encompass human society. In The Clock of the Long Now: Time and Responsibility, he proposed “Pace Layers” — six levels ranging from rapid fashion to glacially slow nature.

Brand’s Pace Layers (1999), as sketched by Jono Hey.

Brand again pointed out the boundaries, where the most intriguing and consequential changes emerge. Friction at the tension points can tear a building apart — or spur a civilization’s collapse — when we try to bind the layers too tightly together. But with mindful design and planning for slippage, activity along these boundary zones can also generate “constructive turbulence” that keeps systems balanced and resilient. The most successful systems survive and thrive through times of change through resiliency, by absorbing and incorporating shocks.

“…a few scientists (such as R. V. O’Neill and C. S. Holling) have been probing the same issue in ecological systems: how do they manage change, how do they absorb and incorporate shocks? The answer appears to lie in the relationship between components in a system that have different change-rates and different scales of size. Instead of breaking under stress like something brittle, these systems yield as if they were soft. Some parts respond quickly to the shock, allowing slower parts to ignore the shock and maintain their steady duties of system continuity.” — Stewart Brand

Roles and tendencies of the fast (upper) and slow (lower) layers (Brand).
Slower layers provide constraints and underpinnings for the faster layers, while faster layers induce adaptations in the slower layers that evolve the system.

Elements of UX (Garrett)

Jesse James Garrett’s classic The Elements of User Experience (2002) presents a five-layer model for digital design:

Surface (visual design)
Skeleton (interface design, navigation design, information design)
Structure (interaction design, information architecture)
Scope (functional specs, content requirements)
Strategy (user needs, site objectives)

Each layer answers a different set of questions, with the questions answered at each level setting constraints for the levels above. Lower layers set boundaries and underpinnings that help define the more concrete layers.

Jesse James Garrett’s 5 layers from The Elements of User Experience Design (2002).

This framework doesn’t focus on time, or on tension points resulting from conflicting velocities. But it provides a comprehensive structure for shaping different aspects of digital product design, from abstract strategy to concrete surface elements.

AI Pace Layers: diving deeper

Building on these foundations, the AI Pace Layers framework adapts these concepts specifically for AI systems design. Let’s explore each layer and understand how design expertise contributes across the framework.

Sessions

Pace of change: Very fast (milliseconds to minutes)
Focus: Performance of real-time interactions.

This layer encompasses real-time dialogue, reasoning, and processing. These interplays happen between the user and AI, and between AI agents and other services and people, on behalf of the user. Sessions draw on lower-layer capabilities and components to deliver the “moments of truth” where product experiences succeed or fail. Feedback from the Sessions layer is crucial for improving and evolving the lower layers.

Key contributors: Users and AI agents — usually with zero direct human involvement backstage.

Example actions/decisions/artifacts: User/AI dialogue. Audio, video, text, images, and widgets are rendered on the fly.
Real-time adaptations to context.

Skin

Pace of change: Moderately fast
Focus: Design patterns, guidelines, and assets

Skin encompasses visual, interaction, and content design.

Key contributors: Designers, content strategists, front-end developers, and user researchers.

Design’s role: This is where designers’ traditional expertise shines. They craft the interface elements, establish visual language, define interaction patterns, and create the design systems that represent the product’s capabilities to users.

Example actions/decisions/artifacts: UI component libraries, brand guidelines, prompt templates, tone of voice guidelines, navigation systems, visual design systems, patterns, content style guides.

Services

Pace of change: Wildly variable
Focus: AI computation capabilities, data systems orchestration, and operational intelligence

The Services layer provides probabilistic AI capabilities that sometimes feel like superpowers — and like superpowers, they can be difficult to manage. It encompasses foundation models, algorithms, data pipelines, evaluation frameworks, business logic, and computing resources.

Services is an outlier that behaves differently from the other layers:

• It’s more prone to “shocks” and surprises that can ripple across the rest of the system.
• It varies wildly in pace of change.
• It cuts across multiple layers rather than sitting between two of them. That produces more cross-layer boundaries, more tension points, more risks of destructive friction, and more opportunities for constructive turbulence.

Key contributors: Data scientists, engineers, service designers, ethicists, product teams

Design’s role: Designers partner with technical teams on evaluation frameworks, helping define what “good” looks like from a human experience perspective. They contribute to guardrails, monitoring systems, and multi-agent collaboration patterns, ensuring technical capabilities translate to meaningful human experiences.
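To make the guardrail idea concrete, here is a minimal sketch (not from the article) of the kind of check a team might run between the model and the user. Every name and threshold here is hypothetical, chosen only to illustrate the pattern of gating responses on human-experience criteria:

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    """Scores for a single model response, produced by an eval pipeline."""
    grounded: float   # 0-1: factual grounding score (hypothetical metric)
    tone_fit: float   # 0-1: match to the product's voice guidelines

# Thresholds a design/engineering team might agree on together.
THRESHOLDS = {"grounded": 0.8, "tone_fit": 0.6}

def passes_guardrails(result: EvalResult) -> bool:
    """Return True only if every score clears its threshold;
    a failing response would be retried, rerouted, or escalated."""
    return (result.grounded >= THRESHOLDS["grounded"]
            and result.tone_fit >= THRESHOLDS["tone_fit"])

print(passes_guardrails(EvalResult(grounded=0.9, tone_fit=0.7)))  # True
print(passes_guardrails(EvalResult(grounded=0.9, tone_fit=0.4)))  # False
```

The point is less the code than the collaboration it encodes: designers help decide what the scores measure and where the thresholds sit.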
Service design expertise helps orchestrate complex, multi-touchpoint AI capabilities.

Example actions/decisions/artifacts: Foundation model selection, changes, and fine-tuning. Evals, monitoring systems, guardrails, performance metrics. Business rules, workflow orchestration. Multi-agent collaboration and use of external tools. Continual appraisal and adoption of new tools, protocols, and capabilities.

Skeleton

Pace of change: Moderately slow
Focus: Fundamental structure and organization

This layer establishes the foundational architecture — the core interaction models, information architecture, and organizing principles.

Key contributors: Information architects, information designers, user researchers, system architects, engineers

Design’s role: Designers with information architecture expertise are important in this layer. They design taxonomies, knowledge graphs, and classification systems that make complex AI capabilities comprehensible and usable. UX researchers help ensure these structures fit the audience’s mental models, contexts, and expectations.

Example actions/decisions/artifacts: Taxonomies, knowledge graphs, data models, system architecture, classification systems.

Scope

Pace of change: Slow
Focus: Product requirements

This layer defines core functional, content, and data requirements, accounting for the probabilistic nature of AI and defining acceptable levels of performance and variance.

Key contributors: Product managers, design strategists, design researchers, business stakeholders, data scientists, trust & safety specialists

Design’s role: Design researchers and strategists contribute to requirements through generative and exploratory research. They help define error taxonomies and acceptable failure modes from a user perspective, informing metrics that capture technical performance and human experience quality.
Design strategists balance technical possibilities with human needs and ethical considerations.

Example actions/decisions/artifacts: Product requirements documents specifying reliability thresholds, data requirements, error taxonomies and acceptable failure modes, performance metrics frameworks, responsible AI requirements, risk assessment, core user stories and journeys, documentation of expected model variance and handling approaches.

Strategy

Pace of change: Very slow
Focus: Long-term vision and business goals

This foundation layer defines audience needs, core problems to solve, and business goals. In AI products, data strategy is central.

Key contributors: Executive leadership, design leaders, product leadership, business strategists, ethics boards

Design’s role: Design leaders define problem spaces, identify opportunities, and plan roadmaps. They balance business needs with human values in strategy development. Designers with expertise in responsible AI help establish ethical frameworks and guiding principles that shape all other layers.

Example actions/decisions/artifacts: Problem space and opportunity assessments, market positioning documents, long-term product roadmaps, comprehensive data strategy planning, user research findings on core needs, ethical frameworks and guiding principles, business model documentation, competitive/cooperative AI ecosystem mapping.

Practical examples: tension points between layers

Tension point example 1: Bookmuse’s timeline troubles

Bookmuse is a promising new AI tool for novelists. Samantha, a writer, tries it out while hashing out the underpinnings of her latest time-travel historical fiction thriller. The Bookmuse team planned for plenty of Samantha’s needs. At first, she considers Bookmuse a handy assistant.
It supplements chats with tailored interactive visualizations that efficiently track character personalities, histories, relationships, and dramatic arcs.

But Samantha is writing a story about time travelers interfering with World War I events, so she’s constantly juggling dates and timelines. Bookmuse falls short. It’s a tiny startup, and Luke, the harried cofounder who serves as a combination designer/researcher/product manager, hasn’t carved out any date-specific timeline tools or date calculators. He forgot to provide even a basic date picker in the design system.

Problem: Bookmuse does its best to help Samantha with her story timeline. But it lacks effective tools for the job. Its date and time interactions feel confusing, clumsy, and out of step with the rest of its tone, look, and feel. Whenever Samantha consults the timeline, it breaks her out of her creative flow.

Constructive turbulence opportunities:

a) Present feedback mechanisms that ensure this sort of “missing piece” event results in the product team learning about the type of interaction pothole that appeared — without revealing details or content that compromise Samantha’s privacy and her work.

b) Improve timeline/date UI and interaction patterns. Table stakes: standard industry-best-practice date picker components that suit Bookmuse’s style, tone, and voice. Game changers: widgets, visualizations, and patterns tailored to the special time-tracking/exploration challenges that fiction writers often wrestle with.

c) Update the core usability heuristics and universal interaction design patterns baked into the evaluation frameworks, as part of regular eval reviews and updates.
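Opportunity (a) hinges on reporting the shape of the friction without the content. One hedged sketch of what such an event payload might look like — every field name here is invented for illustration, not anything Bookmuse actually ships:

```python
import hashlib
import json

def make_friction_event(feature_area: str, interaction_type: str,
                        session_id: str) -> dict:
    """Build a telemetry event recording *what kind* of interaction
    fell short, while keeping the writer's content out of the payload."""
    return {
        "event": "interaction_gap",
        "feature_area": feature_area,          # e.g. "timeline"
        "interaction_type": interaction_type,  # e.g. "date_entry"
        # One-way hash so repeat reports can be grouped per session
        # without identifying the user or their manuscript.
        "session": hashlib.sha256(session_id.encode()).hexdigest()[:12],
    }

event = make_friction_event("timeline", "date_entry", "samantha-session-42")
print(json.dumps(event, indent=2))  # no story content ever leaves the app
```

The team learns that date entry inside the timeline feature is a recurring pothole; Samantha’s manuscript and identity stay on her machine.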
Result: When the team learns about a friction moment like this, they can prevent a host of similar issues before they emerge. These improvements will make Bookmuse more resilient and useful.

Tension point example 2: MedicalMind’s diagnostic dilemma

Thousands of healthcare providers use MedicalMind, an AI-powered clinical decision support tool. Dr. Rina Patel, an internal medicine physician at a busy community hospital, relies on it to stay current with rapidly evolving medical research while managing her patient load.

Thanks to a groundbreaking update, a MedicalMind AI model is familiar with new medical research data and can recognize newly discovered connections between previously unrelated symptoms across different medical specialties. For example, it identified patterns linking certain dermatological symptoms to early indicators of cardiovascular issues — connections not yet widely recognized in standard medical taxonomies.

But MedicalMind’s information architecture was tailored to traditional medical classification systems: it’s organized by body system, conditions by specialty, and treatments by mechanism of action. The MedicalMind team constructed this structure based on how doctors were traditionally trained to approach medical knowledge.

Problem: When Dr. Patel enters a patient’s constellation of symptoms, MedicalMind’s AI can recognize potentially valuable cross-specialty patterns. But these insights can’t be optimally organized and presented because the underlying information architecture doesn’t easily accommodate the new findings and relationships. The AI either forces the insights into ill-fitting categories or presents them as disconnected “additional notes” that tend to be overlooked. That reduces their clinical utility and Dr.
Patel’s trust in the system.

Constructive turbulence opportunities:

a) Create an “emerging patterns” framework within the information architecture that can accommodate new AI-identified patterns in ways that augment, rather than disrupt, the familiar classification systems that doctors rely on.

b) Design flexible visualization components and interaction patterns and styles specifically for exploring, discussing, and documenting cross-category relationships. Let doctors toggle between traditional taxonomies and newer, AI-generated knowledge maps depending on their needs and comfort level.

c) Implement a clinician feedback loop where specialists can validate and discuss new AI-surfaced relationships, gradually promoting validated patterns into the main classification system.

These improvements will make MedicalMind more adaptive to emerging medical knowledge while maintaining the structural integrity that healthcare professionals rely on for critical decisions. That means more efficient assistance for clinicians and better health for patients.

Tension point example 3: ScienceSeeker’s hypothesis bottleneck

ScienceSeeker is an AI research assistant used by scientists worldwide. Dr. Elena Rodriguez, a molecular biologist, uses it to investigate protein interactions for targeted cancer drug delivery.

The AI engine recently gained the ability to generate sophisticated hypothesis trees with multiple competing explanations, track confidence levels for each branch, and identify which experiments would most efficiently disambiguate between theories. It can reason across scientific domains, connecting molecular biology with physics, chemistry, and computational modeling.

But the interface remains locked in a traditional chatbot paradigm — a single-threaded exchange with responses appearing sequentially in a scrolling window.

Problem: The AI engine and the problem space are natively multithreaded and multimodal, but the UI is limited to single-threaded conversation. When Dr.
Rodriguez inputs her experimental results, the AI generates a rich, multidimensional analysis but must flatten this complex reasoning into linear text. Critical relationships between hypotheses become buried in paragraphs, probability comparisons are difficult, and the holistic picture of how variables influence multiple hypotheses is lost. Dr. Rodriguez resorts to taking screenshots and manually drawing diagrams to reconstruct the reasoning that the AI possesses but cannot visually express.

Constructive turbulence opportunities:

a) Develop an expandable, interactive, infinite-canvas “hypothesis tree” visualization that helps the AI dynamically represent multiple competing explanations and their relationships. Scientists can interact with this to explore different branches spatially rather than sequentially.

b) Create a dual-pane interface that maintains the chat for simple queries but provides the infinite canvas for complex reasoning, transitioning seamlessly based on response complexity.

c) Implement collaborative, interactive node-based diagrams for multi-contributor experiment planning, where potential experiments appear as nodes showing how they would affect confidence in different hypothesis branches.

This would transform ScienceSeeker’s limited text assistant into a scientific reasoning partner.
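To ground opportunity (a): the canvas needs a structure richer than a chat transcript to render from. A minimal, purely illustrative sketch of such a hypothesis tree, with invented names and confidences:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One branch in a hypothesis tree: a claim, the engine's confidence
    in it, and the competing sub-explanations beneath it."""
    claim: str
    confidence: float                         # 0-1, tracked by the AI engine
    children: list["Hypothesis"] = field(default_factory=list)

def best_branch(node: Hypothesis) -> Hypothesis:
    """Walk greedily toward the highest-confidence leaf, the path a
    canvas UI might highlight by default."""
    while node.children:
        node = max(node.children, key=lambda h: h.confidence)
    return node

tree = Hypothesis("Protein X mediates uptake", 0.9, [
    Hypothesis("Binding site A drives affinity", 0.55),
    Hypothesis("Conformational change enables transport", 0.72),
])
print(best_branch(tree).claim)
```

A structure like this is what the dual-pane canvas in (b) would lay out spatially, instead of flattening it into paragraphs.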
It would help researchers visualize and interact with complex possibilities in ways that better fit how they tackle multidimensional problems.

Navigating the future with AI Pace Layers

AI Pace Layers offers product teams a new framework for seeing and shaping the bewildering structures and dynamics that power AI products. By recognizing the evolving layers and heeding and designing for their boundaries, AI design teams can:

• Transform tension points into constructive innovation
• Anticipate friction before it damages the product experience
• Grow resilient and humane AI systems that absorb and integrate rapid technological change without losing sight of human needs.

The framework’s value isn’t in rigid categorization, but in recognizing how components interact across timescales. For AI product teams, this awareness enables more thoughtful design choices that prevent the destructive shearing that can tear apart an AI system.

This framework is a work in progress, evolving alongside the AI landscape it describes. I’d love to hear from you, especially if you’ve built successful AI products and have insights on how this model could better reflect your experience. Please drop me a line or add a comment. Let’s develop more effective approaches to creating AI systems that enhance human potential while respecting human agency.

Part of the Mindful AI Design series. Also see:

The effort paradox in AI design: Why making things too easy can backfire
Black Mirror: “Override”. Dystopian storytelling for humane AI design

Stay updated: Subscribe to be notified when new articles in the series are published. Join our community of designers, product managers, founders, and ethicists as we shape the future of mindful AI design.

AI Pace Layers: a framework for resilient product design was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
    UXDESIGN.CC
    AI Pace Layers: a framework for resilient product design
    Designing human-centered AI products can be arduous.Keeping up with the overall pace of change isn’t easy. But here’s a bigger challenge:The wildly different paces of change attached to the key elements of AI product strategy, design, and development can make managing those elements — and even thinking about them — overwhelming.Yesterday’s design processes and frameworks offer priceless guidance that still holds. But in many spots, they just don’t fit today’s environment.For instance, designers used to map out and user-test precise, predictable end-to-end screen flows. But flows are no longer precisely predictable. AI generates dynamic dialogues and custom-tailored flows on the fly, rendering much of the old practice unhelpful and infeasible.It’s easy for product teams to feel adrift nowadays — we can hoist the sails, but we’re missing a map and a rudder. We need frameworks tailored to the traits that fundamentally set AI apart from traditional software, including:its capabilities for autonomy and collaboration,its probabilistic nature,its early need for quality data, andits predictable unpredictability. 
Humans tend to be perpetually surprised by its abilities — and its inabilities.AI pace layers: design for resilienceHere’s a framework to address these challenges.Building on Stewart Brand’s “Shearing Layers” framework, AI Pace Layers helps teams grow thriving AI products by framing them as layered systems with components that function and evolve at different timescales.It helps anticipate points of friction and create resilient and humane products.Each layer represents a specific domain of activity and responsibility, with a distinct pace of change.* Unlike the other layers, Services cuts across multiple layers rather than sitting between them, and its pace of change fluctuates erratically.Boundaries between layers call for special attention and care — friction at these points can produce destructive shearing and constructive turbulence.I’ll dive deeper into this framework with some practical examples showing how it works. But first, a brief review of the precursors that inspired this framework will help you put it to good use.The foundationsThis model builds on the insights of several influential design frameworks from the professions of building architecture and traditional software design.Shearing layers (Duffy and Brand)In his 1994 book How Buildings Learn, Stewart Brand expanded on architect Frank Duffy’s concept of shearing layers. The core insight: buildings consist of components that change at different rates.Shell, Services, Scenery, and Sets. (Frank Duffy, 1992).“…there isn’t any such thing as a building. A building properly conceived is several layers of longevity of built components.” — Frank DuffyShearing Layers of Change, from How Buildings Learn: What Happens after they’re built (Stewart Brand, 1994).Expanding on Duffy’s work, Brand identified six layers, from the slow-changing “Site” to the rapidly evolving “Stuff.”As the layers move at different speeds, friction forms where they meet. 
Buildings designed without mindful consideration of these different velocities tear themselves apart at these "shearing" points. Before long, they tend to be demolished and replaced. Buildings designed for resiliency allow for "slippage" between the moving layers — flexibility for the different rates of change to unfold with minimal conflict. Such buildings can thrive and remain useful for hundreds of years.

Pace layers (Brand)

In 1999, Brand drew insights from ecologists to expand this concept beyond buildings to encompass human society. In The Clock of the Long Now: Time and Responsibility, he proposed "Pace Layers" — six levels ranging from rapid fashion to glacially slow nature.

Brand's Pace Layers (1999) as sketched by Jono Hey.

Brand again pointed out the boundaries, where the most intriguing and consequential changes emerge. Friction at the tension points can tear a building apart — or spur a civilization's collapse — when we try to bind the layers too tightly together. But with mindful design and planning for slippage, activity along these boundary zones can also generate "constructive turbulence" that keeps systems balanced and resilient.

The most successful systems survive and thrive through times of change by absorbing and incorporating shocks.

"…a few scientists (such as R. V. O'Neill and C. S. Holling) have been probing the same issue in ecological systems: how do they manage change, how do they absorb and incorporate shocks? The answer appears to lie in the relationship between components in a system that have different change-rates and different scales of size. Instead of breaking under stress like something brittle, these systems yield as if they were soft. Some parts respond quickly to the shock, allowing slower parts to ignore the shock and maintain their steady duties of system continuity." — Stewart Brand

Roles and tendencies of the fast (upper) and slow (lower) layers. (Brand.)

Slower layers provide constraints and underpinnings for the faster layers, while faster layers induce adaptations in the slower layers that evolve the system.

Elements of UX (Garrett)

Jesse James Garrett's classic The Elements of User Experience (2002) presents a five-layer model for digital design:

• Surface (visual design)
• Skeleton (interface design, navigation design, information design)
• Structure (interaction design, information architecture)
• Scope (functional specs, content requirements)
• Strategy (user needs, site objectives)

Each layer answers a different set of questions, with the questions answered at each level setting constraints for the levels above. Lower layers set boundaries and underpinnings that help define the more concrete layers.

Jesse James Garrett's five layers from The Elements of User Experience (2002).

This framework doesn't focus on time, or on tension points resulting from conflicting velocities. But it provides a comprehensive structure for shaping different aspects of digital product design, from abstract strategy to concrete surface elements.

AI Pace Layers: diving deeper

Building on these foundations, the AI Pace Layers framework adapts these concepts specifically for AI systems design. Let's explore each layer and understand how design expertise contributes across the framework.

Sessions

Pace of change: Very fast (milliseconds to minutes)
Focus: Performance of real-time interactions

This layer encompasses real-time dialogue, reasoning, and processing. These interplays happen between the user and AI, and between AI agents and other services and people on behalf of the user. Sessions draw on lower-layer capabilities and components to deliver the "moments of truth" where product experiences succeed or fail.
Feedback from the Sessions layer is crucial for improving and evolving the lower layers.

Key contributors: Users and AI agents — usually with zero direct human involvement backstage.

Example actions/decisions/artifacts: User/AI dialogue. Audio, video, text, images, and widgets rendered on the fly (using building blocks provided by lower levels). Real-time adaptations to context.

Skin

Pace of change: Moderately fast (days to months)
Focus: Design patterns, guidelines, and assets

Skin encompasses visual, interaction, and content design.

Key contributors: Designers, content strategists, front-end developers, and user researchers.

Design's role: This is where designers' traditional expertise shines. They craft the interface elements, establish visual language, define interaction patterns, and create the design systems that represent the product's capabilities to users.

Example actions/decisions/artifacts: UI component libraries, brand guidelines, prompt templates, tone-of-voice guidelines, navigation systems, visual design systems, patterns (UI, interaction, and conversation), content style guides.

Services

Pace of change: Wildly variable (slow to moderately fast)
Focus: AI computation capabilities, data systems orchestration, and operational intelligence

The Services layer provides probabilistic AI capabilities that sometimes feel like superpowers — and like superpowers, they can be difficult to manage. It encompasses foundation models, algorithms, data pipelines, evaluation frameworks, business logic, and computing resources.

Services is an outlier that behaves differently from the other layers:

• It's more prone to "shocks" and surprises that can ripple across the rest of the system.
• It varies wildly in pace of change. (But its components rarely change faster than Skin, or slower than Skeleton.)
• It cuts across multiple layers rather than sitting between two of them.
That produces more cross-layer boundaries, more tension points, more risks of destructive friction, and more opportunities for constructive turbulence.

Key contributors: Data scientists, engineers, service designers, ethicists, product teams.

Design's role: Designers partner with technical teams on evaluation frameworks, helping define what "good" looks like from a human experience perspective. They contribute to guardrails, monitoring systems, and multi-agent collaboration patterns, ensuring technical capabilities translate to meaningful human experiences. Service design expertise helps orchestrate complex, multi-touchpoint AI capabilities.

Example actions/decisions/artifacts: Foundation model selection, changes, and fine-tuning. Evals, monitoring systems, guardrails, performance metrics. Business rules, workflow orchestration. Multi-agent collaboration and use of external tools (APIs, A2A, MCP, etc.). Continual appraisal and adoption of new tools, protocols, and capabilities.

Skeleton

Pace of change: Moderately slow (months)
Focus: Fundamental structure and organization

This layer establishes the foundational architecture — the core interaction models, information architecture, and organizing principles.

Key contributors: Information architects, information designers, user researchers, system architects, engineers.

Design's role: Designers with information architecture expertise are important in this layer. They design taxonomies, knowledge graphs, and classification systems that make complex AI capabilities comprehensible and usable. UX researchers help ensure these structures fit the audience's mental models, contexts, and expectations.

Example actions/decisions/artifacts: Taxonomies, knowledge graphs, data models, system architecture, classification systems.

Scope

Pace of change: Slow (months to years)
Focus: Product requirements

This layer defines core functional, content, and data requirements, accounting for the probabilistic nature of AI and defining acceptable levels of performance and variance.

Key contributors: Product managers, design strategists, design researchers, business stakeholders, data scientists, trust & safety specialists.

Design's role: Design researchers and strategists contribute to requirements through generative and exploratory research. They help define error taxonomies and acceptable failure modes from a user perspective, informing metrics that capture technical performance and human experience quality. Design strategists balance technical possibilities with human needs and ethical considerations.

Example actions/decisions/artifacts: Product requirements documents specifying reliability thresholds, data requirements (volume, diversity, quality standards), error taxonomies and acceptable failure modes, performance metrics frameworks, responsible AI requirements, risk assessment, core user stories and journeys, documentation of expected model variance and handling approaches.

Strategy

Pace of change: Very slow (years)
Focus: Long-term vision and business goals

This foundation layer defines audience needs, core problems to solve, and business goals. In AI products, data strategy is central.

Key contributors: Executive leadership, design leaders, product leadership, business strategists, ethics boards.

Design's role: Design leaders define problem spaces, identify opportunities, and plan roadmaps. They balance business needs with human values in strategy development.
Designers with expertise in responsible AI help establish ethical frameworks and guiding principles that shape all other layers.

Example actions/decisions/artifacts: Problem space and opportunity assessments, market positioning documents, long-term product roadmaps, comprehensive data strategy planning, user research findings on core needs, ethical frameworks and guiding principles, business model documentation, competitive/cooperative AI ecosystem mapping.

Practical examples: tension points between layers

Tension point example 1: Bookmuse's timeline troubles

(Friction between Sessions and Skin)

Bookmuse is a promising new AI tool for novelists. Samantha, a writer, tries it out while hashing out the underpinnings of her latest time-travel historical fiction thriller. The Bookmuse team planned for plenty of Samantha's needs. At first, she considers Bookmuse a handy assistant. It supplements chats with tailored interactive visualizations that efficiently track character personalities, histories, relationships, and dramatic arcs.

But Samantha is writing a story about time travelers interfering with World War I events, so she's constantly juggling dates and timelines. Here, Bookmuse falls short. It's a tiny startup, and Luke, the harried cofounder who serves as a combination designer/researcher/product manager, hasn't carved out any date-specific timeline tools or date calculators. He forgot to provide even a basic date picker in the design system.

Problem: Bookmuse does its best to help Samantha with her story timeline (Sessions layer). But it lacks effective tools for the job (Skin layer). Its date and time interactions feel confusing, clumsy, and out of step with the rest of its tone, look, and feel. Whenever Samantha consults the timeline, it breaks her out of her creative flow.

Constructive turbulence opportunities:

a) Present feedback mechanisms that ensure this sort of "missing piece" event results in the product team learning about the type of interaction pothole that appeared — without revealing details or content that compromise Samantha's privacy and her work. (For instance, a session tagging system can flag all interaction dead-ends during date choice interactions.)

b) Improve timeline/date UI and interaction patterns. Table stakes: standard industry-best-practice date picker components that suit Bookmuse's style, tone, and voice. Game changers: widgets, visualizations, and patterns tailored to the special time-tracking/exploration challenges that fiction writers often wrestle with.

c) Update the core usability heuristics and universal interaction design patterns baked into the evaluation frameworks (in the Services layer), as part of regular eval reviews and updates. Result: when the team learns about a friction moment like this, they can prevent a host of similar issues before they emerge.

These improvements will make Bookmuse more resilient and useful.

Tension point example 2: MedicalMind's diagnostic dilemma

(Friction between Services and Skeleton)

Thousands of healthcare providers use MedicalMind, an AI-powered clinical decision support tool. Dr. Rina Patel, an internal medicine physician at a busy community hospital, relies on it to stay current with rapidly evolving medical research while managing her patient load.

Thanks to a groundbreaking update, a MedicalMind AI model (Services layer) is familiar with new medical research data and can recognize newly discovered connections between previously unrelated symptoms across different medical specialties.
For example, it identified patterns linking certain dermatological symptoms to early indicators of cardiovascular issues — connections not yet widely recognized in standard medical taxonomies.

But MedicalMind's information architecture (Skeleton layer) was tailored to traditional medical classification systems: symptoms organized by body system, conditions by specialty, and treatments by mechanism of action. The MedicalMind team constructed this structure based on how doctors were traditionally trained to approach medical knowledge.

Problem: When Dr. Patel enters a patient's constellation of symptoms (Sessions layer), MedicalMind's AI can recognize potentially valuable cross-specialty patterns (Services layer). But these insights can't be optimally organized and presented because the underlying information architecture (Skeleton layer) doesn't easily accommodate the new findings and relationships. The AI either forces the insights into ill-fitting categories or presents them as disconnected "additional notes" that tend to be overlooked. That reduces their clinical utility and Dr. Patel's trust in the system.

Constructive turbulence opportunities:

a) Create an "emerging patterns" framework within the information architecture (Skeleton layer) that can accommodate new AI-identified patterns in ways that augment, rather than disrupt, the familiar classification systems that doctors rely on.

b) Design flexible visualization components and interaction patterns and styles (in the Skin layer) specifically for exploring, discussing, and documenting cross-category relationships.
Let doctors toggle between traditional taxonomies and newer, AI-generated knowledge maps depending on their needs and comfort level.

c) Implement a clinician feedback loop where specialists can validate and discuss new AI-surfaced relationships, gradually promoting validated patterns into the main classification system.

These improvements will make MedicalMind more adaptive to emerging medical knowledge while maintaining the structural integrity that healthcare professionals rely on for critical decisions. That means more efficient assistance for clinicians and better health for patients.

Tension point example 3: ScienceSeeker's hypothesis bottleneck

(Friction between Skin and Services)

ScienceSeeker is an AI research assistant used by scientists worldwide. Dr. Elena Rodriguez, a molecular biologist, uses it to investigate protein interactions for targeted cancer drug delivery.

The AI engine (Services layer) recently gained the ability to generate sophisticated hypothesis trees with multiple competing explanations, track confidence levels for each branch, and identify which experiments would most efficiently disambiguate between theories. It can reason across scientific domains, connecting molecular biology with physics, chemistry, and computational modeling.

But the interface (Skin layer) remains locked in a traditional chatbot paradigm — a single-threaded exchange with responses appearing sequentially in a scrolling window.

Problem: The AI engine and the problem space are natively multithreaded and multimodal, but the UI is limited to single-threaded conversation. When Dr. Rodriguez inputs her experimental results (Sessions layer), the AI generates a rich, multidimensional analysis (Services layer), but must flatten this complex reasoning into linear text (Skin layer). Critical relationships between hypotheses become buried in paragraphs, probability comparisons are difficult, and the holistic picture of how variables influence multiple hypotheses is lost. Dr. Rodriguez resorts to taking screenshots and manually drawing diagrams to reconstruct the reasoning that the AI possesses but cannot visually express.

Constructive turbulence opportunities:

a) Develop an expandable, interactive, infinite-canvas "hypothesis tree" visualization (Skin) that helps the AI dynamically represent multiple competing explanations and their relationships. Scientists can interact with this to explore different branches spatially rather than sequentially.

b) Create a dual-pane interface that maintains the chat for simple queries but provides the infinite canvas for complex reasoning, transitioning seamlessly based on response complexity.

c) Implement collaborative, interactive node-based diagrams for multi-contributor experiment planning, where potential experiments appear as nodes showing how they would affect confidence in different hypothesis branches.

This would transform ScienceSeeker from a limited text assistant into a scientific reasoning partner. It would help researchers visualize and interact with complex possibilities in ways that better fit how they tackle multidimensional problems.

Navigating the future with AI Pace Layers

AI Pace Layers offers product teams a new framework for seeing and shaping the bewildering structures and dynamics that power AI products. By recognizing the evolving layers and heeding and designing for their boundaries, AI design teams can:

• Transform tension points into constructive innovation
• Anticipate friction before it damages the product experience
• Grow resilient and humane AI systems that absorb and integrate rapid technological change without losing sight of human needs

The framework's value isn't in rigid categorization, but in recognizing how components interact across timescales.
For AI product teams, this awareness enables more thoughtful design choices that prevent the destructive shearing that can tear an AI system apart.

This framework is a work in progress, evolving alongside the AI landscape it describes. I'd love to hear from you, especially if you've built successful AI products and have insights on how this model could better reflect your experience. Please drop me a line or add a comment. Let's develop more effective approaches to creating AI systems that enhance human potential while respecting human agency.

Part of the Mindful AI Design series. Also see:

• The effort paradox in AI design: Why making things too easy can backfire
• Black Mirror: "Override". Dystopian storytelling for humane AI design

AI Pace Layers: a framework for resilient product design was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Avowed game director Carrie Patel joins Night School Studios

    Patel was at Obsidian Entertainment for nearly 12 years, working on The Outer Worlds and Pillars of Eternity

    News

    by Sophie McEvoy
    Staff Writer

    Published on May 21, 2025

    Former Avowed game director Carrie Patel has joined Netflix-owned Night School Studio.
    Announcing the news on LinkedIn, Patel gave no hint as to what project she'll be directing at the Oxenfree developer.
    Patel previously worked at Obsidian Entertainment for almost 12 years, most recently directing its action RPG Avowed. She joined the developer in 2013 as narrative designer, working on Pillars of Eternity and its White March expansions.
    Three years later, she was narrative co-lead on Pillars of Eternity 2: Deadfire.
    She then became narrative designer in 2018, working on The Outer Worlds and its second expansion Murder on Eridanos. She also directed the game's first expansion Peril on Gorgon.
    Night School Studio became Netflix's first games acquisition in 2021. At the time, the streaming giant confirmed the developer would continue working on Oxenfree 2: Lost Signals, which launched in July 2023.
    GamesIndustry.biz spoke to the studio following the release of the sequel, with its founder and studio director Sean Krankel discussing how Netflix has changed the developer.
    Earlier this year, Night School Studio reportedly cut an undisclosed number of staff. Netflix also cancelled six of its game launches to "adjust [its] portfolio" to better suit its subscribers.
    This included Compass Point: West, Thirsty Suitors, and Tales of the Shire: A Lord of the Rings Game.
  • Avowed game director Carrie Patel leaves Obsidian to join Netflix's gaming division

    Patel joined Obsidian in 2013 and served as a narrative designer on Pillars of Eternity and The Outer Worlds.
    HITMARKER.NET
  • Avowed Director Quits Obsidian After More Than a Decade for Job at Netflix-Owned Oxenfree Studio

    Avowed director Carrie Patel has quit legendary RPG company Obsidian Entertainment, just months after its most recent game's launch.
    In an update to her LinkedIn page, Patel revealed she had begun a new job at Night School, the Netflix-owned developer behind the Oxenfree series of narrative adventures.
    "I'm happy to share that I'm starting a new position as Game Director at Night School: A Netflix Game Studio!" Patel wrote in a brief update. Patel's new role at Night School will again be as a game director, though what she's working on remains unannounced.
    Night School is most famous for its Oxenfree series of games, the most recent being 2023's Oxenfree 2: Lost Signals. The Netflix-owned studio launched Black Mirror spin-off Thronglets earlier this year, around the same time it suffered an unknown number of layoffs. Months before, Netflix completely shut down another of its studios, which had been working on a AAA game project headed up by Halo veteran Joseph Staten.
    Patel had been a veteran of Avowed developer Obsidian, and over 11 years worked in various senior positions on games such as Xbox sci-fi RPG The Outer Worlds and the classic Pillars of Eternity series.
    More recently, Patel had taken on the reins of directing Avowed after the game was rebooted early in its development. Avowed had initially been planned with a darker fantasy setting closer to The Elder Scrolls, with one big open world and multiplayer co-op. Ultimately, Patel steered the game to launch as a brighter, more unique-looking experience, now featuring multiple large individual maps to explore, and an entirely single-player experience.
    The response to Avowed was mostly positive, and Patel had initially discussed plans for the franchise to continue — either with expansions, a fully-fledged sequel, or both. Now, however, Patel won't be part of that future.
    An Avowed development roadmap announced last week detailed an array of mostly minor additions coming for free over the coming six months, including a Photo Mode and New Game Plus offering.
    "With awesome worldbuilding and stellar character writing, Avowed reminds me why I fell in love with Obsidian's RPGs in the first place," reads IGN's Avowed review. "However, the bigger picture is that it plays it quite safe, with a by-the-numbers fantasy adventure that's more familiar than evolutionary."
    Obsidian's next project to launch is The Outer Worlds 2, which is due to be shown off in detail at this year's Xbox Games Showcase in June.
    Tom Phillips is IGN's News Editor. You can reach Tom at tom_phillips@ign.com or find him on Bluesky @tomphillipseg.bsky.social