• What Zen And The Art Of Motorcycle Maintenance Can Teach Us About Web Design

    I think we, as engineers and designers, have a lot to gain by stepping outside of our worlds. That’s why in previous pieces I’ve been drawn towards architecture, newspapers, and the occasional polymath. Today, we stumble blindly into the world of philosophy. Bear with me. I think there’s something to it.
    In 1974, the American philosopher Robert M. Pirsig published a book called Zen and the Art of Motorcycle Maintenance. A flowing blend of autobiography, road trip diary, and philosophical musings, the book’s ‘chautauqua’ is an interplay between art, science, and self. Its outlook on life has stuck with me since I read it.
    The book often feels prescient, at times surreal to read given it’s now 50 years old. Pirsig’s reflections on arts vs. sciences, subjective vs. objective, and systems vs. people translate seamlessly to the digital age. There are lessons there that I think are useful when trying to navigate — and build — the web. Those lessons are what this piece is about.
    I feel obliged at this point to echo Pirsig and say that what follows should in no way be associated with the great body of factual information about Zen Buddhist practice. It’s not very factual in terms of web development, either.
    Buddha In The Machine
    Zen is written in stages. It sets a scene before making its central case. That backdrop is important, so I will mirror it here. The book opens with the start of a motorcycle road trip undertaken by Pirsig and his son. It’s a winding journey that takes them most of the way across the United States.
    Despite the trip being in part characterized as a flight from the machine, from the industrial ‘death force’, Pirsig takes great pains to emphasize that technology is not inherently bad or destructive. Treating it as such actually prevents us from finding ways in which machinery and nature can be harmonious.
    Granted, at its worst, the technological world does feel like a death force. In the book’s 1970s backdrop, it manifests as things like efficiency, profit, optimization, automation, growth — the kinds of words that, when we read them listed together, a part of our soul wants to curl up in the fetal position.
    In modern tech, those same forces apply. We might add things like engagement and tracking to them. Taken to the extreme, these forces contribute to the web feeling like a deeply inhuman place. Something cold, calculating, and relentless, yet without a fire in its belly. Impersonal, mechanical, inhuman.
    Faced with these forces, the impulse is often to recoil. To shut our laptops and wander into the woods. However, there is a big difference between clearing one’s head and burying it in the sand. Pirsig argues that “Flight from and hatred of technology is self-defeating.” To throw our hands up and step away from tech is to concede to the power of its more sinister forces.
    “The Buddha, the Godhead, resides quite as comfortably in the circuits of a digital computer or the gears of a cycle transmission as he does at the top of a mountain or in the petals of a flower. To think otherwise is to demean the Buddha — which is to demean oneself.”— Robert M. Pirsig

    Before we can concern ourselves with questions about what we might do, we must try our best to marshal how we might be. We take our heads and hearts with us wherever we go. If we characterize ourselves as powerless pawns, then that is what we will be.

    Where design and development are concerned, that means residing in the technology without losing our sense of self — or power. Technology is only as good or evil, as useful or as futile, as the people shaping it. Be it the internet or artificial intelligence, to direct blame or ire at the technology itself is to absolve ourselves of the responsibility to use it better. It is better not to demean oneself, I think.
    So, with the Godhead in mind, to business.
    Classical And Romantic
    A core concern of Zen and the Art of Motorcycle Maintenance is the tension between the arts and sciences. The two worlds have a long, rich history of squabbling and dysfunction. There is often mutual distrust, suspicion, and even hostility. This, again, is self-defeating. Hatred of technology is a symptom of it.
    “A classical understanding sees the world primarily as the underlying form itself. A romantic understanding sees it primarily in terms of immediate appearance.”— Robert M. Pirsig

    If we were to characterize the two as bickering siblings, familiar adjectives might start to appear:

    Classical | Romantic
    Dull | Frivolous
    Awkward | Irrational
    Ugly | Erratic
    Mechanical | Untrustworthy
    Cold | Fleeting

    Anyone in the world of web design and development will have come up against these kinds of standoffs. Tensions arise between testing and intuition, best practices and innovation, structure and fluidity. Is design about following rules or breaking them?
    Treating such questions as binary is a fallacy. In doing so, we place ourselves in adversarial positions, whatever we consider ourselves to be. The best work comes from these worlds working together — from recognising they are bound.
    Steve Jobs was a famous advocate of this.
    “Technology alone is not enough — it’s technology married with liberal arts, married with the humanities, that yields us the result that makes our heart sing.”— Steve Jobs

    Whatever you may feel about Jobs himself, I think this sentiment is watertight. No one field holds all the keys. Leonardo da Vinci was a shining example of doing away with this needless siloing of worlds. He was a student of light, anatomy, art, architecture, everything and anything that interested him. And they complemented each other. Excellence is a question of harmony.
    Is a motorcycle a romantic or classical artifact? Is it a machine or a symbol? A series of parts or a whole? It’s all these things and more. To say otherwise does a disservice to the motorcycle and deprives us of its full beauty.

    Just by reframing the relationship in this way, the kinds of adjectives that come to mind naturally shift toward more harmonious territory.

    Classical | Romantic
    Organized | Vibrant
    Scalable | Evocative
    Reliable | Playful
    Efficient | Fun
    Replicable | Expressive

    And, of course, when we try thinking this way, the distinction itself starts feeling fuzzier. There is so much that they share.
    Pirsig posits that the division between the subjective and objective is one of the great missteps of the Greeks, one that has been embraced wholeheartedly by the West in the millennia since. That doesn’t have to be the lens, though. Perhaps monism, not dualism, is the way.
    In a sense, technology marks the ultimate interplay between the arts and the sciences, the classical and the romantic. It is the human condition brought to you with ones and zeros. To separate those parts of it is to tear apart the thing itself.

    The same is true of the web. Is it romantic or classical? Art or science? Structured or anarchic? It is all those things and more. Engineering at its best is where all these apparent contradictions meet and become one.
    What is this place? Well, that brings us to a core concept of Pirsig’s book: Quality.
    Quality
    The central concern of Zen and the Art of Motorcycle Maintenance is the ‘Metaphysics of Quality’. Pirsig argues that ‘Quality’ is where subjective and objective experience meet. Quality is at the knife edge of experience.
    “Quality is the continuing stimulus which our environment puts upon us to create the world in which we live. All of it. Every last bit of it.”— Robert M. Pirsig

    Pirsig’s writings overlap a lot with Taoism and Eastern philosophy, to the extent that he likens Quality to the Tao. Quality is similarly undefinable, with Pirsig himself making a point of not defining it. Like the Tao, Plato’s Form of the Good, or the ‘good taste’ to which GitHub cofounder Scott Chacon recently attributed the platform’s success, it simply is.

    Despite its nebulous nature, Quality is something we recognise when we see it. Any given problem or question has an infinite number of potential solutions, but we are drawn to the best ones as water flows toward the sea. When in a hostile environment, we withdraw from it, responding to a lack of Quality around us.
    We are drawn to Quality, to the point at which subjective and objective, romantic and classical, meet. There is no map, there isn’t a bullet point list of instructions for finding it, but we know it when we’re there.
    A Quality Web
    So, what does all this look like in a web context? How can we recognize and pursue Quality for its own sake and resist the forces that pull us away from it?
    There are a lot of ways in which the web is not what we’d call a Quality environment. When we use social media sites with algorithms designed around provocation rather than communication, when we’re assailed with ads to such an extent that content feels (and often is) secondary, and when AI-generated slop replaces artisanal craft, something feels off. We feel the absence of Quality.
    Here are a few habits that I think work in the service of more Quality on the web.
    Seek To Understand How Things Work
    I’m more guilty than anyone of diving into projects without taking time to step back and assess what I’m actually dealing with. As you can probably guess from the title, a decent amount of time in Zen and the Art of Motorcycle Maintenance is spent with the author as he tinkers with his motorcycle. Keeping it tuned up and in good repair makes it work better, of course, but the practice has deeper, more understated value, too. It lends itself to understanding.
    To maintain a motorcycle, one must have some idea of how it works. To take an engine apart and put it back together, one must know what each piece does and how it connects. For Pirsig, this process becomes almost meditative, offering perspective and clarity. The same is true of code. Rushing to the quick fix, be it due to deadlines or lethargy, will, at best, lead to a shoddy result and, in all likelihood, make things worse.
    “Black boxes” are as much a choice not to learn as they are something innately mysterious or unknowable. One of the reasons the web feels so ominous at times is that we don’t know how it works. Why am I being recommended this? Why are ads about ivory backscratchers following me everywhere? The inner workings of web tracking or AI models may not always be available, but just about any concept can be understood in principle.
    So, in concrete terms:

    Read the documentation, for the love of god. Sometimes we don’t understand how things work because the manual’s bad; more often, it’s because we haven’t looked at it.
    Follow pipelines from their start to their finish. How does data get from point A to point Z? What functions does it pass through, and how do they work? (A sketch of this follows the list.)
    Do health work. Changing the oil in a motorcycle and bumping project dependencies amount to the same thing: a caring and long-term outlook. Shiny new gizmos are cool, but old ones that still run like a dream are beautiful.
    Always be studying. We are all works in progress, and clinging on to the way things were won’t make the brave new world go away. Be open to things you don’t know, and try not to treat those areas with suspicion.

    Bound up with this is nurturing a love for what might easily be mischaracterized as the ‘boring’ bits. Motorcycles are for road trips, and code powers products and services, but understanding how they work and tending to their inner workings will bring greater benefits in the long run.
    Reframe The Questions
    Much of the time, our work is understandably organized in terms of goals. OKRs, metrics, milestones, and the like help keep things organized and stuff happening. We shouldn’t get too hung up on them, though. Looking at the things we do in terms of Quality helps us reframe the process.
    The highest Quality solution isn’t always the same as the solution that performed best in A/B tests. The Dark Side of the Moon doesn’t exist because of focus groups. The test screenings for Se7en were dreadful. Reducing any given task to a single metric — or even a handful of metrics — hamstrings the entire process.
    Rory Sutherland suggests much the same thing in Are We Too Impatient to Be Intelligent? when he talks about looking at things as open-ended questions rather than reducing them to binary metrics to be optimized. Instead of fixating on making trains faster, wouldn’t it be more useful to ask, How do we improve their Quality?
    Challenge metrics. Good ones — which is to say, Quality ones — can handle the scrutiny. The bad ones deserve to crumble. Either way, you’re doing the world a service. With any given action you take on a website — from button design to database choices — ask yourself, Does this improve the Quality of what I’m working on? Not the bottom line. Not the conversion rate. Not egos. The Quality. Quality pulls us away from dark patterns and towards the delightful.
    The will to Quality is itself a paradigm shift. Aspiring to Quality removes a lot of noise from what is often a deafening environment. It may make things that once seemed big appear small.
    Seek To Wed Art With Science (And Whatever Else Fits The Bill)
    None of the above is to say that rules, best practices, conventions, and the like don’t have their place or are antithetical to Quality. They aren’t. To think otherwise is to slip into the kind of dualities Pirsig rails against in Zen.
    In a lot of ways, the main underlying theme in my What X Can Teach Us About Web Design pieces over the years has been how connected seemingly disparate worlds are. Yes, Vitruvius’s 1st-century tenets about architecture are useful to web design. Yes, newspapers can teach us much about grid systems and organising content. And yes, a piece of philosophical fiction from the 1970s holds many lessons about how to meet the challenges of artificial intelligence.
    Do not close your work off from atypical companions. Stuck on a highly technical problem? Perhaps a piece of children’s literature will help you to make the complicated simple. Designing a new homepage for your website? Look at some architecture.
    The best outcomes are harmonies of seemingly disparate worlds. Cling to nothing and throw nothing away.
    Make Time For Doing Nothing
    Here’s the rub. Just as Quality itself cannot be defined, the way to attain it is also not reducible to a neat bullet point list. Neither waterfall, agile, nor any other management framework holds the keys.
    If we are serious about putting Buddha in the machine, then we must allow ourselves time and space to not do things. Distancing ourselves from the myriad distractions of modern life puts us in states where the drift toward Quality is almost inevitable. In the absence of distracting forces, that’s where we head.

    Get away from the screen. We all have those moments where the solution to a problem appears as if out of nowhere. We may be on a walk or doing chores, then pop!
    Work on side projects. I’m not naive. I know some work environments are hostile to anything that doesn’t look like relentless delivery. Pet projects are ideal spaces for you to breathe. They’re yours, and you don’t have to justify them to anyone.

    As I go into more detail in “An Ode to Side Project Time,” there is immense good in non-doing, in letting the water clear. There is so much urgency, so much of the time. Stepping away from that is vital not just for well-being, but actually leads to better quality work too.
    From time to time, let go of your sense of urgency.
    Spirit Of Play
    Despite appearances, the web remains a deeply human experiment. The very best and very worst of our souls spill out into this place. It only makes sense, therefore, to think of the web — and how we shape it — in spiritual terms. We can’t leave those questions at the door.
    Zen and the Art of Motorcycle Maintenance has a lot to offer the modern web. It’s not a manifesto or a way of life, but it articulates an outlook on technology, art, and the self that many of us recognise on a deep, fundamental level. For anyone even vaguely intrigued by what’s been written here, I suggest reading the book. It’s much better than this article.
    Be inspired. So much of the web is beautiful. The highest-rated Awwwards profiles are just a fraction of the amazing things being made every day. Allow yourself to be delighted. Aspire to be delightful. Find things you care about and make them the highest form of themselves you can. And always do so in a spirit of play.
    We can carry those sentiments to the web. Do away with artificial divides between arts and science and bring out the best in both. Nurture a taste for Quality and let it guide the things you design and engineer. Allow yourself space for the water to clear in defiance of the myriad forces that would have you do otherwise.
    The Buddha, the Godhead, resides quite as comfortably in a social media feed or the inner machinations of cloud computing as at the top of a mountain or in the petals of a flower. To think otherwise is to demean the Buddha, which is to demean oneself.
    Other Resources

    Zen and the Art of Motorcycle Maintenance by Robert M. Pirsig
    The Beauty of Everyday Things by Soetsu Yanagi
    Tao Te Ching
    The Creative Act by Rick Rubin
    “Robert Pirsig & His Metaphysics of Quality” by Anthony McWatt
    “Dark Patterns in UX: How to Identify and Avoid Unethical Design Practices” by Daria Zaytseva

    Further Reading on Smashing Magazine

    “Three Approaches To Amplify Your Design Projects,” Olivia De Alba
    “AI’s Transformative Impact On Web Design: Supercharging Productivity Across The Industry,” Paul Boag
    “How A Bottom-Up Design Approach Enhances Site Accessibility,” Eleanor Hecks
    “How Accessibility Standards Can Empower Better Chart Visual Design,” Kent Eisenhuth
  • Abstracts: Zero-shot models in single-cell biology with Alex Lu

    Transcript
    GRETCHEN HUIZINGA: Welcome to Abstracts, a Microsoft Research Podcast that puts the spotlight on world-class research in brief. I’m Gretchen Huizinga. In this series, members of the research community at Microsoft give us a quick snapshot – or a podcast abstract – of their new and noteworthy papers. On today’s episode, I’m talking to Alex Lu, a senior researcher at Microsoft Research and co-author of a paper called Assessing the Limits of Zero-Shot Foundation Models in Single-cell Biology. Alex Lu, wonderful to have you on the podcast. Welcome to Abstracts!

    ALEX LU: Yeah, I’m really excited to be joining you today. 
    HUIZINGA: So let’s start with a little background of your work. In just a few sentences, tell us about your study and more importantly, why it matters. 
    LU: Absolutely. And before I dive in, I want to give a shout-out to the MSR research intern who actually did this work. This was led by Kasia Kedzierska, who interned with us two summers ago in 2023, and she’s the lead author on the study. But basically, in this research, we study single-cell foundation models, which have really recently rocked the world of biology, because they basically claim to be able to use AI to unlock understanding about single-cell biology. For a myriad of applications, everything from understanding how single cells differentiate into different kinds of cells to discovering new drugs for cancer, biologists will conduct experiments where they measure how much of every gene is expressed inside of just one single cell. So these experiments give us a powerful view into the cell’s internal state. But measurements from these experiments are incredibly complex. There are about 20,000 different human genes. So you get this really long chain of numbers that measure how much there is of 20,000 different genes. So deriving meaning from this really long chain of numbers is really difficult. And single-cell foundation models claim to be capable of unraveling deeper insights than ever before. So that’s the claim that these works have made. And in our recent paper, we showed that these models may actually not live up to these claims. Basically, we showed that single-cell foundation models perform worse in settings that are fundamental to biological discovery than much simpler machine learning and statistical methods that were used in the field before single-cell foundation models emerged and are the go-to standard for unpacking meaning from these complicated experiments. So in a nutshell, we should care about these results because they have implications for the toolkits that biologists use to understand their experiments. Our work suggests that single-cell foundation models may not be appropriate for practical use just yet, at least in the discovery applications that we cover.
    HUIZINGA: Well, let’s go a little deeper there. Generative pre-trained transformer models, GPTs, are relatively new on the research scene in terms of how they’re being used in novel applications, which is what you’re interested in, like single-cell biology. So I’m curious, just sort of as a foundation, what other research has already been done in this area, and how does this study illuminate or build on it? 
    LU: Absolutely. Okay, so we were the first to notice and document this issue in single-cell foundation models specifically, and that’s because we proposed evaluation methods that, while common in other areas of AI, had yet to be commonly used to evaluate single-cell foundation models. We performed something called zero-shot evaluation on these models. Prior to our work, most works evaluated single-cell foundation models with fine-tuning. The way to understand this is that single-cell foundation models are trained in a way that tries to expose them to millions of single cells. But because you’re exposing them to such a large amount of data, you can’t really rely on that data being annotated or labeled in any particular fashion. So in order for these models to actually do the specialized tasks that are useful for biologists, you typically have to add a second training phase. We call this the fine-tuning phase: you have a smaller number of single cells, but now they are actually labeled for the specialized task you want the model to perform. So most people typically evaluate the performance of single-cell models after they fine-tune them. However, what we noticed is that evaluating these fine-tuned models has several problems. First, it might not actually align with how these models are going to be used by biologists. A critical distinction in biology is that we’re not just trying to interact with an agent that has access to knowledge through its pre-training; we’re trying to extend these models to discover new biology beyond that sphere of influence. In many cases, the point of using these models, the point of the analysis, is to explore the data with the goal of potentially discovering something new about the single cells the biologists worked with that they weren’t aware of before. In these kinds of cases, it is really tough to fine-tune a model. There’s a bit of a chicken-and-egg problem going on: if you don’t know, for example, that there’s a new kind of cell in the data, you can’t really instruct the model to help identify those new cells. In other words, fine-tuning these models for those tasks essentially becomes impossible. The second issue is that evaluations of fine-tuned models can sometimes mislead us in our ability to understand how these models are working. For example, the claim behind single-cell foundation model papers is that these models learn a foundation of biological knowledge by being exposed to millions of single cells in the first training phase, right? But when you fine-tune a model, it may just be that any performance increases you see are simply because you’re using a massive, really sophisticated model; even with hardly any exposure to cells at all, that model may do perfectly fine. So going back to our paper, what’s really different about it is that we propose zero-shot evaluation for these models. What that means is that we do not fine-tune the model at all; instead, we keep the model frozen during the analysis step. How we specialize it to a downstream task instead is by extracting the model’s internal embedding of single-cell data, which is essentially a numerical vector that contains the information the model is extracting and organizing from the input data.
So it’s essentially how the model perceives single-cell data and how it organizes it in its own internal state. Basically, this is a better way for us to test the claim that single-cell foundation models are learning foundational biological insights, because if they actually are learning these insights, those insights should be present in the model’s embedding space even before we fine-tune the model.
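    As a rough sketch of what this zero-shot protocol looks like in code: the model stays frozen and we only read out its internal embedding of each cell. The `encode` method below is a hypothetical stand-in for whichever foundation model is under test, not an API from the paper, and the PyTorch framing is an assumption.

```python
# A sketch of zero-shot embedding extraction, assuming a PyTorch-style
# model with a hypothetical `encode` method. No weights are updated.
import numpy as np
import torch

def zero_shot_embed(model: torch.nn.Module, X: np.ndarray) -> np.ndarray:
    """Return the frozen model's embedding of each cell (no fine-tuning)."""
    model.eval()                          # fix dropout/batch-norm behavior
    with torch.no_grad():                 # guarantee no gradient updates
        emb = model.encode(torch.as_tensor(X, dtype=torch.float32))
    return emb.cpu().numpy()
```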
    HUIZINGA: Well, let’s talk about methodology on this particular study. You focused on assessing existing models in zero-shot learning for single-cell biology. How did you go about evaluating these models? 
    LU: Yes, so let’s dive deeper into how zero-shot evaluations are conducted, okay? The premise is this: if these models are truly learning foundational biological insights, then in the model’s internal representation, cells that are biologically similar should be close together, while cells that are biologically distinct should be further apart. And that is exactly what we tested in our study. We compared two popular single-cell foundation models, and importantly, we compared these models against older, reliable tools that biologists have used for exploratory analyses. These include simpler machine learning methods like scVI, statistical algorithms like Harmony, and even basic data pre-processing steps, like filtering your data down to a more robust subset of genes. So basically, we tested embeddings from our two single-cell foundation models against these baselines in a variety of settings, and we tested the hypothesis that biologically similar cells should remain similar across these distinct methods and datasets.
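    As a rough illustration of the comparison Alex describes, one could score each candidate embedding by whether cells of the same annotated type sit close together, for example with scikit-learn’s silhouette score. The paper’s actual benchmark metrics may differ, so treat this as a sketch only; the method names in the commented usage are hypothetical.

```python
# Score several embeddings by how well known cell-type labels separate.
# silhouette_score is one plausible proxy, not necessarily the paper's metric.
import numpy as np
from sklearn.metrics import silhouette_score

def rank_embeddings(embeddings: dict, labels: np.ndarray) -> dict:
    """Map each method name to a cell-type separation score."""
    return {name: silhouette_score(emb, labels) for name, emb in embeddings.items()}

# Hypothetical usage, comparing a foundation model against baselines:
# scores = rank_embeddings(
#     {"foundation_model": fm_emb, "scvi": scvi_emb, "hvg_filtering": hvg_emb},
#     cell_type_labels,
# )
```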
    HUIZINGA: Well, and as you did the testing, you obviously were aiming toward research findings, which is my favorite part of a research paper, so tell us what you did find and what you feel the most important takeaways of this paper are.
    LU: Absolutely. So in a nutshell, we found that these two newly proposed single-cell foundation models substantially underperformed compared to older methods. To contextualize why that is so surprising: there is a lot of hype around these methods. So basically, yeah, it’s a very surprising result, given how hyped these models are and how people were already adopting them. But our results caution that they shouldn’t really be adopted for these purposes yet.
    HUIZINGA: Yeah, so this is serious real-world impact here in terms of if models are being adopted and adapted in these applications, how reliable are they, et cetera? So given that, who would you say benefits most from what you’ve discovered in this paper and why? 
    LU: Okay, so two ways, right? First, I think this has immediate implications for the way we do discovery in biology. As I’ve discussed, these experiments are used in cases with practical impact: drug discovery applications, investigations into basic biology. But let’s also talk about the impact for methodologists, the people trying to improve these single-cell foundation models. I think at their base, these are really exciting proposals. If you look at what some of the prior, less sophisticated methods couldn’t do, they tended to be more bespoke. The excitement of single-cell foundation models is that you have a general-purpose model that can be used for everything, and while they’re not living up to that purpose just yet, I think it’s important that we continue to bank on that vision. In terms of our contribution in that area: single-cell foundation models are a really new proposal, so it makes sense that we may not know how to fully evaluate them just yet. You can view our work as a step toward more rigorous evaluation of these models. Now that we’ve done this experiment, methodologists know to use it as a signal for how to improve the models and whether they’re going in the right direction. And in fact, you are seeing more and more papers adopt zero-shot evaluations since we put out our paper. So this essentially helps future computer scientists working on single-cell foundation models know how to train better models.
    HUIZINGA: That said, Alex, finally, what are the outstanding challenges that you identified for zero-shot learning research in biology, and what foundation might this paper lay for future research agendas in the field? 
    LU: Yeah, absolutely. So now that we’ve shown single-cell foundation models don’t necessarily perform well, I think the natural question on everyone’s mind is: how do we actually train single-cell foundation models that live up to that vision, that can help us discover new biology? In the short term, we’re actively investigating many hypotheses in this area. For example, my colleagues Lorin Crawford and Ava Amini, who were co-authors on the paper, recently put out a pre-print on how training data composition impacts model performance. One of their surprising findings was that many of the datasets people use to train single-cell foundation models are highly redundant, to the point that you can sample just a tiny fraction of the data and get basically the same performance. And you can look forward to many other explorations in this area as we continue to develop this research. But also, zooming out to the bigger picture, I think one major takeaway from this paper is that developing AI methods for biology requires thought about the context of use. I mean, this is obvious for any AI method, but I think people have gotten too used to taking methods that work for natural vision or natural language, maybe in the consumer domain, and extrapolating them to biology, expecting that they will work in the same way. For example, one reason why zero-shot evaluation was not routine practice for single-cell foundation models prior to our work (I mean, we were the first to fully establish it as a practice for the field) was, I think, that people working in AI for biology have been looking to the more mainstream AI domains to shape their work. With single-cell foundation models, many of these models are adapted from large language models in natural language processing, recycling the exact same architectures, the exact same code, basically just recycling the practices of that field. And when you look at practices in the more mainstream domains, zero-shot evaluation is definitely explored, but it’s more of a niche rather than being considered central to model understanding. Again, because biology is different from mainstream language processing (it’s a scientific discipline), zero-shot evaluation becomes much more important, and you have no choice but to use these models zero-shot. So in other words, I think we need to think carefully about what makes training a model for biology different from training a model, for example, for consumer purposes.

    HUIZINGA: Alex Lu, thanks for joining us today, and to our listeners, thanks for tuning in. If you want to read this paper, you can find a link at aka.ms/Abstracts, or you can read it on the Genome Biology website. See you next time on Abstracts!
  • How I achieved my dream career with Unity Learn Pathways

    Unity is on a mission to empower more learners to become real-time 3D creators. We made our online learning platform, Unity Learn, free for all in 2020 to give everyone the opportunity to access high-quality education and achieve their dream careers. Unity Learn Pathways are intensive online courses designed to take you from complete beginner to career-ready. To demonstrate this better than we ever could, we recently sat down with Pathways graduate Robbie Coey to chat about his experience starting his own studio and working toward releasing his first game after finishing the Junior Programmer Pathway.

    Robbie Coey (he/him) is a founder and director of HoloMoon Games, an indie game studio based in Belfast, Northern Ireland. Robbie, K Andrews, and Michael McArdle founded HoloMoon in September 2021 to create weird and wonderful narrative experiences. They’re currently working on Guitar Zeros, a narrative deck-builder about bringing a band from humble beginnings to the world stage.

    Keep reading to learn more about Robbie and the integral role Unity Learn has played in getting his career and studio off the ground.

    How did it feel when you completed the Junior Programmer Pathway?
    In a word, brilliant. It felt as though I finally had something that I was passionate about and could focus on. I could spend hours on various tutorials and building my own projects and it would feel as if no time had passed at all. The only other time I have that feeling is when playing games.

    How long did it take you to complete the course?
    It took around a month, and I completed it alongside part-time work. I advise anyone embarking on it to work little and often – you'll burn out if you try to do too much in a short time. It's easier to build the habit if you're able to work consistently over a long period, and if that means only doing half an hour every other day, that's what you do. Find a schedule that works for you and avoid burnout at all costs.

    What was your career before you started learning Unity?
    I had worked briefly in film and television in a range of roles on documentaries, dramas, and animations. I’d explored film and television a lot, and while there were things I enjoyed about working in that industry, I always felt a little out of place.

    What career challenges did you face?
    I felt as though I lacked hard skills. I was good at communicating and being a team player, but whether it was due to lack of confidence or something else, I always felt uncomfortable putting myself forward to do more technical work.

    What made you want to switch careers?
    The COVID-19 pandemic had dried up all opportunities in the industry I worked in previously. It was a move almost out of desperation. To even my older siblings, games were an idle pastime at most. Unity Pathways and the support from Unity really showed me how much of an opportunity there was in the games industry. I have met people and done things that I would never have dreamed of prior, as well as found a huge passion that continues to drive me to push myself further.

    Has the career change had an impact on your salary?
    It's a lot more stable, for one. I came from a work-for-hire industry, and immediately before learning Unity I was unemployed due to the pandemic. Having mostly done short contract work in the past, learning Unity has allowed me a lot more financial freedom and opportunities to increase my salary.

    Can you tell us about your new career?
    I’m now a director in my own studio. I was very lucky to receive funding from Northern Ireland Screen after completing my Unity Pathways course. With that initial investment I, along with two others, was able to start our own studio, HoloMoon Games. We want to make games that reflect our culture and make people laugh. We're currently working towards our first official release, Guitar Zeros, which will hopefully be on Steam sometime next year. And I’ve recently become a BAFTA Connect member, which I never thought I could achieve. I keep wondering when they're going to realize and kick me out.

    Can you tell us how you secured funding for your project?
    We applied for an incubator scheme with Northern Ireland Screen called MiniGame, which involved written and in-person pitching. My advice for anyone looking to do the same would be to get comfortable talking about your game idea in front of others. One thing that helps is to ask three questions: Can I make this? Should I make this? And do I want to make this? If I answer yes to all three, then I know I can comfortably pitch that idea. In general, I'd recommend keeping an eye out for funding opportunities, especially those provided by local organizations in your area. Without the support from Northern Ireland Screen, I wouldn't be in the position I am now.

    Why do you think learning real-time 3D and Unity is so important?
    For me, it unlocked so many ways in which I could express myself, and also allowed me to understand the digital world we live in. After I started learning Unity, I began to see it and real-time 3D technologies everywhere, from film and TV to the automotive industry. Real-time 3D is really becoming ubiquitous, and understanding how it works means you won't get left behind.

    Has learning Unity had an impact on your life and career?
    It has completely changed the trajectory of my life and career, given me skills I never thought I had, and ignited a passion for games and programming that I didn't know was there. It made it possible for me to access a new industry which, to even my parents’ generation, seemed esoteric and mysterious. My life and career are infinitely more interesting since I completed the Unity Pathway.

    What are your plans for the future?
    I would like to continue running my own company, improve my craft, make interesting games that I can be proud of, and really try to push the storytelling of the medium forward. Games are unique in the way that they tell stories, and I feel there is still a lot to learn about what kind of experiences they are able to create.

    What advice would you give to anyone learning Unity?
    Rome wasn't built in a day. You won't learn everything about Unity overnight, but you also don't need to learn everything about Unity to get creative. In fact, I find setting yourself limitations can oftentimes make you more creative. You will get the knowledge you want with hard work and dedication, and there's no point rushing it. Also, network – find peers that are at your level and find others that are where you want to be in the future. There's a great community of people out there and they all want to lift each other up.

    You mentioned finding your peers. How did you go about doing this? Do you have any advice for anyone trying to find a community?
    The best source for me to find other game developers was the Northern Ireland Game Developer Network. I would keep an eye out for local developer networks or more specific communities related to what you would like to do. Discord is a great meeting point for many of these groups, including Unity's own Discord server. Partaking in game jams is also a great way of meeting people. Itch.io has a terrific list of upcoming jams that suit all sorts of game developers, most of which will have some kind of forum to meet others who are participating.

    With Pathways, you can build all of the skills you need to master Unity and join the real-time 3D industry, just like Robbie. These free online courses cover everything from downloading and installing the Unity Editor to coding, VR development, lighting and shading, and more.

    Junior Programmer is designed for anyone interested in learning to code or obtaining an entry-level Unity role. In this free, fully virtual, self-guided course, you will learn about fundamental programming concepts such as variables, functions, and basic logic through two practical projects. You’ll also join a community of Unity learners enrolled in your Pathway where you can share your progress, get help, and interact with Unity's learning team.

    Follow HoloMoon Games’ progress on Twitter and don’t forget to wishlist Guitar Zeros on Steam. Did learning Unity help you achieve your dream career? If you’d be interested in sharing your story, please complete the following form for the chance to be featured: Share your Unity journey.
  • Inside the story that enraged OpenAI

    In 2019, Karen Hao, a senior reporter with MIT Technology Review, pitched me on writing a story about a then little-known company, OpenAI. It was her biggest assignment to date. Hao’s feat of reporting took a series of twists and turns over the coming months, eventually revealing how OpenAI’s ambition had taken it far afield from its original mission. The finished story was a prescient look at a company at a tipping point—or already past it. And OpenAI was not happy with the result. Hao’s new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is an in-depth exploration of the company that kick-started the AI arms race, and what that race means for all of us. This excerpt is the origin story of that reporting. — Niall Firth, executive editor, MIT Technology Review

    I arrived at OpenAI’s offices on August 7, 2019. Greg Brockman, then thirty‑one, OpenAI’s chief technology officer and soon‑to‑be company president, came down the staircase to greet me. He shook my hand with a tentative smile. “We’ve never given someone so much access before,” he said.

    At the time, few people beyond the insular world of AI research knew about OpenAI. But as a reporter at MIT Technology Review covering the ever‑expanding boundaries of artificial intelligence, I had been following its movements closely.

    Until that year, OpenAI had been something of a stepchild in AI research. It had an outlandish premise that AGI could be attained within a decade, when most non‑OpenAI experts doubted it could be attained at all. To much of the field, it had an obscene amount of funding despite little direction and spent too much of the money on marketing what other researchers frequently snubbed as unoriginal research. It was, for some, also an object of envy. As a nonprofit, it had said that it had no intention to chase commercialization. It was a rare intellectual playground without strings attached, a haven for fringe ideas.

    But in the six months leading up to my visit, the rapid slew of changes at OpenAI signaled a major shift in its trajectory. First was its confusing decision to withhold GPT‑2 and brag about it. Then its announcement that Sam Altman, who had mysteriously departed his influential perch at YC, would step in as OpenAI’s CEO with the creation of its new “capped‑profit” structure. I had already made my arrangements to visit the office when it subsequently revealed its deal with Microsoft, which gave the tech giant priority for commercializing OpenAI’s technologies and locked it into exclusively using Azure, Microsoft’s cloud‑computing platform.

    Each new announcement garnered fresh controversy, intense speculation, and growing attention, beginning to reach beyond the confines of the tech industry. As my colleagues and I covered the company’s progression, it was hard to grasp the full weight of what was happening. What was clear was that OpenAI was beginning to exert meaningful sway over AI research and the way policymakers were learning to understand the technology. The lab’s decision to revamp itself into a partially for‑profit business would have ripple effects across its spheres of influence in industry and government. 

    So late one night, at the urging of my editor, I dashed off an email to Jack Clark, OpenAI’s policy director, whom I had spoken with before: I would be in town for two weeks, and it felt like the right moment in OpenAI’s history. Could I interest them in a profile? Clark passed me on to the communications head, who came back with an answer. OpenAI was indeed ready to reintroduce itself to the public. I would have three days to interview leadership and embed inside the company.

    Brockman and I settled into a glass meeting room with the company’s chief scientist, Ilya Sutskever. Sitting side by side at a long conference table, they each played their part. Brockman, the coder and doer, leaned forward, a little on edge, ready to make a good impression; Sutskever, the researcher and philosopher, settled back into his chair, relaxed and aloof.

    I opened my laptop and scrolled through my questions. OpenAI’s mission is to ensure beneficial AGI, I began. Why spend billions of dollars on this problem and not something else?

    Brockman nodded vigorously. He was used to defending OpenAI’s position. “The reason that we care so much about AGI and that we think it’s important to build is because we think it can help solve complex problems that are just out of reach of humans,” he said.

    He offered two examples that had become dogma among AGI believers. Climate change. “It’s a super‑complex problem. How are you even supposed to solve it?” And medicine. “Look at how important health care is in the US as a political issue these days. How do we actually get better treatment for people at lower cost?”

    On the latter, he began to recount the story of a friend who had a rare disorder and had recently gone through the exhausting rigmarole of bouncing between different specialists to figure out his problem. AGI would bring together all of these specialties. People like his friend would no longer spend so much energy and frustration on getting an answer.

    Why did we need AGI to do that instead of AI? I asked.

    This was an important distinction. The term AGI, once relegated to an unpopular section of the technology dictionary, had only recently begun to gain more mainstream usage—in large part because of OpenAI.

    And as OpenAI defined it, AGI referred to a theoretical pinnacle of AI research: a piece of software that had just as much sophistication, agility, and creativity as the human mind to match or exceed its performance on most (economically valuable) tasks. The operative word was theoretical. Since the beginning of earnest research into AI several decades earlier, debates had raged about whether silicon chips encoding everything in their binary ones and zeros could ever simulate brains and the other biological processes that give rise to what we consider intelligence. There had yet to be definitive evidence that this was possible, which didn’t even touch on the normative discussion of whether people should develop it.

    AI, on the other hand, was the term du jour for both the version of the technology currently available and the version that researchers could reasonably attain in the near future through refining existing capabilities. Those capabilities—rooted in powerful pattern matching known as machine learning—had already demonstrated exciting applications in climate change mitigation and health care.

    Sutskever chimed in. When it comes to solving complex global challenges, “fundamentally the bottleneck is that you have a large number of humans and they don’t communicate as fast, they don’t work as fast, they have a lot of incentive problems.” AGI would be different, he said. “Imagine it’s a large computer network of intelligent computers—they’re all doing their medical diagnostics; they all communicate results between them extremely fast.”

    This seemed to me like another way of saying that the goal of AGI was to replace humans. Is that what Sutskever meant? I asked Brockman a few hours later, once it was just the two of us.

    “No,” Brockman replied quickly. “This is one thing that’s really important. What is the purpose of technology? Why is it here? Why do we build it? We’ve been building technologies for thousands of years now, right? We do it because they serve people. AGI is not going to be different—not the way that we envision it, not the way we want to build it, not the way we think it should play out.”

    That said, he acknowledged a few minutes later, technology had always destroyed some jobs and created others. OpenAI’s challenge would be to build AGI that gave everyone “economic freedom” while allowing them to continue to “live meaningful lives” in that new reality. If it succeeded, it would decouple the need to work from survival.

    “I actually think that’s a very beautiful thing,” he said.

    In our meeting with Sutskever, Brockman reminded me of the bigger picture. “What we view our role as is not actually being a determiner of whether AGI gets built,” he said. This was a favorite argument in Silicon Valley—the inevitability card. If we don’t do it, somebody else will. “The trajectory is already there,” he emphasized, “but the thing we can influence is the initial conditions under which it’s born.

    “What is OpenAI?” he continued. “What is our purpose? What are we really trying to do? Our mission is to ensure that AGI benefits all of humanity. And the way we want to do that is: Build AGI and distribute its economic benefits.”

    His tone was matter‑of‑fact and final, as if he’d put my questions to rest. And yet we had somehow just arrived back to exactly where we’d started.

    Our conversation continued on in circles until we ran out the clock after forty‑five minutes. I tried with little success to get more concrete details on what exactly they were trying to build—which by nature, they explained, they couldn’t know—and why, then, if they couldn’t know, they were so confident it would be beneficial. At one point, I tried a different approach, asking them instead to give examples of the downsides of the technology. This was a pillar of OpenAI’s founding mythology: The lab had to build good AGI before someone else built a bad one.

    Brockman attempted an answer: deepfakes. “It’s not clear the world is better through its applications,” he said.

    I offered my own example: Speaking of climate change, what about the environmental impact of AI itself? A recent study from the University of Massachusetts Amherst had placed alarming numbers on the huge and growing carbon emissions of training larger and larger AI models.

    That was “undeniable,” Sutskever said, but the payoff was worth it because AGI would, “among other things, counteract the environmental cost specifically.” He stopped short of offering examples.

    “It is unquestioningly very highly desirable that data centers be as green as possible,” he added.

    “No question,” Brockman quipped.

    “Data centers are the biggest consumer of energy, of electricity,” Sutskever continued, seeming intent now on proving that he was aware of and cared about this issue.

    “It’s 2 percent globally,” I offered.

    “Isn’t Bitcoin like 1 percent?” Brockman said.

    “Wow!” Sutskever said, in a sudden burst of emotion that felt, at this point, forty minutes into the conversation, somewhat performative.

    Sutskever would later sit down with New York Times reporter Cade Metz for his book Genius Makers, which recounts a narrative history of AI development, and say without a hint of satire, “I think that it’s fairly likely that it will not take too long of a time for the entire surface of the Earth to become covered with data centers and power stations.” There would be “a tsunami of computing . . . almost like a natural phenomenon.” AGI—and thus the data centers needed to support them—would be “too useful to not exist.”

    I tried again to press for more details. “What you’re saying is OpenAI is making a huge gamble that you will successfully reach beneficial AGI to counteract global warming before the act of doing so might exacerbate it.”

    “I wouldn’t go too far down that rabbit hole,” Brockman hastily cut in. “The way we think about it is the following: We’re on a ramp of AI progress. This is bigger than OpenAI, right? It’s the field. And I think society is actually getting benefit from it.”

    “The day we announced the deal,” he said, referring to Microsoft’s new $1 billion investment, “Microsoft’s market cap went up by $10 billion. People believe there is a positive ROI even just on short‑term technology.”

    OpenAI’s strategy was thus quite simple, he explained: to keep up with that progress. “That’s the standard we should really hold ourselves to. We should continue to make that progress. That’s how we know we’re on track.”

    Later that day, Brockman reiterated that the central challenge of working at OpenAI was that no one really knew what AGI would look like. But as researchers and engineers, their task was to keep pushing forward, to unearth the shape of the technology step by step.

    He spoke like Michelangelo, as though AGI already existed within the marble he was carving. All he had to do was chip away until it revealed itself.

    There had been a change of plans. I had been scheduled to eat lunch with employees in the cafeteria, but something now required me to be outside the office. Brockman would be my chaperone. We headed two dozen steps across the street to an open‑air café that had become a favorite haunt for employees.

    This would become a recurring theme throughout my visit: floors I couldn’t see, meetings I couldn’t attend, researchers stealing furtive glances at the communications head every few sentences to check that they hadn’t violated some disclosure policy. I would later learn that after my visit, Jack Clark would issue an unusually stern warning to employees on Slack not to speak with me beyond sanctioned conversations. The security guard would receive a photo of me with instructions to be on the lookout if I appeared unapproved on the premises. It was odd behavior in general, made odder by OpenAI’s commitment to transparency. What, I began to wonder, were they hiding, if everything was supposed to be beneficial research eventually made available to the public?

    At lunch and through the following days, I probed deeper into why Brockman had cofounded OpenAI. He was a teen when he first grew obsessed with the idea that it could be possible to re‑create human intelligence. It was a famous paper from British mathematician Alan Turing that sparked his fascination. The name of its first section, “The Imitation Game,” inspired the title of the 2014 Hollywood dramatization of Turing’s life; the paper itself opens with the provocation, “Can machines think?” It goes on to define what would become known as the Turing test: a measure of the progression of machine intelligence based on whether a machine can talk to a human without giving away that it is a machine. It was a classic origin story among people working in AI. Enchanted, Brockman coded up a Turing test game and put it online, garnering some 1,500 hits. It made him feel amazing. “I just realized that was the kind of thing I wanted to pursue,” he said.

    In 2015, as AI saw great leaps of advancement, Brockman says that he realized it was time to return to his original ambition and joined OpenAI as a cofounder. He wrote down in his notes that he would do anything to bring AGI to fruition, even if it meant being a janitor. When he got married four years later, he held a civil ceremony at OpenAI’s office in front of a custom flower wall emblazoned with the shape of the lab’s hexagonal logo. Sutskever officiated. The robotic hand they used for research stood in the aisle bearing the rings, like a sentinel from a post-apocalyptic future.

    “Fundamentally, I want to work on AGI for the rest of my life,” Brockman told me.

    What motivated him? I asked Brockman.

    What are the chances that a transformative technology could arrive in your lifetime? he countered.

    He was confident that he—and the team he assembled—was uniquely positioned to usher in that transformation. “What I’m really drawn to are problems that will not play out in the same way if I don’t participate,” he said.

    Brockman did not in fact just want to be a janitor. He wanted to lead AGI. And he bristled with the anxious energy of someone who wanted history‑defining recognition. He wanted people to one day tell his story with the same mixture of awe and admiration that he used when recounting the stories of the great innovators who came before him.

    A year before we spoke, he had told a group of young tech entrepreneurs at an exclusive retreat in Lake Tahoe with a twinge of self‑pity that chief technology officers were never known. Name a famous CTO, he challenged the crowd. They struggled to do so. He had proved his point.

    In 2022, he became OpenAI’s president.

    During our conversations, Brockman insisted to me that none of OpenAI’s structural changes signaled a shift in its core mission. In fact, the capped‑profit structure and the new crop of funders enhanced it. “We managed to get these mission‑aligned investors who are willing to prioritize mission over returns. That’s a crazy thing,” he said.

    OpenAI now had the long‑term resources it needed to scale its models and stay ahead of the competition. This was imperative, Brockman stressed. Failing to do so was the real threat that could undermine OpenAI’s mission. If the lab fell behind, it had no hope of bending the arc of history toward its vision of beneficial AGI. Only later would I realize the full implications of this assertion. It was this fundamental assumption—the need to be first or perish—that set in motion all of OpenAI’s actions and their far‑reaching consequences. It put a ticking clock on each of OpenAI’s research advancements, based not on the timescale of careful deliberation but on the relentless pace required to cross the finish line before anyone else. It justified OpenAI’s consumption of an unfathomable amount of resources: both compute, regardless of its impact on the environment; and data, the amassing of which couldn’t be slowed by getting consent or abiding by regulations.

    Brockman pointed once again to the $10 billion jump in Microsoft’s market cap. “What that really reflects is AI is delivering real value to the real world today,” he said. That value was currently being concentrated in an already wealthy corporation, he acknowledged, which was why OpenAI had the second part of its mission: to redistribute the benefits of AGI to everyone.

    Was there a historical example of a technology’s benefits that had been successfully distributed? I asked.

    “Well, I actually think that—it’s actually interesting to look even at the internet as an example,” he said, fumbling a bit before settling on his answer. “There’s problems, too, right?” he said as a caveat. “Anytime you have something super transformative, it’s not going to be easy to figure out how to maximize positive, minimize negative.

    “Fire is another example,” he added. “It’s also got some real drawbacks to it. So we have to figure out how to keep it under control and have shared standards.

    “Cars are a good example,” he followed. “Lots of people have cars, benefit a lot of people. They have some drawbacks to them as well. They have some externalities that are not necessarily good for the world,” he finished hesitantly.

    “I guess I just view—the thing we want for AGI is not that different from the positive sides of the internet, positive sides of cars, positive sides of fire. The implementation is very different, though, because it’s a very different type of technology.”

    His eyes lit up with a new idea. “Just look at utilities. Power companies, electric companies are very centralized entities that provide low‑cost, high‑quality things that meaningfully improve people’s lives.”

    It was a nice analogy. But Brockman seemed once again unclear about how OpenAI would turn itself into a utility. Perhaps through distributing universal basic income, he wondered aloud, perhaps through something else.

    He returned to the one thing he knew for certain. OpenAI was committed to redistributing AGI’s benefits and giving everyone economic freedom. “We actually really mean that,” he said.

    “The way that we think about it is: Technology so far has been something that does rise all the boats, but it has this real concentrating effect,” he said. “AGI could be more extreme. What if all value gets locked up in one place? That is the trajectory we’re on as a society. And we’ve never seen that extreme of it. I don’t think that’s a good world. That’s not a world that I want to sign up for. That’s not a world that I want to help build.”

    In February 2020, I published my profile for MIT Technology Review, drawing on my observations from my time in the office, nearly three dozen interviews, and a handful of internal documents. “There is a misalignment between what the company publicly espouses and how it operates behind closed doors,” I wrote. “Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration.”

    Hours later, Elon Musk replied to the story with three tweets in rapid succession:

    “OpenAI should be more open imo”

    “I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high,” he said, referring to Dario Amodei, the director of research.

    “All orgs developing advanced AI should be regulated, including Tesla”

    Afterward, Altman sent OpenAI employees an email.

    “I wanted to share some thoughts about the Tech Review article,” he wrote. “While definitely not catastrophic, it was clearly bad.”

    It was “a fair criticism,” he said, that the piece had identified a disconnect between the perception of OpenAI and its reality. This could be smoothed over not with changes to its internal practices but with some tuning of OpenAI’s public messaging. “It’s good, not bad, that we have figured out how to be flexible and adapt,” he said, including restructuring the organization and heightening confidentiality, “in order to achieve our mission as we learn more.” OpenAI should ignore my article for now and, in a few weeks’ time, start underscoring its continued commitment to its original principles under the new transformation. “This may also be a good opportunity to talk about the API as a strategy for openness and benefit sharing,” he added, referring to an application programming interface for delivering OpenAI’s models.

    “The most serious issue of all, to me,” he continued, “is that someone leaked our internal documents.” They had already opened an investigation and would keep the company updated. He would also suggest that Amodei and Musk meet to work out Musk’s criticism, which was “mild relative to other things he’s said” but still “a bad thing to do.” For the avoidance of any doubt, Amodei’s work and AI safety were critical to the mission, he wrote. “I think we should at some point in the future find a way to publicly defend our team.”

    OpenAI wouldn’t speak to me again for three years.

    From the book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, by Karen Hao, to be published on May 20, 2025, by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2025 by Karen Hao.
    #inside #story #that #enraged #openai
    Inside the story that enraged OpenAI
    In 2019, Karen Hao, a senior reporter with MIT Technology Review, pitched me on writing a story about a then little-known company, OpenAI. It was her biggest assignment to date. Hao’s feat of reporting took a series of twists and turns over the coming months, eventually revealing how OpenAI’s ambition had taken it far afield from its original mission. The finished story was a prescient look at a company at a tipping point—or already past it. And OpenAI was not happy with the result. Hao’s new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is an in-depth exploration of the company that kick-started the AI arms race, and what that race means for all of us. This excerpt is the origin story of that reporting. — Niall Firth, executive editor, MIT Technology Review I arrived at OpenAI’s offices on August 7, 2019. Greg Brockman, then thirty‑one, OpenAI’s chief technology officer and soon‑to‑be company president, came down the staircase to greet me. He shook my hand with a tentative smile. “We’ve never given someone so much access before,” he said. At the time, few people beyond the insular world of AI research knew about OpenAI. But as a reporter at MIT Technology Review covering the ever‑expanding boundaries of artificial intelligence, I had been following its movements closely. Until that year, OpenAI had been something of a stepchild in AI research. It had an outlandish premise that AGI could be attained within a decade, when most non‑OpenAI experts doubted it could be attained at all. To much of the field, it had an obscene amount of funding despite little direction and spent too much of the money on marketing what other researchers frequently snubbed as unoriginal research. It was, for some, also an object of envy. As a nonprofit, it had said that it had no intention to chase commercialization. It was a rare intellectual playground without strings attached, a haven for fringe ideas. But in the six months leading up to my visit, the rapid slew of changes at OpenAI signaled a major shift in its trajectory. First was its confusing decision to withhold GPT‑2 and brag about it. Then its announcement that Sam Altman, who had mysteriously departed his influential perch at YC, would step in as OpenAI’s CEO with the creation of its new “capped‑profit” structure. I had already made my arrangements to visit the office when it subsequently revealed its deal with Microsoft, which gave the tech giant priority for commercializing OpenAI’s technologies and locked it into exclusively using Azure, Microsoft’s cloud‑computing platform. Each new announcement garnered fresh controversy, intense speculation, and growing attention, beginning to reach beyond the confines of the tech industry. As my colleagues and I covered the company’s progression, it was hard to grasp the full weight of what was happening. What was clear was that OpenAI was beginning to exert meaningful sway over AI research and the way policymakers were learning to understand the technology. The lab’s decision to revamp itself into a partially for‑profit business would have ripple effects across its spheres of influence in industry and government.  So late one night, with the urging of my editor, I dashed off an email to Jack Clark, OpenAI’s policy director, whom I had spoken with before: I would be in town for two weeks, and it felt like the right moment in OpenAI’s history. Could I interest them in a profile? Clark passed me on to the communications head, who came back with an answer. 
OpenAI was indeed ready to reintroduce itself to the public. I would have three days to interview leadership and embed inside the company. Brockman and I settled into a glass meeting room with the company’s chief scientist, Ilya Sutskever. Sitting side by side at a long conference table, they each played their part. Brockman, the coder and doer, leaned forward, a little on edge, ready to make a good impression; Sutskever, the researcher and philosopher, settled back into his chair, relaxed and aloof. I opened my laptop and scrolled through my questions. OpenAI’s mission is to ensure beneficial AGI, I began. Why spend billions of dollars on this problem and not something else? Brockman nodded vigorously. He was used to defending OpenAI’s position. “The reason that we care so much about AGI and that we think it’s important to build is because we think it can help solve complex problems that are just out of reach of humans,” he said. He offered two examples that had become dogma among AGI believers. Climate change. “It’s a super‑complex problem. How are you even supposed to solve it?” And medicine. “Look at how important health care is in the US as a political issue these days. How do we actually get better treatment for people at lower cost?” On the latter, he began to recount the story of a friend who had a rare disorder and had recently gone through the exhausting rigmarole of bouncing between different specialists to figure out his problem. AGI would bring together all of these specialties. People like his friend would no longer spend so much energy and frustration on getting an answer. Why did we need AGI to do that instead of AI? I asked. This was an important distinction. The term AGI, once relegated to an unpopular section of the technology dictionary, had only recently begun to gain more mainstream usage—in large part because of OpenAI. And as OpenAI defined it, AGI referred to a theoretical pinnacle of AI research: a piece of software that had just as much sophistication, agility, and creativity as the human mind to match or exceed its performance on mosttasks. The operative word was theoretical. Since the beginning of earnest research into AI several decades earlier, debates had raged about whether silicon chips encoding everything in their binary ones and zeros could ever simulate brains and the other biological processes that give rise to what we consider intelligence. There had yet to be definitive evidence that this was possible, which didn’t even touch on the normative discussion of whether people should develop it. AI, on the other hand, was the term du jour for both the version of the technology currently available and the version that researchers could reasonably attain in the near future through refining existing capabilities. Those capabilities—rooted in powerful pattern matching known as machine learning—had already demonstrated exciting applications in climate change mitigation and health care. Sutskever chimed in. When it comes to solving complex global challenges, “fundamentally the bottleneck is that you have a large number of humans and they don’t communicate as fast, they don’t work as fast, they have a lot of incentive problems.” AGI would be different, he said. “Imagine it’s a large computer network of intelligent computers—they’re all doing their medical diagnostics; they all communicate results between them extremely fast.” This seemed to me like another way of saying that the goal of AGI was to replace humans. Is that what Sutskever meant? 
I asked Brockman a few hours later, once it was just the two of us. “No,” Brockman replied quickly. “This is one thing that’s really important. What is the purpose of technology? Why is it here? Why do we build it? We’ve been building technologies for thousands of years now, right? We do it because they serve people. AGI is not going to be different—not the way that we envision it, not the way we want to build it, not the way we think it should play out.” That said, he acknowledged a few minutes later, technology had always destroyed some jobs and created others. OpenAI’s challenge would be to build AGI that gave everyone “economic freedom” while allowing them to continue to “live meaningful lives” in that new reality. If it succeeded, it would decouple the need to work from survival. “I actually think that’s a very beautiful thing,” he said. In our meeting with Sutskever, Brockman reminded me of the bigger picture. “What we view our role as is not actually being a determiner of whether AGI gets built,” he said. This was a favorite argument in Silicon Valley—the inevitability card. If we don’t do it, somebody else will. “The trajectory is already there,” he emphasized, “but the thing we can influence is the initial conditions under which it’s born. “What is OpenAI?” he continued. “What is our purpose? What are we really trying to do? Our mission is to ensure that AGI benefits all of humanity. And the way we want to do that is: Build AGI and distribute its economic benefits.” His tone was matter‑of‑fact and final, as if he’d put my questions to rest. And yet we had somehow just arrived back to exactly where we’d started. Our conversation continued on in circles until we ran out the clock after forty‑five minutes. I tried with little success to get more concrete details on what exactly they were trying to build—which by nature, they explained, they couldn’t know—and why, then, if they couldn’t know, they were so confident it would be beneficial. At one point, I tried a different approach, asking them instead to give examples of the downsides of the technology. This was a pillar of OpenAI’s founding mythology: The lab had to build good AGI before someone else built a bad one. Brockman attempted an answer: deepfakes. “It’s not clear the world is better through its applications,” he said. I offered my own example: Speaking of climate change, what about the environmental impact of AI itself? A recent study from the University of Massachusetts Amherst had placed alarming numbers on the huge and growing carbon emissions of training larger and larger AI models. That was “undeniable,” Sutskever said, but the payoff was worth it because AGI would, “among other things, counteract the environmental cost specifically.” He stopped short of offering examples. “It is unquestioningly very highly desirable that data centers be as green as possible,” he added. “No question,” Brockman quipped. “Data centers are the biggest consumer of energy, of electricity,” Sutskever continued, seeming intent now on proving that he was aware of and cared about this issue. “It’s 2 percent globally,” I offered. “Isn’t Bitcoin like 1 percent?” Brockman said. “Wow!” Sutskever said, in a sudden burst of emotion that felt, at this point, forty minutes into the conversation, somewhat performative. 
Sutskever would later sit down with New York Times reporter Cade Metz for his book Genius Makers, which recounts a narrative history of AI development, and say without a hint of satire, “I think that it’s fairly likely that it will not take too long of a time for the entire surface of the Earth to become covered with data centers and power stations.” There would be “a tsunami of computing . . . almost like a natural phenomenon.” AGI—and thus the data centers needed to support them—would be “too useful to not exist.” I tried again to press for more details. “What you’re saying is OpenAI is making a huge gamble that you will successfully reach beneficial AGI to counteract global warming before the act of doing so might exacerbate it.” “I wouldn’t go too far down that rabbit hole,” Brockman hastily cut in. “The way we think about it is the following: We’re on a ramp of AI progress. This is bigger than OpenAI, right? It’s the field. And I think society is actually getting benefit from it.” “The day we announced the deal,” he said, referring to Microsoft’s new billion investment, “Microsoft’s market cap went up by billion. People believe there is a positive ROI even just on short‑term technology.” OpenAI’s strategy was thus quite simple, he explained: to keep up with that progress. “That’s the standard we should really hold ourselves to. We should continue to make that progress. That’s how we know we’re on track.” Later that day, Brockman reiterated that the central challenge of working at OpenAI was that no one really knew what AGI would look like. But as researchers and engineers, their task was to keep pushing forward, to unearth the shape of the technology step by step. He spoke like Michelangelo, as though AGI already existed within the marble he was carving. All he had to do was chip away until it revealed itself. There had been a change of plans. I had been scheduled to eat lunch with employees in the cafeteria, but something now required me to be outside the office. Brockman would be my chaperone. We headed two dozen steps across the street to an open‑air café that had become a favorite haunt for employees. This would become a recurring theme throughout my visit: floors I couldn’t see, meetings I couldn’t attend, researchers stealing furtive glances at the communications head every few sentences to check that they hadn’t violated some disclosure policy. I would later learn that after my visit, Jack Clark would issue an unusually stern warning to employees on Slack not to speak with me beyond sanctioned conversations. The security guard would receive a photo of me with instructions to be on the lookout if I appeared unapproved on the premises. It was odd behavior in general, made odder by OpenAI’s commitment to transparency. What, I began to wonder, were they hiding, if everything was supposed to be beneficial research eventually made available to the public? At lunch and through the following days, I probed deeper into why Brockman had cofounded OpenAI. He was a teen when he first grew obsessed with the idea that it could be possible to re‑create human intelligence. It was a famous paper from British mathematician Alan Turing that sparked his fascination. 
The name of its first section, “The Imitation Game,” which inspired the title of the 2014 Hollywood dramatization of Turing’s life, begins with the opening provocation, “Can machines think?” The paper goes on to define what would become known as the Turing test: a measure of the progression of machine intelligence based on whether a machine can talk to a human without giving away that it is a machine. It was a classic origin story among people working in AI. Enchanted, Brockman coded up a Turing test game and put it online, garnering some 1,500 hits. It made him feel amazing. “I just realized that was the kind of thing I wanted to pursue,” he said. In 2015, as AI saw great leaps of advancement, Brockman says that he realized it was time to return to his original ambition and joined OpenAI as a cofounder. He wrote down in his notes that he would do anything to bring AGI to fruition, even if it meant being a janitor. When he got married four years later, he held a civil ceremony at OpenAI’s office in front of a custom flower wall emblazoned with the shape of the lab’s hexagonal logo. Sutskever officiated. The robotic hand they used for research stood in the aisle bearing the rings, like a sentinel from a post-apocalyptic future. “Fundamentally, I want to work on AGI for the rest of my life,” Brockman told me. What motivated him? I asked Brockman. What are the chances that a transformative technology could arrive in your lifetime? he countered. He was confident that he—and the team he assembled—was uniquely positioned to usher in that transformation. “What I’m really drawn to are problems that will not play out in the same way if I don’t participate,” he said. Brockman did not in fact just want to be a janitor. He wanted to lead AGI. And he bristled with the anxious energy of someone who wanted history‑defining recognition. He wanted people to one day tell his story with the same mixture of awe and admiration that he used to recount the ones of the great innovators who came before him. A year before we spoke, he had told a group of young tech entrepreneurs at an exclusive retreat in Lake Tahoe with a twinge of self‑pity that chief technology officers were never known. Name a famous CTO, he challenged the crowd. They struggled to do so. He had proved his point. In 2022, he became OpenAI’s president. During our conversations, Brockman insisted to me that none of OpenAI’s structural changes signaled a shift in its core mission. In fact, the capped profit and the new crop of funders enhanced it. “We managed to get these mission‑aligned investors who are willing to prioritize mission over returns. That’s a crazy thing,” he said. OpenAI now had the long‑term resources it needed to scale its models and stay ahead of the competition. This was imperative, Brockman stressed. Failing to do so was the real threat that could undermine OpenAI’s mission. If the lab fell behind, it had no hope of bending the arc of history toward its vision of beneficial AGI. Only later would I realize the full implications of this assertion. It was this fundamental assumption—the need to be first or perish—that set in motion all of OpenAI’s actions and their far‑reaching consequences. It put a ticking clock on each of OpenAI’s research advancements, based not on the timescale of careful deliberation but on the relentless pace required to cross the finish line before anyone else. 
It justified OpenAI’s consumption of an unfathomable amount of resources: both compute, regardless of its impact on the environment; and data, the amassing of which couldn’t be slowed by getting consent or abiding by regulations. Brockman pointed once again to the billion jump in Microsoft’s market cap. “What that really reflects is AI is delivering real value to the real world today,” he said. That value was currently being concentrated in an already wealthy corporation, he acknowledged, which was why OpenAI had the second part of its mission: to redistribute the benefits of AGI to everyone. Was there a historical example of a technology’s benefits that had been successfully distributed? I asked. “Well, I actually think that—it’s actually interesting to look even at the internet as an example,” he said, fumbling a bit before settling on his answer. “There’s problems, too, right?” he said as a caveat. “Anytime you have something super transformative, it’s not going to be easy to figure out how to maximize positive, minimize negative. “Fire is another example,” he added. “It’s also got some real drawbacks to it. So we have to figure out how to keep it under control and have shared standards. “Cars are a good example,” he followed. “Lots of people have cars, benefit a lot of people. They have some drawbacks to them as well. They have some externalities that are not necessarily good for the world,” he finished hesitantly. “I guess I just view—the thing we want for AGI is not that different from the positive sides of the internet, positive sides of cars, positive sides of fire. The implementation is very different, though, because it’s a very different type of technology.” His eyes lit up with a new idea. “Just look at utilities. Power companies, electric companies are very centralized entities that provide low‑cost, high‑quality things that meaningfully improve people’s lives.” It was a nice analogy. But Brockman seemed once again unclear about how OpenAI would turn itself into a utility. Perhaps through distributing universal basic income, he wondered aloud, perhaps through something else. He returned to the one thing he knew for certain. OpenAI was committed to redistributing AGI’s benefits and giving everyone economic freedom. “We actually really mean that,” he said. “The way that we think about it is: Technology so far has been something that does rise all the boats, but it has this real concentrating effect,” he said. “AGI could be more extreme. What if all value gets locked up in one place? That is the trajectory we’re on as a society. And we’ve never seen that extreme of it. I don’t think that’s a good world. That’s not a world that I want to sign up for. That’s not a world that I want to help build.” In February 2020, I published my profile for MIT Technology Review, drawing on my observations from my time in the office, nearly three dozen interviews, and a handful of internal documents. “There is a misalignment between what the company publicly espouses and how it operates behind closed doors,” I wrote. “Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration.” Hours later, Elon Musk replied to the story with three tweets in rapid succession: “OpenAI should be more open imo” “I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high,” he said, referring to Dario Amodei, the director of research. 
“All orgs developing advanced AI should be regulated, including Tesla” Afterward, Altman sent OpenAI employees an email. “I wanted to share some thoughts about the Tech Review article,” he wrote. “While definitely not catastrophic, it was clearly bad.” It was “a fair criticism,” he said that the piece had identified a disconnect between the perception of OpenAI and its reality. This could be smoothed over not with changes to its internal practices but some tuning of OpenAI’s public messaging. “It’s good, not bad, that we have figured out how to be flexible and adapt,” he said, including restructuring the organization and heightening confidentiality, “in order to achieve our mission as we learn more.” OpenAI should ignore my article for now and, in a few weeks’ time, start underscoring its continued commitment to its original principles under the new transformation. “This may also be a good opportunity to talk about the API as a strategy for openness and benefit sharing,” he added, referring to an application programming interface for delivering OpenAI’s models. “The most serious issue of all, to me,” he continued, “is that someone leaked our internal documents.” They had already opened an investigation and would keep the company updated. He would also suggest that Amodei and Musk meet to work out Musk’s criticism, which was “mild relative to other things he’s said” but still “a bad thing to do.” For the avoidance of any doubt, Amodei’s work and AI safety were critical to the mission, he wrote. “I think we should at some point in the future find a way to publicly defend our team.” OpenAI wouldn’t speak to me again for three years. From the book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, by Karen Hao, to be published on May 20, 2025, by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2025 by Karen Hao. #inside #story #that #enraged #openai
    WWW.TECHNOLOGYREVIEW.COM
    Inside the story that enraged OpenAI
    In 2019, Karen Hao, a senior reporter with MIT Technology Review, pitched me on writing a story about a then little-known company, OpenAI. It was her biggest assignment to date. Hao’s feat of reporting took a series of twists and turns over the coming months, eventually revealing how OpenAI’s ambition had taken it far afield from its original mission. The finished story was a prescient look at a company at a tipping point—or already past it. And OpenAI was not happy with the result. Hao’s new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is an in-depth exploration of the company that kick-started the AI arms race, and what that race means for all of us. This excerpt is the origin story of that reporting. — Niall Firth, executive editor, MIT Technology Review I arrived at OpenAI’s offices on August 7, 2019. Greg Brockman, then thirty‑one, OpenAI’s chief technology officer and soon‑to‑be company president, came down the staircase to greet me. He shook my hand with a tentative smile. “We’ve never given someone so much access before,” he said. At the time, few people beyond the insular world of AI research knew about OpenAI. But as a reporter at MIT Technology Review covering the ever‑expanding boundaries of artificial intelligence, I had been following its movements closely. Until that year, OpenAI had been something of a stepchild in AI research. It had an outlandish premise that AGI could be attained within a decade, when most non‑OpenAI experts doubted it could be attained at all. To much of the field, it had an obscene amount of funding despite little direction and spent too much of the money on marketing what other researchers frequently snubbed as unoriginal research. It was, for some, also an object of envy. As a nonprofit, it had said that it had no intention to chase commercialization. It was a rare intellectual playground without strings attached, a haven for fringe ideas. But in the six months leading up to my visit, the rapid slew of changes at OpenAI signaled a major shift in its trajectory. First was its confusing decision to withhold GPT‑2 and brag about it. Then its announcement that Sam Altman, who had mysteriously departed his influential perch at YC, would step in as OpenAI’s CEO with the creation of its new “capped‑profit” structure. I had already made my arrangements to visit the office when it subsequently revealed its deal with Microsoft, which gave the tech giant priority for commercializing OpenAI’s technologies and locked it into exclusively using Azure, Microsoft’s cloud‑computing platform. Each new announcement garnered fresh controversy, intense speculation, and growing attention, beginning to reach beyond the confines of the tech industry. As my colleagues and I covered the company’s progression, it was hard to grasp the full weight of what was happening. What was clear was that OpenAI was beginning to exert meaningful sway over AI research and the way policymakers were learning to understand the technology. The lab’s decision to revamp itself into a partially for‑profit business would have ripple effects across its spheres of influence in industry and government.  So late one night, with the urging of my editor, I dashed off an email to Jack Clark, OpenAI’s policy director, whom I had spoken with before: I would be in town for two weeks, and it felt like the right moment in OpenAI’s history. Could I interest them in a profile? Clark passed me on to the communications head, who came back with an answer. 
OpenAI was indeed ready to reintroduce itself to the public. I would have three days to interview leadership and embed inside the company.

Brockman and I settled into a glass meeting room with the company’s chief scientist, Ilya Sutskever. Sitting side by side at a long conference table, they each played their part. Brockman, the coder and doer, leaned forward, a little on edge, ready to make a good impression; Sutskever, the researcher and philosopher, settled back into his chair, relaxed and aloof.

I opened my laptop and scrolled through my questions. OpenAI’s mission is to ensure beneficial AGI, I began. Why spend billions of dollars on this problem and not something else?

Brockman nodded vigorously. He was used to defending OpenAI’s position. “The reason that we care so much about AGI and that we think it’s important to build is because we think it can help solve complex problems that are just out of reach of humans,” he said.

He offered two examples that had become dogma among AGI believers. Climate change. “It’s a super-complex problem. How are you even supposed to solve it?” And medicine. “Look at how important health care is in the US as a political issue these days. How do we actually get better treatment for people at lower cost?”

On the latter, he began to recount the story of a friend who had a rare disorder and had recently gone through the exhausting rigmarole of bouncing between different specialists to figure out his problem. AGI would bring together all of these specialties. People like his friend would no longer spend so much energy and frustration on getting an answer.

Why did we need AGI to do that instead of AI? I asked.

This was an important distinction. The term AGI, once relegated to an unpopular section of the technology dictionary, had only recently begun to gain more mainstream usage—in large part because of OpenAI. And as OpenAI defined it, AGI referred to a theoretical pinnacle of AI research: a piece of software with just as much sophistication, agility, and creativity as the human mind, able to match or exceed its performance on most (economically valuable) tasks. The operative word was theoretical.

Since the beginning of earnest research into AI several decades earlier, debates had raged about whether silicon chips encoding everything in their binary ones and zeros could ever simulate brains and the other biological processes that give rise to what we consider intelligence. There had yet to be definitive evidence that this was possible, which didn’t even touch on the normative discussion of whether people should develop it.

AI, on the other hand, was the term du jour for both the version of the technology currently available and the version that researchers could reasonably attain in the near future through refining existing capabilities. Those capabilities—rooted in powerful pattern matching known as machine learning—had already demonstrated exciting applications in climate change mitigation and health care.

Sutskever chimed in. When it comes to solving complex global challenges, “fundamentally the bottleneck is that you have a large number of humans and they don’t communicate as fast, they don’t work as fast, they have a lot of incentive problems.” AGI would be different, he said. “Imagine it’s a large computer network of intelligent computers—they’re all doing their medical diagnostics; they all communicate results between them extremely fast.” This seemed to me like another way of saying that the goal of AGI was to replace humans.
Is that what Sutskever meant? I asked Brockman a few hours later, once it was just the two of us.

“No,” Brockman replied quickly. “This is one thing that’s really important. What is the purpose of technology? Why is it here? Why do we build it? We’ve been building technologies for thousands of years now, right? We do it because they serve people. AGI is not going to be different—not the way that we envision it, not the way we want to build it, not the way we think it should play out.”

That said, he acknowledged a few minutes later, technology had always destroyed some jobs and created others. OpenAI’s challenge would be to build AGI that gave everyone “economic freedom” while allowing them to continue to “live meaningful lives” in that new reality. If it succeeded, it would decouple the need to work from survival. “I actually think that’s a very beautiful thing,” he said.

In our meeting with Sutskever, Brockman reminded me of the bigger picture. “What we view our role as is not actually being a determiner of whether AGI gets built,” he said. This was a favorite argument in Silicon Valley—the inevitability card. If we don’t do it, somebody else will. “The trajectory is already there,” he emphasized, “but the thing we can influence is the initial conditions under which it’s born.

“What is OpenAI?” he continued. “What is our purpose? What are we really trying to do? Our mission is to ensure that AGI benefits all of humanity. And the way we want to do that is: Build AGI and distribute its economic benefits.”

His tone was matter-of-fact and final, as if he’d put my questions to rest. And yet we had somehow just arrived back exactly where we’d started. Our conversation continued in circles until we ran out the clock after forty-five minutes.

I tried with little success to get more concrete details on what exactly they were trying to build—which by nature, they explained, they couldn’t know—and why, then, if they couldn’t know, they were so confident it would be beneficial.

At one point, I tried a different approach, asking them instead to give examples of the downsides of the technology. This was a pillar of OpenAI’s founding mythology: The lab had to build good AGI before someone else built a bad one. Brockman attempted an answer: deepfakes. “It’s not clear the world is better through its applications,” he said.

I offered my own example: Speaking of climate change, what about the environmental impact of AI itself? A recent study from the University of Massachusetts Amherst had placed alarming numbers on the huge and growing carbon emissions of training larger and larger AI models.

That was “undeniable,” Sutskever said, but the payoff was worth it because AGI would, “among other things, counteract the environmental cost specifically.” He stopped short of offering examples. “It is unquestioningly very highly desirable that data centers be as green as possible,” he added.

“No question,” Brockman quipped.

“Data centers are the biggest consumer of energy, of electricity,” Sutskever continued, seeming intent now on proving that he was aware of and cared about this issue.

“It’s 2 percent globally,” I offered.

“Isn’t Bitcoin like 1 percent?” Brockman said.

“Wow!” Sutskever said, in a sudden burst of emotion that felt, at this point, forty minutes into the conversation, somewhat performative.
Sutskever would later sit down with New York Times reporter Cade Metz for his book Genius Makers, which recounts a narrative history of AI development, and say without a hint of satire, “I think that it’s fairly likely that it will not take too long of a time for the entire surface of the Earth to become covered with data centers and power stations.” There would be “a tsunami of computing . . . almost like a natural phenomenon.” AGI—and thus the data centers needed to support it—would be “too useful to not exist.”

I tried again to press for more details. “What you’re saying is OpenAI is making a huge gamble that you will successfully reach beneficial AGI to counteract global warming before the act of doing so might exacerbate it.”

“I wouldn’t go too far down that rabbit hole,” Brockman hastily cut in. “The way we think about it is the following: We’re on a ramp of AI progress. This is bigger than OpenAI, right? It’s the field. And I think society is actually getting benefit from it.”

“The day we announced the deal,” he said, referring to Microsoft’s new $1 billion investment, “Microsoft’s market cap went up by $10 billion. People believe there is a positive ROI even just on short-term technology.” OpenAI’s strategy was thus quite simple, he explained: to keep up with that progress. “That’s the standard we should really hold ourselves to. We should continue to make that progress. That’s how we know we’re on track.”

Later that day, Brockman reiterated that the central challenge of working at OpenAI was that no one really knew what AGI would look like. But as researchers and engineers, their task was to keep pushing forward, to unearth the shape of the technology step by step. He spoke like Michelangelo, as though AGI already existed within the marble he was carving. All he had to do was chip away until it revealed itself.

There had been a change of plans. I had been scheduled to eat lunch with employees in the cafeteria, but something now required me to be outside the office. Brockman would be my chaperone. We headed two dozen steps across the street to an open-air café that had become a favorite haunt for employees.

This would become a recurring theme throughout my visit: floors I couldn’t see, meetings I couldn’t attend, researchers stealing furtive glances at the communications head every few sentences to check that they hadn’t violated some disclosure policy. I would later learn that after my visit, Jack Clark would issue an unusually stern warning to employees on Slack not to speak with me beyond sanctioned conversations. The security guard would receive a photo of me with instructions to be on the lookout if I appeared unapproved on the premises.

It was odd behavior in general, made odder by OpenAI’s commitment to transparency. What, I began to wonder, were they hiding, if everything was supposed to be beneficial research eventually made available to the public?

At lunch and through the following days, I probed deeper into why Brockman had cofounded OpenAI. He was a teen when he first grew obsessed with the idea that it could be possible to re-create human intelligence. It was a famous paper from British mathematician Alan Turing that sparked his fascination.
The paper’s first section, “The Imitation Game,” inspired the title of the 2014 Hollywood dramatization of Turing’s life; it opens with the provocation, “Can machines think?” The paper goes on to define what would become known as the Turing test: a measure of the progression of machine intelligence based on whether a machine can talk to a human without giving away that it is a machine. It was a classic origin story among people working in AI.

Enchanted, Brockman coded up a Turing test game and put it online, garnering some 1,500 hits. It made him feel amazing. “I just realized that was the kind of thing I wanted to pursue,” he said.

In 2015, as AI saw great leaps of advancement, Brockman says he realized it was time to return to his original ambition and joined OpenAI as a cofounder. He wrote down in his notes that he would do anything to bring AGI to fruition, even if it meant being a janitor. When he got married four years later, he held a civil ceremony at OpenAI’s office in front of a custom flower wall emblazoned with the shape of the lab’s hexagonal logo. Sutskever officiated. The robotic hand they used for research stood in the aisle bearing the rings, like a sentinel from a post-apocalyptic future.

“Fundamentally, I want to work on AGI for the rest of my life,” Brockman told me.

What motivated him? I asked. What are the chances that a transformative technology could arrive in your lifetime? he countered. He was confident that he—and the team he assembled—was uniquely positioned to usher in that transformation. “What I’m really drawn to are problems that will not play out in the same way if I don’t participate,” he said.

Brockman did not in fact just want to be a janitor. He wanted to lead AGI. And he bristled with the anxious energy of someone who wanted history-defining recognition. He wanted people to one day tell his story with the same mixture of awe and admiration that he used to recount those of the great innovators who came before him. A year before we spoke, he had told a group of young tech entrepreneurs at an exclusive retreat in Lake Tahoe, with a twinge of self-pity, that chief technology officers were never known. Name a famous CTO, he challenged the crowd. They struggled to do so. He had proved his point. In 2022, he became OpenAI’s president.

During our conversations, Brockman insisted to me that none of OpenAI’s structural changes signaled a shift in its core mission. In fact, the capped profit and the new crop of funders enhanced it. “We managed to get these mission-aligned investors who are willing to prioritize mission over returns. That’s a crazy thing,” he said.

OpenAI now had the long-term resources it needed to scale its models and stay ahead of the competition. This was imperative, Brockman stressed. Failing to do so was the real threat that could undermine OpenAI’s mission. If the lab fell behind, it had no hope of bending the arc of history toward its vision of beneficial AGI.

Only later would I realize the full implications of this assertion. It was this fundamental assumption—the need to be first or perish—that set in motion all of OpenAI’s actions and their far-reaching consequences. It put a ticking clock on each of OpenAI’s research advancements, based not on the timescale of careful deliberation but on the relentless pace required to cross the finish line before anyone else.
It justified OpenAI’s consumption of an unfathomable amount of resources: both compute, regardless of its impact on the environment; and data, the amassing of which couldn’t be slowed by getting consent or abiding by regulations.

Brockman pointed once again to the $10 billion jump in Microsoft’s market cap. “What that really reflects is AI is delivering real value to the real world today,” he said. That value was currently being concentrated in an already wealthy corporation, he acknowledged, which was why OpenAI had the second part of its mission: to redistribute the benefits of AGI to everyone.

Was there a historical example of a technology’s benefits that had been successfully distributed? I asked.

“Well, I actually think that—it’s actually interesting to look even at the internet as an example,” he said, fumbling a bit before settling on his answer. “There’s problems, too, right?” he said as a caveat. “Anytime you have something super transformative, it’s not going to be easy to figure out how to maximize positive, minimize negative.

“Fire is another example,” he added. “It’s also got some real drawbacks to it. So we have to figure out how to keep it under control and have shared standards.

“Cars are a good example,” he followed. “Lots of people have cars, benefit a lot of people. They have some drawbacks to them as well. They have some externalities that are not necessarily good for the world,” he finished hesitantly. “I guess I just view—the thing we want for AGI is not that different from the positive sides of the internet, positive sides of cars, positive sides of fire. The implementation is very different, though, because it’s a very different type of technology.”

His eyes lit up with a new idea. “Just look at utilities. Power companies, electric companies are very centralized entities that provide low-cost, high-quality things that meaningfully improve people’s lives.”

It was a nice analogy. But Brockman seemed once again unclear about how OpenAI would turn itself into a utility. Perhaps through distributing universal basic income, he wondered aloud, perhaps through something else.

He returned to the one thing he knew for certain. OpenAI was committed to redistributing AGI’s benefits and giving everyone economic freedom. “We actually really mean that,” he said.

“The way that we think about it is: Technology so far has been something that does rise all the boats, but it has this real concentrating effect,” he said. “AGI could be more extreme. What if all value gets locked up in one place? That is the trajectory we’re on as a society. And we’ve never seen that extreme of it. I don’t think that’s a good world. That’s not a world that I want to sign up for. That’s not a world that I want to help build.”

In February 2020, I published my profile for MIT Technology Review, drawing on my observations from my time in the office, nearly three dozen interviews, and a handful of internal documents. “There is a misalignment between what the company publicly espouses and how it operates behind closed doors,” I wrote. “Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration.”

Hours later, Elon Musk replied to the story with three tweets in rapid succession:

“OpenAI should be more open imo”

“I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high,” he said, referring to Dario Amodei, the director of research.
“All orgs developing advanced AI should be regulated, including Tesla”

Afterward, Altman sent OpenAI employees an email. “I wanted to share some thoughts about the Tech Review article,” he wrote. “While definitely not catastrophic, it was clearly bad.” It was “a fair criticism,” he said, that the piece had identified a disconnect between the perception of OpenAI and its reality. This could be smoothed over not with changes to its internal practices but with some tuning of OpenAI’s public messaging.

“It’s good, not bad, that we have figured out how to be flexible and adapt,” he said, including restructuring the organization and heightening confidentiality, “in order to achieve our mission as we learn more.” OpenAI should ignore my article for now and, in a few weeks’ time, start underscoring its continued commitment to its original principles under the new transformation. “This may also be a good opportunity to talk about the API as a strategy for openness and benefit sharing,” he added, referring to an application programming interface for delivering OpenAI’s models.

“The most serious issue of all, to me,” he continued, “is that someone leaked our internal documents.” They had already opened an investigation and would keep the company updated. He would also suggest that Amodei and Musk meet to work out Musk’s criticism, which was “mild relative to other things he’s said” but still “a bad thing to do.” For the avoidance of any doubt, Amodei’s work and AI safety were critical to the mission, he wrote. “I think we should at some point in the future find a way to publicly defend our team (but not give the press the public fight they’d love right now).”

OpenAI wouldn’t speak to me again for three years.

From the book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, by Karen Hao, to be published on May 20, 2025, by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2025 by Karen Hao.
  • Yasmeen Lari is awarded the 2025 Lisbon Triennale Millennium Achievement Award

    Submitted by WA Contents

    Pakistan Architecture News - May 19, 2025 - 04:22  

Pakistani architect Yasmeen Lari has been awarded the 2025 Achievement Award by the Lisbon Architecture Triennale. Her more than 60-year career is a potent example of how design may be used to uplift people's quality of life, combat inequality, prevent ecological collapse, and create a more equitable future.

"Architecture has to change if it wants to remain relevant. Our work is not something only for the rich; poor communities all over the world need good design, because it is of even greater value to them," said Yasmeen Lari. "That’s why I think my job is to rebuild lives: to create ‘poverty escape-ladders’ by losing control of the process through co-building and co-creation. We do this by sharing knowledge and mobilising villages – one village at a time."

Image courtesy of Al Jazeera website

Lari, who was born in Pakistan in 1941, attended Oxford to study architecture. She became the first female architect in Pakistan when she returned home after graduation and opened her own practice. She retired from her architectural practice in 2000 after a prosperous career in Karachi, then concentrated on the Heritage Foundation of Pakistan, which is committed to conserving and advancing regional, sustainable, and vernacular architecture. Lari broadened her practice once again following a disastrous earthquake in 2005, adopting what she calls a bottom-up "humanistic humanitarian action" and redefining the function of modern architecture, particularly in regions severely impacted by socioeconomic and climate-related issues.

Women's Centre in Darya Khan, Pakistan, in 2011

Following her "four zeros" philosophy—zero carbon, zero waste, zero donations, and zero poverty—Yasmeen Lari promised to assist in the construction of over a million homes in response to the devastating floods that hit Pakistan in 2022. Lari's subsequent career is genuinely remarkable because she accomplished this goal without outside financial aid, philanthropy, or patrons.

Yasmeen Lari and Nayeem Shah look at the roof of the Disaster Risk Reduction Centre. Image courtesy of Heritage Foundation of Pakistan

At the Triennale 2025 opening days on October 02–04, Yasmeen Lari will give a public talk and accept the Lisbon Triennale Millennium bcp Awards trophy, which was created by Álvaro Siza from leftover marble from Estremoz, Portugal.

The jury of the Début and Achievement Awards comprises architects Inês Lobo, Lígia Nobre, Samia Henni, Sandi Hilal, and Yuma Shinohara. The three Lisbon Triennale Millennium bcp Awards – Achievement, Début and Universities – aim to promote groundbreaking world architecture by recognising those who make it, from transdisciplinary research developed in an academic setting to emerging talent and established practices.

The top image in the article © Yasmeen Lari © Heritage Foundation of Pakistan.

> via Lisbon Triennale
  • The End of the Universe May Arrive Surprisingly Soon

May 16, 2025 | 3 min read

The Universe May End Sooner Than Scientists Had Expected

A new study suggests the universe's end could occur much sooner than previously thought. But don't worry, that ultimate cosmic conclusion would still be in the unimaginably distant future.

By Sharmila Kuthunur & SPACE.com

An illustration of the remnants of an ancient, dead planetary system orbiting a white dwarf star. New calculations suggest that white dwarfs and other long-lived celestial objects are decaying faster than previously realized. NASA/ZUMA Press Wire Service/ZUMAPRESS.com/Alamy Live News

As the story of our cosmos moves forward, stars will slowly burn out, planets will freeze over, and black holes will devour light itself. Eventually, on timescales so long humanity will never witness them, the universe will fade into darkness.

But if you've ever wondered exactly when it all might end, you may find it oddly comforting, or perhaps a bit unsettling, to know that someone has actually done the math. As it turns out, this cosmic finale might arrive sooner than scientists previously thought.

Don't worry, though — "sooner" still means a mind-bending 10 to the power of 78 years from now. That is a 1 followed by 78 zeros, which is unimaginably far into the future. In cosmic terms, however, this estimate is a dramatic advance on the previous prediction of 10 to the power of 1,100 years, made by Falcke and his team in 2023.

"The ultimate end of the universe comes much sooner than expected, but fortunately it still takes a very long time," Heino Falcke, a theoretical astrophysicist at Radboud University in the Netherlands, who led the new study, said in a statement.

The team's new calculations focus on predicting when the universe's most enduring celestial objects — the glowing remnants of dead stars such as white dwarfs and neutron stars — will ultimately fade away.

This gradual decay is driven by Hawking radiation, a concept proposed by physicist Stephen Hawking in the 1970s. The theory suggests a peculiar process occurs near the event horizon — the point of no return — around black holes. Normally, virtual pairs of particles are constantly created by what are known as quantum fluctuations. These particle pairs pop in and out of existence, rapidly annihilating each other. Near a black hole's event horizon, however, the intense gravitational field prevents such annihilation. Instead, the pair is separated: one particle, carrying negative energy, falls into the black hole, reducing its mass, while the other escapes into space.

Over incredibly long timescales, Hawking's theory suggests this process causes the black hole to slowly evaporate, eventually vanishing.

Falcke and his team extended this idea beyond black holes to other compact objects with strong gravitational fields. They found that the "evaporation time" of other objects emitting Hawking radiation depends solely on their densities. This is because unlike black hole evaporation, which is driven by the presence of an event horizon, this more general form of decay is driven by the curvature of spacetime itself.

The team's new findings, described in a paper published Monday (May 12) in the Journal of Cosmology and Astroparticle Physics, offer a new estimate for how long it takes white dwarf stars to dissolve into nothingness.

Surprisingly, the team found that neutron stars and stellar-mass black holes decay over the same timescale: about 10 to the power of 67 years. This was unexpected, as black holes have stronger gravitational fields and were thought to evaporate faster. "But black holes have no surface," Michael Wondrak, a postdoctoral researcher of astrophysics at Radboud University and a co-author of the study, said in the statement. "They reabsorb some of their own radiation, which inhibits the process."

If even white dwarf stars and black holes eventually dissolve into nothing, what does that say about us? Perhaps it suggests meaning isn't found in permanence, but in the fleeting brilliance of asking questions like these — while the stars are still shining.

Copyright 2025 Space.com, a Future company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
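A quick numerical aside on the 10-to-the-power-of-67 figure quoted above: for an ordinary black hole, the textbook Hawking evaporation time is t = 5120 π G² M³ / (ħ c⁴), and evaluating it for one solar mass lands on the same timescale. This is a minimal back-of-the-envelope check in Python, not the team's actual calculation (which generalizes the effect to objects without event horizons); all constants are standard physical values.

    # Sanity check: textbook Hawking evaporation time for a Schwarzschild
    # black hole, t = 5120 * pi * G^2 * M^3 / (hbar * c^4). Not the paper's
    # own computation, just a consistency check of the quoted timescale.
    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    HBAR = 1.055e-34   # reduced Planck constant, J s
    C = 2.998e8        # speed of light, m/s
    M_SUN = 1.989e30   # solar mass, kg
    YEAR = 3.156e7     # seconds per year

    def hawking_evaporation_time_years(mass_kg: float) -> float:
        """Evaporation time of a Schwarzschild black hole, in years."""
        t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
        return t_seconds / YEAR

    print(f"1 solar mass: ~{hawking_evaporation_time_years(M_SUN):.1e} years")
    # Prints ~2.1e67 years, consistent with the ~10^67-year timescale above.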
• The Universe Will Fizzle Out Way Sooner Than Expected, Scientists Say

By Passant Rabie
Published May 13, 2025

    An illustration of a decaying neutron star.
    Daniëlle Futselaar/artsource.nl

    Around 13.8 billion years ago, a tiny but dense fireball gave birth to the vast cosmos that holds trillions of galaxies, including the Milky Way.
    But our universe is dying, and it’s happening at a much faster rate than scientists previously estimated, according to new research.
    The last stellar remnants of the universe will cease to exist in 10 to the power of 78 years (that’s a one with 78 zeros), according to a new estimate from a group of scientists at Radboud University in the Netherlands.
    That’s still a long way off from when the universe powers down for good—but it’s a far earlier fade-to-black moment than the previous 10 to the power of 1,100 years estimate.
    The new paper, published Monday in the Journal of Cosmology and Astroparticle Physics, is a follow-up to a previous study by the same group of researchers.
    In their 2023 study, black hole expert Heino Falcke, quantum physicist Michael Wondrak, and mathematician Walter van Suijlekom suggested that other objects, like neutron stars, could evaporate in much the same way as black holes.
    The original theory, developed by Stephen Hawking in 1974, proposed that radiation escaping near a black hole’s event horizon would gradually erode its mass over time.
    The phenomenon, known as Hawking radiation, remains one of the most surprising ideas about black holes to this day.
    Building on the theory of Hawking radiation, the researchers behind the new paper suggest that the process of erosion depends on the density of the object.
    They found that neutron stars and stellar black holes take roughly the same amount of time to decay, an estimated 10 to the power of 67 years.
Although black holes have a stronger gravitational field that should cause them to evaporate faster, they also have no surface, so they end up reabsorbing some of their own radiation, “which inhibits the process,” Wondrak said in a statement.
    The researchers then calculated how long various celestial bodies would take to evaporate via Hawking-like radiation, leading them to the abbreviated cosmic expiration date. “So the ultimate end of the universe comes much sooner than expected, but fortunately it still takes a very long time,” Falcke said.
    The study also estimates that it would take the Moon around 10 to the power of 90 years to evaporate based on Hawking radiation.
    “By asking these kinds of questions and looking at extreme cases, we want to better understand the theory, and perhaps one day, we unravel the mystery of Hawking radiation,” van Suijlekom said.
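The density dependence described above also lets you sanity-check the Moon figure. Assuming the evaporation time scales with density as t ∝ ρ^(-3/2), which is the scaling one recovers from the black-hole case (t ∝ M³ with M ∝ ρ^(-1/2)) and which coverage of the team's work describes, rescaling the ~10^67-year black-hole timescale by the density ratio reproduces the Moon's ~10^90-year estimate to within an order of magnitude. A hedged sketch, with the scaling law and the anchor value taken as assumptions from the reporting above:

    # Order-of-magnitude check of the Moon's ~10^90-year figure, assuming
    # the Hawking-like decay time scales with density as t ~ rho^(-3/2).
    # Anchor: stellar-mass black holes decay in ~1e67 years (per the article).
    import math

    G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
    C = 2.998e8       # speed of light, m/s
    M_SUN = 1.989e30  # solar mass, kg

    # Mean density of a solar-mass black hole inside its Schwarzschild radius
    r_s = 2 * G * M_SUN / C**2                     # ~2.95 km
    rho_bh = M_SUN / ((4 / 3) * math.pi * r_s**3)  # ~1.8e19 kg/m^3

    rho_moon = 3344.0   # mean density of the Moon, kg/m^3
    t_bh_years = 1e67   # anchor timescale from the article

    # t ~ rho^(-3/2)  =>  t_moon = t_bh * (rho_bh / rho_moon)^(3/2)
    t_moon_years = t_bh_years * (rho_bh / rho_moon) ** 1.5
    print(f"Moon evaporation time: ~{t_moon_years:.0e} years")
    # Prints ~4e90 years, the same ballpark as the article's ~1e90 figure.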
Source: https://gizmodo.com/the-universe-will-fizzle-out-way-sooner-than-expected-scientists-say-2000601411